Configuring the CUDA backend
Brian2CUDA tries to detect your CUDA installation and by default uses the GPU with the highest compute capability. To query information about available GPUs, nvidia-smi (installed alongside the NVIDIA display drivers) is used. For older driver versions (< 510.39.01), nvidia-smi doesn't support querying the GPU compute capabilities and some additional setup might be required.
This section explains how you can manually set which CUDA installation or GPU to use, how to cross-compile Brian2CUDA projects on systems without GPU access (e.g. during remote development) and what to do when the compute capability detection fails.
If you installed the CUDA toolkit in a non-standard location or if you have a system with multiple CUDA installations, you may need to manually specify the installation directory.
Brian2CUDA tries to detect your CUDA installation in the following order:
1. Use the Brian2CUDA preference devices.cuda_standalone.cuda_backend.cuda_path
2. Use the CUDA_PATH environment variable
3. Use the location of the nvcc binary to detect the CUDA installation folder (needs nvcc in your PATH)
4. Use the standard location /usr/local/cuda
5. Use the standard location /opt/cuda
If you set the path manually via the 1. or 2. option, specify the parent path of the nvcc binary (e.g. /usr/local/cuda if nvcc is in /usr/local/cuda/bin/nvcc).
Depending on your system configuration, you may also need to set the LD_LIBRARY_PATH environment variable to <cuda_path>/lib64.
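For example, to point Brian2CUDA to a toolkit installed under /opt/cuda-11.6 (a hypothetical path, substitute your own installation directory), you could set the preference at the top of your script:

    from brian2 import prefs
    import brian2cuda  # makes the cuda_standalone device and its preferences available

    # Parent directory of the nvcc binary (here: /opt/cuda-11.6/bin/nvcc)
    prefs.devices.cuda_standalone.cuda_backend.cuda_path = '/opt/cuda-11.6'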
On systems with multiple GPUs, Brian2CUDA uses the first GPU with the highest compute capability, as returned by nvidia-smi. If you want to manually choose a GPU, you can do so via the Brian2CUDA preference devices.cuda_standalone.cuda_backend.gpu_id.
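For example, to run the simulation on the GPU with ID 1 (an example value; use the ID reported by nvidia-smi for the GPU you want):

    from brian2 import prefs
    import brian2cuda  # makes the cuda_standalone device and its preferences available

    # Select the GPU with ID 1 instead of the automatically chosen one
    prefs.devices.cuda_standalone.cuda_backend.gpu_id = 1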
You can limit the visibility of NVIDIA GPUs by setting the environment variable
CUDA_VISIBLE_DEVICES. This also limits the GPUs visible to Brian2CUDA. That means
Brian2CUDA’s devices.cuda_standalone.cuda_backend.gpu_id preference will index only
those GPUs that are visible. E.g. if you run a Brian2CUDA script with prefs.devices.cuda_standalone.cuda_backend.gpu_id = 0 on a system with two GPUs via CUDA_VISIBLE_DEVICES=1 python your-brian2cuda-script.py, the simulation would run on the second GPU (with ID 1, visible to Brian2CUDA as ID 0).
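As a concrete sketch of the example above (your-brian2cuda-script.py is a placeholder name):

    # your-brian2cuda-script.py
    # Launched as: CUDA_VISIBLE_DEVICES=1 python your-brian2cuda-script.py
    from brian2 import prefs, set_device
    import brian2cuda  # makes the cuda_standalone device available

    set_device('cuda_standalone')
    # gpu_id indexes only the *visible* GPUs, so ID 0 here refers to the
    # physical GPU with ID 1
    prefs.devices.cuda_standalone.cuda_backend.gpu_id = 0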
On systems without a GPU, Brian2CUDA will by default fail before code generation (since it tries to detect the compute capability of the available GPUs and the CUDA runtime version). If you want to compile your code on a system without GPUs, you can disable automatic GPU detection and manually set the compute capability and runtime version. To do so, set the following preferences:
    prefs.devices.cuda_standalone.cuda_backend.detect_gpus = False
    prefs.devices.cuda_standalone.cuda_backend.compute_capability = <compute_capability>
    prefs.devices.cuda_standalone.cuda_backend.runtime_version = <runtime_version>
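A minimal sketch, assuming the target machine has a GPU with compute capability 7.5 (e.g. an RTX 2080 Ti) and CUDA runtime 11.2 (example values, substitute those of your target system):

    from brian2 import prefs
    import brian2cuda  # makes the cuda_standalone device and its preferences available

    # The build machine has no GPU, so skip automatic detection
    prefs.devices.cuda_standalone.cuda_backend.detect_gpus = False
    # Properties of the *target* machine's GPU and CUDA runtime
    prefs.devices.cuda_standalone.cuda_backend.compute_capability = 7.5
    prefs.devices.cuda_standalone.cuda_backend.runtime_version = 11.2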
As explained above, Brian2CUDA uses nvidia-smi to query the compute capability of GPUs during automatic GPU selection. On older driver versions (< 510.39.01, these are the driver versions shipped with CUDA toolkit < 11.6), this was not supported. For those versions, we use the deviceQuery tool from the CUDA samples, which is by default installed with the CUDA Toolkit under extras/demo_suite/deviceQuery in the CUDA installation directory.
For some custom CUDA installations, the CUDA samples are not included, in which case Brian2CUDA's GPU detection fails. You then have three options; do one of the following:

1. Update your NVIDIA driver.

2. Download the CUDA samples to a folder of your choice and compile deviceQuery manually:
    git clone https://github.com/NVIDIA/cuda-samples.git
    cd cuda-samples/Samples/1_Utilities/deviceQuery
    make
    # Run deviceQuery to test it
    ./deviceQuery
Now set the Brian2CUDA preference devices.cuda_standalone.cuda_backend.device_query_path to point to your compiled deviceQuery binary.
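For example, if you cloned and compiled the samples in your home directory (the /home/username prefix is a placeholder, adjust to wherever your compiled binary lives):

    from brian2 import prefs
    import brian2cuda  # makes the cuda_standalone device and its preferences available

    # Path to the deviceQuery binary compiled in the previous step
    prefs.devices.cuda_standalone.cuda_backend.device_query_path = (
        '/home/username/cuda-samples/Samples/1_Utilities/deviceQuery/deviceQuery'
    )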
3. Disable automatic GPU detection and manually provide the GPU ID and compute capability (you can find the compute capability of your GPU on https://developer.nvidia.com/cuda-gpus):
    prefs.devices.cuda_standalone.cuda_backend.detect_gpus = False
    prefs.devices.cuda_standalone.cuda_backend.compute_capability = <compute_capability>
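A minimal sketch, assuming a GeForce RTX 3090, which has compute capability 8.6 according to the page linked above (substitute the value for your own GPU):

    from brian2 import prefs
    import brian2cuda  # makes the cuda_standalone device and its preferences available

    # Skip the deviceQuery-based detection that fails on this system
    prefs.devices.cuda_standalone.cuda_backend.detect_gpus = False
    # Compute capability of the installed GPU (RTX 3090 -> 8.6)
    prefs.devices.cuda_standalone.cuda_backend.compute_capability = 8.6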