GPU programming model - Compute Unified Device Architecture (CUDA)
Nvidia provides the CUDA parallel computing architecture as an interface to its GPU cards.
To load the CUDA environment, type:
module load cuda/8.0.61
Nvidia provides the nvcc compiler for CUDA programs.
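As a quick illustration of what nvcc compiles, here is a minimal CUDA vector-addition program. This is a hypothetical sketch, not part of the site documentation; the file name vecadd.cu and all identifiers are illustrative.

```cuda
// vecadd.cu - minimal CUDA vector addition (illustrative example)
#include <cstdio>

__global__ void vecadd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Allocate and initialise host arrays
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device arrays and copy inputs to the GPU
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element, 256 threads per block
    vecadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

On a GPU node this would be compiled and run with `nvcc -o vecadd vecadd.cu` followed by `./vecadd`.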
There are currently two different types of GPU node:
g000(1-8) each have 1 Tesla K40m card - because there is only one card per node, you need to request exclusive access (-x)
g000(9-10) each have 2 Tesla V100-PCIE-16GB cards - not yet open to all users
Submitting GPU jobs with Platform LSF
All GPU jobs must be submitted to the gpu LSF queue
To connect to a GPU node for an interactive session, use:
Xinteractive -x -q gpu
An example of an LSF submission script for a GPU job:
#BSUB -q gpu
#BSUB -n 1
#BSUB -oo gpu-%J.log
#BSUB -eo gpu-%J.log
#BSUB -J "gpu"
#BSUB -cwd "$HOME/gpu"
module load cuda/8.0.61
The above script requests a single CPU core (#BSUB -n 1); this can be increased to any value up to 16, the maximum number of CPU cores available on the node.
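To submit the script above, save the directives to a file ending with the command to run, then pass it to bsub. Both the file name gpu_job.lsf and the executable name below are hypothetical:

```shell
# gpu_job.lsf should contain the #BSUB directives above, followed by the
# commands the job runs, e.g.:
#   module load cuda/8.0.61
#   ./my_cuda_program      # hypothetical executable name

# Submit the job to LSF:
bsub < gpu_job.lsf

# Check the job's status:
bjobs
```

Note that bsub reads the script from standard input, so the `<` redirect is required for the #BSUB directives to be picked up.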
The CUDA module also provides the following GPU-accelerated libraries:
|Library|Module|Description|
|cusparse|cuda/8.0.61|CUDA Sparse Solvers|
|curand|cuda/8.0.61|CUDA Random Number Generator|
|npp|cuda/8.0.61|NVIDIA Performance Primitives|
To find the library location, run the command below and look at the LD_LIBRARY_PATH entry:
module show cuda/8.0.61
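For example, to see just the library path, the output can be filtered (assuming the module file sets LD_LIBRARY_PATH; note that `module show` typically writes to stderr, hence the redirect):

```shell
module show cuda/8.0.61 2>&1 | grep LD_LIBRARY_PATH
```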
During the link stage, pass the full path to the shared or static library:
/path/to/library/libname.so (shared) or /path/to/library/libname.a (static)
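Alternatively, libraries are often linked by name rather than full path. A sketch, assuming $CUDA_LIB holds the library directory reported by `module show` ($CUDA_LIB is a hypothetical variable, not set by the module):

```shell
# nvcc locates the CUDA libraries itself, so linking by name is enough:
nvcc myprog.cu -o myprog -lcusparse -lcurand

# With a host compiler, pass the library directory explicitly:
gcc host_code.c -o host_code -L"$CUDA_LIB" -lcudart -lcurand
```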