Two different hardware combinations provide the parallel queues - please see http://rscs.uea.ac.uk/new-high-performance-computing-cluster/new-queues for details.  We recommend the mellanox-ib queue if you have a choice, as it uses the newer hardware.

For reference, jobs on mellanox-ib should load the mellanox modules, while jobs on short-ib and long-ib should load the qlogic modules. Please update your scripts and, if appropriate, your .bashrc accordingly.
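
As an illustration, a mellanox-ib job could load the Open MPI mellanox build used in the example further down this page, while a short-ib or long-ib job would load the matching qlogic build. The qlogic module path shown here is only a guess at the naming pattern - run module avail mpi to see the exact names installed.

# mellanox-ib job (module name taken from the Open MPI example below)
module add gcc mpi/openmpi/2.1.0/gcc/mellanox

# short-ib / long-ib job (illustrative name only - check "module avail mpi" for the real one)
module add gcc mpi/openmpi/2.1.0/gcc/qlogic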

If there is no module for the combination you wish to use, please let us know.

The mellanox-ib nodes have 28 cores, so you should set your ptile to 28 (span[ptile=28]); the short-ib and long-ib nodes have 24 cores, so you should use span[ptile=24].
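
If you want the job to occupy whole nodes, one approach (a suggestion rather than a site requirement) is to make the total slot count requested with -n a multiple of the ptile value, for example:

# 4 full mellanox-ib nodes: 4 x 28 = 112 slots
#BSUB -n 112
#BSUB -R 'span[ptile=28]'

# 4 full short-ib or long-ib nodes: 4 x 24 = 96 slots
#BSUB -n 96
#BSUB -R 'span[ptile=24]'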

Open MPI

There are versions of Open MPI compiled for both the qlogic and mellanox queues, and builds compiled with both the Intel and GCC compilers. If you have no preference, we recommend the Intel build, as it should perform better on the Intel processors in these machines.
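
To see which Open MPI builds are installed, you can list the relevant modules (the naming pattern below matches the example script later on this page, but check the output for the exact names):

# list the installed Open MPI modules
module avail mpi/openmpi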

There are many options that can be used when running MPI jobs - please see the manual page at https://www.open-mpi.org/doc/current/man1/mpirun.1.php for the mpirun options.
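
For example, Open MPI's mpirun can report how each rank is bound to cores, which is a useful check that your ptile settings are behaving as expected. These are standard Open MPI flags rather than anything site-specific:

# print the core binding of each MPI rank at startup
mpirun --report-bindings ~/julie/hello_parallel

# request explicit per-core mapping and binding
mpirun --map-by core --bind-to core ~/julie/hello_parallel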

Example bsub script

# queue, slot count and layout: 100 processes, at most 28 per node,
# all within a single compute unit (cu)
#BSUB -q mellanox-ib
#BSUB -n 100
#BSUB -R 'span[ptile=28]'
#BSUB -R 'cu[maxcus=1]'
# output and error files (%J expands to the job ID) and the job name
#BSUB -oo test-%J.out
#BSUB -eo test-%J.error
#BSUB -J "test"
# set up the environment and load the mellanox build of Open MPI
. /etc/profile
module add gcc mpi/openmpi/2.1.0/gcc/mellanox
# launch the program across the allocated slots
mpirun ~/julie/hello_parallel
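
Assuming the script is saved as, say, openmpi-test.sub (the filename is only illustrative), it is submitted by redirecting it into bsub so that the #BSUB directives are read:

bsub < openmpi-test.sub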

Platform MPI

There are versions of Platform MPI compiled for the GCC, Intel and PGI compilers.  Please see the user guide.
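
The Intel build is the one loaded in the example script below; the GCC and PGI builds would be loaded in the same way, although the exact module names should be confirmed with module avail mpi/platform:

# load the Intel build of Platform MPI (module name from the example below)
module add mpi/platform/intel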

Example bsub script

# resource requests and output files are the same as in the Open MPI example above
#BSUB -q mellanox-ib
#BSUB -n 100
#BSUB -R 'span[ptile=28]'
#BSUB -R 'cu[maxcus=1]'
#BSUB -oo test-%J.out
#BSUB -eo test-%J.error
#BSUB -J "test"
# set up the environment and load the Intel build of Platform MPI
. /etc/profile
module add mpi/platform/intel
mpirun ~/julie/hello_parallel