The queues on HPC are as follows:

Queue Name    Slots Available  Wall Time  RAM Per Core  Priority  Description
short-eth     688              24 hours   4GB           30        Ethernet
short-ib      600              24 hours   5GB           30        Parallel
gpu           8                24 hours   64GB          30        GPU
interactive   80               12 hours   4GB           30        Interactive
long-eth      1600             168 hours  4GB           20        Ethernet
sky-eth       384              168 hours  4GB           20        Ethernet
long-ib       660              168 hours  4GB           20        Parallel
mellanox-ib   924              168 hours  4GB           20        Parallel
sky-ib        672              168 hours  4GB           20        Parallel
large-mem     32               168 hours  32GB          20        Large memory
huge-memory   48               168 hours  32GB          20        Huge memory
ht-20         620              168 hours  3GB           20        High throughput
ht-16         784              168 hours  4GB           20        High throughput


Please note that, with the ongoing upgrade of the cluster, these queues are likely to change. For a current view of the queues on the cluster, run the command: bqueues.
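For example (assuming the scheduler is IBM Spectrum LSF, which provides bqueues), you can list all queues or inspect a single queue in detail; the queue name below is just one from the table above:

```shell
# List all queues with their priority, status, and slot counts
bqueues

# Show the full configuration of one queue, including its run-time
# and memory limits
bqueues -l short-ib
```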

There are three different hardware combinations which provide the parallel queues:

The short-ib and long-ib queues use QLogic interconnect technology:

40 x Haswell Infiniband Nodes
Model: Fujitsu CX2250
Cores: 24
Memory: 128GB
CPU: Intel Xeon E5-2670 v3 (Haswell, 12C/24T) @ 2.30 GHz

The mellanox-ib queue uses Mellanox interconnect technology:

36 x Broadwell Infiniband Nodes
Model: Fujitsu CX2250
Cores: 28
Memory: 128GB
CPU: Intel Xeon E5-2680 v4 (Broadwell) @ 2.40 GHz

The sky-ib queue also uses Mellanox technology:

28 x Skylake Infiniband Nodes
Cores: 24
Memory: 98GB
CPU: Intel Xeon Silver 4116 (Skylake) @ 2.10 GHz

We recommend using the mellanox-ib queue: it is the newer technology, and software will be compiled for this hardware in the first instance. If you need access to the QLogic machines, please check with us first to make sure the relevant software modules are available.
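A minimal job script sketch for the recommended queue, using standard LSF #BSUB directives. The job name, output file, module name, and executable are placeholders, and the core count simply matches one full Broadwell node from the table above; check local policy for memory and wall-time limits before use:

```shell
#!/bin/bash
#BSUB -q mellanox-ib         # submit to the Mellanox Infiniband queue
#BSUB -n 28                  # request one full Broadwell node (28 cores)
#BSUB -W 168:00              # wall time (HH:MM), up to the queue's 168-hour limit
#BSUB -J my_mpi_job          # placeholder job name
#BSUB -o my_mpi_job.%J.out   # stdout file; %J expands to the job ID

# Load an MPI build targeted at the Mellanox hardware (module name is a
# placeholder; check what is installed with `module avail`)
module load openmpi

mpirun ./my_program          # placeholder executable
```

Submit the script with `bsub < jobscript.sh`.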