Enable Your Research with High Performance Computing at UEA


What is High Performance Computing?

High Performance Computing is often abbreviated to HPC. An HPC cluster provides access to a variety of hardware that can support intensive computation, including standard, high-memory, GPU, and parallel compute nodes. The cluster’s nodes work together as a single system, and users can run many jobs concurrently for faster work throughput.

From a cluster user’s perspective, the cluster consists, in basic terms, of:

  • A login node (server) which is the gateway into the cluster.
  • The Linux Operating System which is installed on all nodes in the cluster.
  • A scheduler (queueing system) that takes users’ jobs and submits each one to the best-fit node in the requested queue, taking the job’s requirements into account.
  • Pools of compute nodes (specialist computer hardware) providing plenty of CPU cores and RAM, grouped into queues to which users submit jobs via the scheduler.
  • Application software which cluster users can run.
  • Storage which is available throughout the cluster, giving each cluster user a home directory (250GB, backed up) and a scratch directory (500GB, not backed up) in which to store raw data and output.
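In practice, a user packages their work as a small batch script and hands it to the scheduler. The sketch below assumes a Slurm-style scheduler; the queue name, paths, and program names are illustrative only, and the actual commands and directory layout on the UEA clusters may differ:

```shell
#!/bin/bash
# Hypothetical batch script: the directives tell the scheduler what the job needs.
#SBATCH --job-name=example       # a name to identify the job in the queue
#SBATCH --partition=compute      # queue name is an assumption
#SBATCH --mem=4G                 # memory requested
#SBATCH --time=01:00:00          # wall-clock time limit

# Work in scratch (larger, but not backed up), then copy results
# worth keeping back to the backed-up home directory.
cd "$HOME/scratch"               # path is illustrative
./my_analysis input.dat > output.dat
cp output.dat "$HOME/results/"
```

The script would then be handed to the scheduler with its submit command (under Slurm, `sbatch myjob.sh`) and queued until a suitable node is free.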

Why use High Performance Computing?

What are the advantages of using the HPC Cluster over my desktop or laptop?

  • Your device's resources aren’t tied up (and it doesn’t need to stay powered on) for the duration of your work.
  • There are more resources (CPU cores, memory (RAM), GPU, and storage) available on the cluster.
  • Potential for multi-threading or running lots of jobs concurrently on different nodes.
  • Users have a default allowance of 100 concurrent jobs.
  • Jobs are queued until resources are available, so you can submit lots of jobs and be notified once they have completed.
  • The cluster contains specialised queues backed by specialised hardware. As well as standard compute nodes (standard network connection), there are GPU nodes (for work using graphics processing units), InfiniBand nodes (a faster network connection, used for parallel jobs spanning more than one node), and large-memory nodes for memory-intensive work.
  • There is a large range of software available on the cluster; if a package has been used on the cluster before, it will already be available.
  • Certain software has licensing implications (e.g. the Intel compiler) which you don’t need to worry about when it is installed centrally.
  • The HPC team provides assistance with computationally challenging workflows in all areas of science and big data at UEA.
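The 100-concurrent-job allowance lends itself to parameter sweeps: rather than looping over inputs on one machine, you queue one job per input and let the scheduler run them in parallel. A minimal sketch, assuming hypothetical input and script names (the submission command depends on the cluster’s scheduler, so it is left as a comment):

```shell
#!/bin/bash
# Generate one small job script per input, ready to submit to the queue.
# File names and the analysis command are illustrative.
mkdir -p jobs
for i in 1 2 3; do
    cat > "jobs/job_${i}.sh" <<EOF
#!/bin/bash
./my_analysis "input_${i}.dat" > "output_${i}.dat"
EOF
    chmod +x "jobs/job_${i}.sh"
    # Submit with the scheduler's command, e.g.: sbatch "jobs/job_${i}.sh"
done
```

Each queued script then runs on whichever node the scheduler picks, so all three inputs can be processed at the same time instead of one after another.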

We currently have two Linux clusters:

  • HPC - our existing cluster, which is due for retirement in 2020.
  • ADA - our new cluster, which is due to come online in 2020.