Cluster1 - 2004-2010

Cluster1 was our first large-scale production Linux cluster. It was a ClusterVision installation commissioned in spring 2005. Between April 2004 and March 2006, the system was funded as part of the university's HEFCE SRIF-2 (Science Research Investment Fund) programme.

The system comprised:

  • Master node, which handled management of the cluster
  • Two fileserver nodes connected to the UEA Storage Area Network
  • Gigabit Ethernet backbone, comprising four Foundry switches, used for file transfer
  • Myrinet low-latency backbone connecting 32 of the nodes, used for highly optimised parallel computing


The Myrinet portion was upgraded to 64 nodes in summer 2008.

Specification

  • 104 AMD Opteron 246 (2.0 GHz) dual-processor slave nodes for 64-bit applications, with 3 GB RAM per node
  • 64 AMD Opteron 2212 (2.0 GHz) dual-core, dual-processor nodes for 64-bit applications with Myrinet interconnect, 4 GB RAM per node
  • A small number of Intel Xeon (2.4 GHz) dual-processor slave nodes for 32-bit applications (migrated from beo1)
  • Gigabit Ethernet backbone consisting of four Foundry FastIron Edge X448 48-port switches
  • 64-node Myrinet low-latency backbone for parallel/shared-memory applications
  • Four AMD Opteron 2212 (2.0 GHz) dual-core, dual-processor login nodes
  • Single AMD Opteron 246 (2.0 GHz) dual-processor master node used for slave image deployment, cluster-wide management and job scheduling
  • Two AMD Opteron 246 (2.0 GHz) dual-processor fileserver nodes connected to the UEA Storage Area Network
  • SUSE Linux 9.1
  • Sun Grid Engine (SGE) v6.0u7 queueing system (an example job of the kind the cluster ran is sketched below)
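
For illustration only, most parallel work on the Myrinet nodes would have been MPI programs. The following is a minimal MPI example in C of the kind such a system ran; the mpicc compiler wrapper and the MPICH-GM stack mentioned in the comments are assumptions rather than details taken from Cluster1's documentation.

    /* hello_mpi.c -- minimal MPI example of the kind run on
     * Cluster1's Myrinet nodes. Compile with the cluster's MPI
     * wrapper, e.g. `mpicc hello_mpi.c -o hello_mpi`; the exact
     * MPI stack (e.g. MPICH-GM over Myrinet) is an assumption. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks */
        MPI_Get_processor_name(name, &name_len); /* which node we are on */

        printf("Hello from rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

A program like this would be compiled on a login node and submitted through SGE with qsub, requesting a parallel environment spanning the Myrinet nodes (for example `qsub -pe mpi 16 job.sh`, where the parallel environment name `mpi` is hypothetical).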


Cluster1 was decommissioned in 2010.