Dell High Performance Computing (HPC) Cluster Description

[Latest additions as of 9/25/13 in bold]

Hardware & Networking Architecture
  • (16x) Dell PowerEdge r620 Servers
  • (16x) Dell PowerEdge 1950 Servers
  • (13x) Dell PowerEdge r410 Servers
  • Dell PowerEdge r620 Head Node
  • Dell PowerEdge r620 Login Node
  • Dell PowerEdge r820 Deep Memory Node
  • Dell PowerEdge 1950 IO Server
Peak FLOPs 3 TF
Number of Servers
Number of CPUs (cores)
  • Dell PowerEdge r620: Intel Xeon E5-2660 2.20 GHz dual-socket 8-core
    (Sandy Bridge, 8.0 GT/s QPI, 20M L3)
  • Dell PowerEdge 1950: Intel Xeon E5440 2.83 GHz dual-socket quad-core
    (Harpertown, 1333 MHz FSB, 12M L2)
  • Dell PowerEdge r410: Intel Xeon E5520 2.26 GHz dual-socket quad-core
    (Nehalem, QPI, 4x256K L2, 8M L3)
  • Dell PowerEdge r820: Intel Xeon E5-4620 2.20 GHz quad-socket 8-core (32 cores total)
    (Sandy Bridge-EP, 7.2 GT/s QPI, 16M L3)
  • Hyperthreading disabled
Memory
  • Per node (PowerEdge r620): 64 GB
  • Per node (PowerEdge 1950): 32 GB
  • Per node (PowerEdge r410): 24 GB
  • Deep Memory Node (PowerEdge r820): 768 GB
Network Interconnect QLogic InfiniBand FDR (56 Gb/s), 1 Gbps Ethernet Admin Network
Parallel Filesystem 160 TB High-Speed GPFS via FDR IB (DDN SFA7700)
Software
Operating System CentOS 6.1 (Bright Cluster Manager)
Compilers
  • Intel: Fortran 77/90/95, C, C++, mpicc
Resource Manager/Scheduler Torque (4.2.2)/Moab (7.2.0)
Cluster Management Bright Cluster Manager
Grid Software Provided via XSEDE
Policies/User Limits
Home directory disk quota: None currently
Charging policy  


Shared File System

hpc01 exports the /home, /scratch, and /genomics directories to the entire cluster.


The /home file share is currently 7 TB.  Your $HOME variable is set when you log in.  Use this directory to build your scripts and executables and to store source code.  All nodes can access this directory.


The /scratch file share is currently 1 TB.  Torque/Moab uses /scratch as secondary shared storage for temporary files.
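A job can stage its working files through /scratch along these lines (a minimal sketch of a Torque job script; the resource requests, `input.dat`, and `my_solver` are hypothetical placeholders, not part of the cluster configuration):

```shell
#!/bin/bash
# Hypothetical Torque job script using the shared /scratch area.
#PBS -N scratch_demo
#PBS -l nodes=1:ppn=8
#PBS -l walltime=01:00:00

# Create a per-job scratch directory, named after the Torque job ID.
SCRATCH_DIR=/scratch/$USER/$PBS_JOBID
mkdir -p "$SCRATCH_DIR"
cd "$SCRATCH_DIR"

# Stage input from the submission directory, run, then copy results
# back to persistent storage in /home.
cp "$PBS_O_WORKDIR"/input.dat .
./my_solver input.dat > output.dat
cp output.dat "$PBS_O_WORKDIR"/

# Clean up the shared scratch space when finished.
rm -rf "$SCRATCH_DIR"
```

Because /scratch is visible from every node, this pattern also works for multi-node jobs; submit the script with `qsub`.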


The /genomics file share is currently about 5.4 TB.  Its purpose is to store large genomics and bioinformatics datasets, and all nodes have this directory mounted.  Please contact us at [email protected] if you need specific security settings on your genomics data.


Local File System

hpc01 compute nodes are PXE-booted and store their images locally.  Each node has local temporary disk storage in /tmp and fast, diskless (RAM-backed) storage in /dev/shm.


/tmp can be used for local temporary storage by jobs.  It may be purged at any time, so do not rely on it for persistent storage; data there will, however, remain in place for as long as the jobs using it are running.
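A safe pattern for /tmp is to create a private working directory and remove it on exit, so the local disk is freed even if the job fails partway (a minimal sketch; the file names are illustrative, not part of any cluster convention):

```shell
#!/bin/bash
# Sketch: stage temporary files in node-local /tmp and clean up on exit.
WORK=$(mktemp -d /tmp/myjob.XXXXXX)   # private per-job directory
trap 'rm -rf "$WORK"' EXIT            # runs even on early failure

echo "sample input" > "$WORK/input.txt"   # stand-in for real input data
wc -w < "$WORK/input.txt"                 # prints 2
```

Note that /tmp is per-node: in a multi-node job, each node sees its own /tmp, so anything that must be visible cluster-wide belongs on /scratch instead.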


/dev/shm is another option for fast local scratch space.  It is a standard Linux ramdisk with a 16 GB quota.  Only use it deliberately, and when you have multiple processes or programs communicating with each other: it is diskless and essentially uses RAM as a disk, so space consumed here comes out of the node's memory.  It can greatly increase performance in certain situations but should be used at your own risk.  Data stored here does not persist beyond running jobs.
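The /dev/shm pattern looks much like the /tmp one, just rooted in the ramdisk (a minimal sketch; the fallback to /tmp is only so the example also runs off-cluster, and the file names are illustrative):

```shell
#!/bin/bash
# Sketch: use /dev/shm as RAM-backed scratch for data exchanged between
# cooperating processes on the same node.
BASE=/dev/shm
{ [ -d "$BASE" ] && [ -w "$BASE" ]; } || BASE=/tmp   # off-cluster fallback
WORK=$(mktemp -d "$BASE/shmdemo.XXXXXX")

# A producer process writes an intermediate result; a consumer reads it.
echo "intermediate result" > "$WORK/stage1.out" &
wait
cat "$WORK/stage1.out"        # prints: intermediate result

rm -rf "$WORK"   # free the RAM immediately; nothing here persists
```

Removing the directory as soon as the data is consumed matters more here than on disk, since anything left in /dev/shm occupies memory your computation could otherwise use.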