Dell High Performance Computing (HPC) Cluster Description
[hpc01.mkei.org, latest additions as of 9/25/13 in bold]
Hardware & Networking
- Number of servers
- Number of CPUs (cores)
- Interconnect: QLogic InfiniBand FDR (56 Gb/s); 1 Gbps Ethernet admin network
- Shared storage: 160 TB high-speed GPFS via FDR IB (DDN SFA7700)
- Operating system: CentOS 6.1 (Bright Cluster Manager)
- Scheduler: Torque (4.2.2)/Moab (7.2.0)
- Cluster management: Bright Cluster Manager
- Software: Provided via XSEDE
- Home directory disk quota
hpc01 exports the /home, /scratch, and /genomics directories to the entire cluster.
The /home file share is currently 7 TB. Your $HOME environment variable is set when you log in. Use this directory to build your scripts and executables and to store source code. All nodes can access this directory.
The /scratch file share is currently 1 TB. Torque/Moab uses /scratch as secondary shared storage for temporary files.
The /genomics file share is currently about 5.4 TB. Its purpose is to store large genomics and bioinformatics datasets. All nodes have this directory mounted. Please contact us at [email protected] if you need a specific security setup for your genomics data.
Local File System
hpc01 compute nodes are PXE booted and store their images locally. Each node has local temporary disk storage in /tmp and fast RAM-backed local storage in /dev/shm.
/tmp can be used for local temporary storage for jobs. It may be purged at any time, so do not rely on it as persistent storage; it will, however, persist for as long as the jobs using it are running.
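A typical pattern is to stage a job's temporary files through /tmp and copy results back to shared storage before the job ends. A sketch of such a Torque job script is below (the file names are placeholders, and the fallback to the shell PID covers running the script outside Torque, where PBS_JOBID is unset):

```shell
#!/bin/sh
#PBS -l nodes=1:ppn=1
# Stage work through node-local /tmp inside a job.
# Use the Torque job ID for a unique directory name; fall back to
# the shell PID when PBS_JOBID is not set.
JOBTMP="/tmp/${PBS_JOBID:-$$}"
mkdir -p "$JOBTMP"

# ... run your program with its temporary files under $JOBTMP ...
echo "scratch data" > "$JOBTMP/output.dat"

# Copy results back to persistent storage before the job ends,
# since /tmp may be purged once the job finishes. The destination
# here is the current directory; on the cluster it would typically
# be somewhere under /home.
cp "$JOBTMP/output.dat" .
rm -rf "$JOBTMP"
```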
/dev/shm is another option for fast local scratch space. It is a standard Linux ramdisk with a 16 GB quota. Because it is diskless and essentially uses RAM as a disk, only use it explicitly when you have multiple processes or programs communicating with each other. It can greatly increase performance in certain situations but should be used at your own risk. Data stored here does not persist beyond running jobs.
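The ramdisk is used like any other directory; the difference is only where the data lives. A minimal sketch (the "myjob" working-directory name is a placeholder, and the script falls back to /tmp on systems without /dev/shm):

```shell
#!/bin/sh
# Use the node's ramdisk for fast scratch I/O, falling back to
# /tmp if /dev/shm is unavailable.
if [ -d /dev/shm ]; then
    SCRATCH=/dev/shm
else
    SCRATCH=/tmp
fi
WORKDIR="$SCRATCH/myjob.$$"   # placeholder name, unique per shell PID
mkdir -p "$WORKDIR"

# Processes on the same node can exchange data through $WORKDIR at
# RAM speed; keep the 16 GB /dev/shm quota in mind.
echo "shared between processes" > "$WORKDIR/msg"
cat "$WORKDIR/msg"

# Nothing here survives the job, so clean up explicitly.
rm -rf "$WORKDIR"
```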