


Boltzmann Cluster

System Overview

The Boltzmann Linux cluster consists of 13 GPU compute nodes: 2 HP SL390s nodes with 48 cores, 4 HP SL250s compute nodes, 4 Dell PowerEdge R730 compute nodes, 1 Dell Precision T7600, and 2 storage nodes. The system is configured with 832 GB of total memory and an 80 TB global NFS file system providing shared disk space. Nodes are interconnected by InfiniBand in a fat-tree topology with 40 Gbit/s point-to-point bandwidth, plus 10 Gbit/s SFP Ethernet.

All Boltzmann nodes run CentOS 6.8 Linux and support batch services through the Sun Grid Engine (SGE) scheduler. Global data storage, including home directories, is served by an NFS file system. Apart from $HOME file-system traffic, all inter-node communication (MPI) travels over a Mellanox InfiniBand network. The configuration and features of the compute nodes, interconnect, and I/O systems are summarized below.
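On SGE-managed clusters such as this one, batch jobs are typically submitted with qsub. The script below is a minimal sketch, not a site-specific template: the parallel environment name (mpi), the resource limits, and the binary name ./my_mpi_app are all assumptions — check the cluster's actual queue and parallel environment names before submitting.

```shell
#!/bin/bash
# Minimal SGE batch script sketch (hypothetical queue/PE names).
#$ -N test_job          # job name
#$ -cwd                 # run from the submission directory
#$ -pe mpi 12           # request 12 slots in an assumed "mpi" parallel environment
#$ -l h_rt=01:00:00     # one-hour wall-clock limit
#$ -j y                 # merge stdout and stderr into one file

# MPI traffic runs over the Mellanox InfiniBand fabric.
# NSLOTS is set by SGE to the number of granted slots;
# ./my_mpi_app is a placeholder for your executable.
mpirun -np $NSLOTS ./my_mpi_app
```

A script like this would be submitted with `qsub job.sh` and monitored with `qstat`.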

System Name: Boltzmann
Host Name: boltzmann.psc.sc.edu
Operating System: CentOS 6.8
Number of Processors: 312 (compute)
CPU Type: Hex-core Xeon 5660 processors (2.8 GHz)
Total Memory: 832 GB
Peak Performance: 4.9 TFLOPS *
Total Disk Space: 80 TB (shared)
Primary Interconnect: QDR InfiniBand @ 40 Gbit/s, SFP 10 Gbit/s