


Pople

Department of Chemistry & Biochemistry

System Overview

The Pople Linux Cluster consists of 18 compute nodes, each with two six-core processors, 24 GB of memory, and 600 GB of local disk (216 cores in total), plus two storage nodes. The system provides 432 GB of aggregate memory and an 11 TB global, shared NFS file system. Theoretical peak compute performance is 4.9 TFLOPS. Nodes are interconnected with InfiniBand in a fat-tree topology with 40 Gbit/sec point-to-point bandwidth.
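
For context, the quoted peak figure is roughly consistent with the aggregate core count and clock rate if one assumes 8 single-precision floating-point operations per core per cycle (the SSE throughput of this processor generation); this derivation is an inference, not a figure taken from the system documentation:

  216 cores × 2.8 GHz × 8 FLOPs/cycle ≈ 4.84 TFLOPS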

All Pople nodes run the CentOS 6.8 Linux operating system and support batch services through Sun Grid Engine (SGE) 6.2. Home directories are served by an NFS file system with global access. Apart from traffic to the $HOME file system, all inter-node communication (MPI) is carried over a Mellanox InfiniBand network. The configuration and features of the compute nodes, interconnect, and I/O systems are described and summarized below.
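
To illustrate how these pieces fit together for a user, the sketch below shows a minimal MPI program in C that could be built with one of the Intel MPI modules listed under Scientific Application Software and submitted through SGE with qsub. The module version, compiler wrapper, and parallel-environment name in the comments are illustrative assumptions, not site-confirmed settings.

    /*
     * hello_mpi.c -- minimal MPI sketch (illustrative only).
     *
     * Possible build and submit steps (module and PE names are assumptions):
     *   module load INTELMPI/4.1
     *   mpiicc -o hello_mpi hello_mpi.c
     *   qsub -pe mpi 24 job.sh        # job.sh runs: mpirun ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total ranks in the job */
        MPI_Get_processor_name(host, &len);   /* compute node hosting this rank */

        printf("rank %d of %d on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }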

System Name: Pople
Host Name: pople.psc.sc.edu
Number of Compute Nodes: 18
Operating System: Linux (CentOS 6.8)
Number of Processor Cores: 216 (compute)
CPU Type: Hex-core Intel Xeon X5660 (2.8 GHz)
Total Memory: 432 GB
Peak Performance: 4.9 TFLOPS
Total Disk Space: 11 TB (shared)
Primary Interconnect: QDR InfiniBand @ 40 Gbit/s

Scientific Application Software:

  • ADF/2010.02
  • ADF/2012.01
  • ADF/2013.01(default)
  • AMBER/11(default)
  • AMBERTOOLS/13(default)
  • AUTODOCK/4.2.5.1
  • AUTODOCKVINA/1.1.2_x86
  • GROMACS/4.5.5(default)
  • GROMACS/4.6
  • INTEL Compiler/11.1-059
  • INTEL Compiler/11.1-072
  • INTEL Compiler/12.0.4
  • INTEL Compiler/12.1.1
  • INTEL Compiler/12.1.3(default)
  • INTELMPI/4.0
  • INTELMPI/4.1(default)
  • LAMMPS/7.2(default)
  • MPICH2/3.0.2(default)
  • NAMD/2.9(default)
  • OPENMPI/143-intel(default)
  • OPENMPI/161-gcc
  • OPENMPI/161-intel
  • PGI/12.1(default)
  • Scienomics/3.4
  • SPARTAN/2010
  • QCHEM/4.0
  • VMD/1.9(default)

Pople User Guide