

Pople

Department of Chemistry & Biochemistry

System Overview

The Pople Linux cluster consists of 20 compute nodes, each with two 6-core processors, 24 GB of memory, and a 146 GB local disk, for a total of 204 compute cores, plus one 8 TB storage node. The system is configured with 408 GB of total memory and 8 TB of shared disk space, served as a global NFS file system, and has a theoretical peak compute performance of 4.9 TFLOPS. Nodes are interconnected with InfiniBand in a fat-tree topology with 40 Gbit/sec point-to-point bandwidth.

All Pople nodes run the CentOS 6.8 Linux operating system and provide batch services through Sun Grid Engine (SGE) 6.2. Home directories are served by a globally accessible NFS file system. Apart from $HOME file-system traffic, all inter-node communication (MPI) goes over a Mellanox InfiniBand network. The configuration and features of the compute nodes, interconnect, and I/O systems are described and summarized below.
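
For illustration, the sketch below shows a minimal MPI "hello world" in C of the kind typically run on such a cluster; it is not an official Pople example. It assumes an MPI module from the software list below (e.g. INTELMPI or MPICH2) supplies the mpicc compiler wrapper and an mpirun launcher, and that jobs are submitted through an SGE parallel environment whose name depends on the local configuration.

    /* hello_mpi.c -- print the rank and host name of each MPI process.
     * Hypothetical build/run steps (module and launcher details are assumptions):
     *   mpicc hello_mpi.c -o hello_mpi
     *   mpirun -np 24 ./hello_mpi        (inside an SGE batch job)
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char node_name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                       /* start the MPI runtime  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);         /* rank of this process   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);         /* total number of ranks  */
        MPI_Get_processor_name(node_name, &name_len); /* compute-node host name */

        printf("Rank %d of %d running on %s\n", rank, size, node_name);

        MPI_Finalize();                               /* shut down MPI cleanly  */
        return 0;
    }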

System Name: Pople
Host Name: pople.psc.sc.edu
Number of Compute Nodes: 20
Operating System: Linux (CentOS 6.8)
Number of Compute Cores: 204
CPU Type: Hex-core Xeon 5660 processors (2.8 GHz)
Total Memory: 408 GB
Peak Performance: 4.9 TFLOPS
Total Disk Space: 8 TB (shared)
Primary Interconnect: QDR InfiniBand @ 40 Gbit/s

Scientific Application Software:

  • ADF/2010.02
  • ADF/2012.01
  • ADF/2013.01(default)
  • AMBER/11(default)
  • AMBERTOOLS/13(default)
  • AUTODOCK/4.2.5.1
  • AUTODOCKVINA/1.1.2_x86
  • GROMACS/4.5.5(default)
  • GROMACS/4.6
  • INTEL Compiler/11.1-059
  • INTEL Compiler/11.1-072
  • INTEL Compiler/12.0.4
  • INTEL Compiler/12.1.1
  • INTEL Compiler/12.1.3(default)
  • INTELMPI/4.0
  • INTELMPI/4.1(default)
  • LAMMPS/7.2(default)
  • MPICH2/3.0.2(default)
  • NAMD/2.9(default)
  • OPENMPI/143-intel(default)
  • OPENMPI/161-gcc
  • OPENMPI/161-intel
  • PGI/12.1(default)
  • Scienomics/3.4
  • SPARTAN/2010
  • QCHEM
  • VMD/1.9(default)

Pople User Guide