AMD Toolchain with SPACK



NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 500,000 cores for the largest simulations. NAMD was the first application able to perform a full all-atom simulation of a virus in 2006, and in 2012 a molecular dynamics flexible fitting simulation of the HIV virus capsid in its tubular form.

NAMD official website:

Getting NAMD Source Files

Spack does not currently support automatically downloading the NAMD source tar files. Please download the source tar files for NAMD 2.12, 2.13, or 2.14 manually from the links below and store them in the Spack parent directory spack/.

# NAMD source tars
$ wget --no-check-certificate
$ wget --no-check-certificate
$ wget --no-check-certificate

Build NAMD using Spack

Reference for adding external packages to Spack: Build Customization (Adding external packages to Spack)

# Format For Building NAMD
$ spack -d install -v namd@<Version Number> %aocc@<Version Number> target=<zen2/zen3> fftw=amdfftw interface=tcl ^amdfftw@<Version Number> ^charmpp backend=mpi build-target=charm++ ^openmpi@<Version Number> fabrics=auto ^hwloc~pci
# Example: For Building NAMD 2.14 with AOCC-3.0 and AOCL-3.0
$ spack -d install -v namd@2.14 %aocc@3.0.0 target=zen3 fftw=amdfftw interface=tcl ^amdfftw@3.0 ^charmpp backend=mpi build-target=charm++ ^openmpi@4.0.3 fabrics=auto ^hwloc~pci
# Example: For Building NAMD 2.14 with AOCC-2.3 and AOCL-2.2
$ spack -d install -v namd@2.14 %aocc@2.3.0 target=zen2 fftw=amdfftw interface=tcl ^amdfftw@2.2 ^charmpp backend=mpi build-target=charm++ ^openmpi@4.0.3 fabrics=auto ^hwloc~pci
# Example: For Building NAMD 2.14 with AOCC-2.2 and AOCL-2.2
$ spack -d install -v namd@2.14 %aocc@2.2.0 target=zen2 fftw=amdfftw interface=tcl ^amdfftw@2.2 ^charmpp backend=mpi build-target=charm++ ^openmpi@4.0.3 fabrics=auto ^hwloc~pci

Any combination of the components/applications and versions listed below may be used.

Component/Application Applicable Versions
NAMD 2.12, 2.13, 2.14
AOCC 3.0.0, 2.3.0, 2.2.0
AOCL 3.0, 2.2

Specifications and Dependencies

Symbol Meaning
-d To enable debug output
-v To enable verbose output
@ To specify the version number
% To specify the compiler
fftw=amdfftw Use amdfftw for the build; by default fftw=3 (i.e., FFTW 3.3.8)
interface=tcl Use the Tcl interface for the build; by default interface=none
^charmpp To build with the charmpp dependency
backend=mpi To build the MPI variant of charmpp; by default backend=netlrts
build-target=charm++ To select the charm++ variant; by default build-target=LIBS
^openmpi Use openmpi@4.0.3 for the build
fabrics=auto Use the fabrics=auto variant for openmpi; by default fabrics=none
^hwloc~pci Use hwloc without PCI support for the openmpi build
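Putting the pieces from the table together, it can be worth previewing how Spack will concretize the spec before committing to a long build. The sketch below assembles the AOCC-3.0/AOCL-3.0 spec from its parts and prints the preview command; it assumes a Spack version that supports `spack spec -I` (`--install-status`), and the `echo` keeps the sketch side-effect free (remove it to actually run the preview).

```shell
# Sketch: assemble the full NAMD spec from the pieces described above,
# then preview concretization with "spack spec" before the long install.
SPEC="namd@2.14 %aocc@3.0.0 target=zen3 fftw=amdfftw interface=tcl"
DEPS="^amdfftw@3.0 ^charmpp backend=mpi build-target=charm++"
DEPS="${DEPS} ^openmpi@4.0.3 fabrics=auto ^hwloc~pci"
# echo keeps this side-effect free; drop "echo" to run the preview for real
echo spack spec -I ${SPEC} ${DEPS}
```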

Running NAMD

NAMD performance is measured by the simulation rate in nanoseconds per day (ns/day).  Higher is better.  In this example, the workloads chosen were the well-known STMV and APOA1 benchmarks that can be found at:

Setting Environment
# Format for loading NAMD build with AOCC
$ spack load namd@<Version Number> %aocc@<Version Number>
# Example: Load NAMD build with AOCC-3.0 module into environment
$ spack load namd@2.14 %aocc@3.0.0


Obtaining Benchmarks
$ wget --no-check-certificate
$ wget --no-check-certificate
$ wget --no-check-certificate
$ wget --no-check-certificate
$ gunzip stmv.psf.gz
$ gunzip stmv.pdb.gz
$ wget  --no-check-certificate
$ tar -xzvf apoa1.tar.gz
$ sed -i 's/\/usr/../g' apoa1/apoa1.namd
$ mkdir tmp
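Before launching a run, it can save time to confirm that the inputs actually landed where the run commands expect them. A minimal check, assuming the download and extraction steps above were executed in the current directory (adjust the paths if you unpacked the benchmarks elsewhere):

```shell
# Sketch: verify the STMV and APOA1 inputs are in place before launching.
# File names follow the run commands in this guide; paths are assumptions.
for f in stmv.psf stmv.pdb stmv.namd apoa1/apoa1.namd; do
  [ -f "$f" ] || echo "missing: $f"
done
```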

Example run commands for SMT-enabled platforms:

  • With SMT enabled, a two-socket AMD EPYC 7742 or 7763 system exposes 256 logical cores in total, enumerated 0-255
  • NAMD needs one communication thread per set of worker threads; here we run four communication threads, each serving 63 worker threads
  • Lay out the communication threads with a stride of 64 so they are pinned to cores 0, 64, 128, and 192. This keeps each communication thread efficiently associated with the worker cores it serves
  • The worker threads are then pinned to cores 1-63, 65-127, 129-191, and 193-255
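The mapping arithmetic in the steps above can be sketched as a small shell calculation. The values correspond to the 256-thread, four-rank layout described; the variable names are illustrative only:

```shell
# Sketch: derive the Charm++ +commap/+pemap arguments from the layout above.
# One communication thread leads each block; workers fill the rest of it.
CORES=256                     # logical cores on a 2P EPYC 7742/7763 with SMT
NRANKS=4                      # MPI ranks = communication threads
STRIDE=$((CORES / NRANKS))    # 64: one comm thread at the start of each block
WORKERS=$((STRIDE - 1))       # 63 worker threads per block
echo "+commap 0-$((CORES - 1)):${STRIDE}"            # -> +commap 0-255:64
echo "+pemap 1-$((CORES - 1)):${STRIDE}.${WORKERS}"  # -> +pemap 1-255:64.63
```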

Running STMV

STMV: Single Node
# stmv dataset
$ mpirun -np 4 --bind-to core namd2 +ppn 63 +commap 0-255:64 +pemap 1-255:64.63 stmv.namd

Running APOA1

APOA1: Single Node
# apoa1 dataset
$ mpirun -np 4 --bind-to core namd2 +ppn 63 +commap 0-255:64 +pemap 1-255:64.63 apoa1/apoa1.namd

NS/day Timings

The NS/day timings can be obtained by passing the NAMD run output to the script as follows:

# collect NAMD mpirun output and pass it to script
$ python2 <NAMD_run_output_file>
# output will be similar to
# NS per day:    13.2062
# Mean time per step:     0.00691742
# Standard deviation:     0.00418039
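If the helper script is not at hand, a comparable figure can be extracted directly from the NAMD log: NAMD prints periodic benchmark lines containing a "<value> days/ns" field, so averaging those samples and inverting the mean gives ns/day. A hedged sketch (the log file name is a placeholder):

```shell
# Sketch: average the "days/ns" samples from a NAMD log and invert to ns/day.
# "namd_output.log" is a placeholder for your captured mpirun output.
grep "days/ns" namd_output.log |
  awk '{for(i=1;i<NF;i++) if($(i+1)=="days/ns"){s+=$i; n++}}
       END{if(n) printf "NS per day: %.4f\n", n/s}'
```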