Introduction

Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics code. LAMMPS can be used to simulate solid-state materials (metals, semiconductors), soft matter (biomolecules, polymers), and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.

LAMMPS official website:  http://lammps.sandia.gov

Build LAMMPS using Spack

Reference to add external packages to Spack: Build Customization (Adding external packages to Spack)
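
As a minimal sketch, an external Open MPI installation can be registered in packages.yaml roughly as follows. The prefix path and version are placeholders, and the exact schema depends on your Spack version; see the reference above for details.

# Hypothetical entry in ~/.spack/packages.yaml (path and version are placeholders)
packages:
  openmpi:
    externals:
    - spec: openmpi@4.0.5
      prefix: /opt/openmpi/4.0.5
    buildable: false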

# Format For Building LAMMPS with AOCC
$ spack -d install -v lammps@<Version Number> %aocc@<Version Number> target=<zen2/zen3> ~kim cflags="CFLAGS" +asphere +class2 +kspace +manybody +molecule +mpiio +opt +replica +rigid +granular +user-omp +openmp  ^amdfftw@<Version Number> target=<zen2/zen3> ^openmpi@<Version Number>
# Example: For building LAMMPS-20200721 with AOCC-3.1 AOCL-3.0 and OpenMPI-4.0.5
$ spack -d install -v lammps@20200721 %aocc@3.1.0 target=zen3 ~kim cflags="-Ofast -mfma -fvectorize -funroll-loops" +asphere +class2 +kspace +manybody +molecule +mpiio +opt +replica +rigid +granular +user-omp +openmp  ^amdfftw@3.0 target=zen3 ^openmpi@4.0.5
# Example: For building LAMMPS-20200721 with AOCC-3.0 AOCL-3.0 and OpenMPI-4.0.3
$ spack -d install -v lammps@20200721 %aocc@3.0.0 target=zen3 ~kim cflags="-Ofast -mfma -fvectorize -funroll-loops" +asphere +class2 +kspace +manybody +molecule +mpiio +opt +replica +rigid +granular +user-omp +openmp  ^amdfftw@3.0 target=zen3 ^openmpi@4.0.3
# Example: For building LAMMPS-20200721 with AOCC-2.3.0 AOCL-2.2 and OpenMPI-4.0.3
$ spack -d install -v lammps@20200721 %aocc@2.3.0 target=zen2 ~kim cflags="-Ofast -mfma -fvectorize -funroll-loops" +asphere +class2 +kspace +manybody +molecule +mpiio +opt +replica +rigid +granular +user-omp +openmp  ^amdfftw@2.2 target=zen2 ^openmpi@4.0.3
# Example: For building LAMMPS 20200721 with AOCC-2.2.0 AOCL-2.2 and OpenMPI-4.0.3
$ spack -d install -v lammps@20200721 %aocc@2.2.0 target=zen2 ~kim cflags="-Ofast -mfma -fvectorize -funroll-loops" +asphere +class2 +kspace +manybody +molecule +mpiio +opt +replica +rigid +granular +user-omp +openmp  ^amdfftw@2.2 target=zen2 ^openmpi@4.0.3
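
Before running a full install, you can optionally preview how Spack will concretize the spec with the standard spack spec command; the abbreviated spec below reuses one of the examples above, and the exact output depends on your configuration.

# Optional: preview the concretized spec and install status before building
$ spack spec -I lammps@20200721 %aocc@3.1.0 target=zen3 ~kim +openmp ^amdfftw@3.0 target=zen3 ^openmpi@4.0.5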

Use any combination of the components/applications and versions listed below.

Component/Application Versions Applicable
LAMMPS 20200721
AOCC 3.1.0, 3.0.0, 2.3.0, 2.2.0
AOCL 3.0, 2.2

Specifications and Dependencies

Symbol Meaning
-d To enable debug output
-v To enable verbose output
@ To specify version number
% To specify compiler
target=zen2 / target=zen3 To enable target zen2 or zen3 architecture
cflags To add cflags to the Spack env using command line
-Ofast Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards
-mfma Enables fused multiply-add (FMA) instructions; performing the multiply and add with a single rounding step can give a better degree of accuracy
-fvectorize Enables loop vectorization
-funroll-loops Enables loop unrolling
+asphere +class2 +kspace +manybody +molecule +mpiio +opt +replica +rigid +granular +user-omp LAMMPS-specific packages (add them as per user requirement)
+openmp Build with the OpenMP variant
^amdfftw target=zen2/zen3 Build with amdfftw for target=zen2 or target=zen3
^openmpi@<Version Number> Build with the specified Open MPI version
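
The +variant names used above must match what the LAMMPS package in your Spack checkout actually exposes; you can query the available versions, variants, and dependencies with:

# List available versions, variants, and dependencies of the LAMMPS package
$ spack info lammps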

Running LAMMPS

LAMMPS can be used for a wide variety of workloads. Here we describe the steps to download and run the sample data sets available in the LAMMPS directory.

Note: By default, LAMMPS ships with five sample data sets in its tar package, available in the <LAMMPSROOT>/bench directory.
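
For reference, the five standard benchmark inputs in the bench directory of the 21Jul20 tarball are the LJ, Chain, EAM, Chute, and Rhodo cases; a quick way to list them:

$ ls <LAMMPSROOT>/bench/in.*
in.chain  in.chute  in.eam  in.lj  in.rhodo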

Examples for running LAMMPS on AMD 2nd and 3rd Gen EPYC processors are presented below.

Running LAMMPS on AMD 2nd Gen EPYC Processors

The following example steps are for running LAMMPS with the Rhodo dataset on AMD EPYC 7742 Series Processors with SMT ON and with the "USER-OMP" package.
Setting Environment
# Format for loading LAMMPS built with AOCC
$ spack load lammps@<Version Number> %aocc@<Version Number>
# Example: Load LAMMPS-20200721 built with AOCC-2.2.0 into the environment
$ spack load lammps@20200721 %aocc@2.2.0
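
After loading, a quick sanity check (assuming the Spack-installed binary is named lmp, as in the run commands below) confirms the binary is on PATH and was built with the expected packages:

# Verify that the LAMMPS binary from the loaded Spack environment is picked up
$ which lmp
# Print the LAMMPS help banner, including the list of installed packages
$ lmp -h | head -n 40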


Obtaining Benchmarks
# Download the LAMMPS source tarball, which includes the input data sets.
$ wget https://lammps.sandia.gov/tars/lammps-21Jul20.tar.gz
$ tar -xvf lammps-21Jul20.tar.gz
$ cd lammps-21Jul20/bench/

Run command

Use proper binding with the mpirun command. The command and binding below work with SMT ON on AMD EPYC 7742 Series Processors.

RHODO: Single Node
$ export LMP_MPI=/path_to_lammps_installed_folder/bin/lmp
$ mpirun -np 256 --map-by hwthread -use-hwthread-cpus -mca btl vader,self $LMP_MPI -var r 1000 -in in.rhodo -sf omp
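
As an alternative, the same benchmark can be run in hybrid MPI + OpenMP mode with fewer ranks and two OpenMP threads per rank; the rank/thread split below is an illustrative sketch, not a tuned configuration.

# Illustrative hybrid run: 128 MPI ranks, 2 OpenMP threads per rank (one rank per physical core, SMT ON)
$ export OMP_NUM_THREADS=2
$ mpirun -np 128 --map-by core --bind-to core -mca btl vader,self $LMP_MPI -var r 1000 -in in.rhodo -sf omp -pk omp 2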

Running LAMMPS on AMD 3rd Gen EPYC Processors

The following steps are for a system with two AMD EPYC 7763 Series Processors, 512 GB of memory, SMT OFF, and the "USER-OMP" package.
Setting Environment
# Format for loading LAMMPS built with AOCC
$ spack load lammps@<Version Number> %aocc@<Version Number>
# Example: Load LAMMPS-20200721 built with AOCC-3.1.0 into the environment
$ spack load lammps@20200721 %aocc@3.1.0
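
If several LAMMPS builds are installed (for example with different AOCC versions), you can list them with their hashes and variants and load a specific build by hash; the hash below is a placeholder.

# List installed LAMMPS specs with hashes and variants
$ spack find -lv lammps
# Load a specific build by its hash (placeholder hash)
$ spack load /abcdefg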


Obtaining Benchmarks
# Download the LAMMPS source tarball, which includes the input data sets.
$ wget https://lammps.sandia.gov/tars/lammps-21Jul20.tar.gz
$ tar -xvf lammps-21Jul20.tar.gz
$ cd lammps-21Jul20/bench/

Run command

Use proper binding with the mpirun command. The command and binding below work with SMT ON on AMD EPYC 7763 Series Processors.

RHODO: Single Node
$ export LMP_MPI=/path_to_lammps_installed_folder/bin/lmp
$ mpirun -np 256 --map-by hwthread -use-hwthread-cpus -mca btl vader,self $LMP_MPI -var r 1000 -in in.rhodo -sf omp
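
Note that -np 256 with --map-by hwthread assumes 256 hardware threads, i.e., SMT enabled. If SMT is disabled, as stated above for this system, a per-physical-core binding along the lines of the following sketch (untuned, one rank per core on 2 x 64 cores) may be used instead.

# Illustrative SMT OFF run: one MPI rank per physical core (128 cores total)
$ export OMP_NUM_THREADS=1
$ mpirun -np 128 --map-by core --bind-to core -mca btl vader,self $LMP_MPI -var r 1000 -in in.rhodo -sf omp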