We use containers to deploy GROMACS, ensuring portability across different systems and operating systems. This knowledge base (KB) article is intended to help users quickly set up and run GROMACS on our HPC systems. For more comprehensive details on Apptainer, please see the Apptainer documentation.
This guide also assumes familiarity with requesting GPU resources on the Discovery cluster. For additional information on requesting GPUs, refer to the Requesting GPUs on Discovery KB.
Container Location on RC Systems: On RC systems, Apptainer containers (formerly known as Singularity containers) are stored at /optnfs/singularity.
To see which GROMACS containers are available, run:
ls -ltr /optnfs/singularity | grep -i gromacs
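The images available will change over time, so the line below is purely illustrative (owner, size, and date are made up), but you should see one entry per GROMACS image, such as gromacs_2023_3.sif:
-rwxr-xr-x 1 root root 1.1G Nov  3 2023 gromacs_2023_3.sif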
Launching a Shell in the Container's Environment: To access a container's environment, execute the following command:
apptainer shell --nv --bind /dartfs-hpc/scratch /optnfs/singularity/<gromacs.sif>
This command launches a shell inside the container with /dartfs-hpc/scratch available as a bind mount. Once inside, you can run gmx_gpu mdrun to start your simulation.
- apptainer -- invokes the container runtime
- shell -- opens an interactive shell inside the container
- --nv -- enables GPU support by exposing the host's NVIDIA drivers to the container
- --bind -- mounts the host directory containing your run data into the container
- /optnfs/singularity/<gromacs.sif> -- name and location of the container image file (for example, gromacs_2023_3.sif)
Here is how it might look in practice on Discovery, requesting one GPU from the gpuq partition:
[john@andes8 ~]$ srun --partition=gpuq --gres=gpu:1 --pty /bin/bash
[john@a02 ~]$ apptainer shell --nv --bind /dartfs-hpc/scratch /optnfs/singularity/<gromacs.sif>
Apptainer> gmx_gpu mdrun -s quench -o quench -e quench -c quench -v -pin on -gpu_id 0 -nb gpu -bonded gpu -ntomp 4 -pinoffset 0 -pinstride 1
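Before starting the mdrun, you can check from the Apptainer> prompt that the allocated GPU is visible inside the container. The nvidia-smi utility is passed through from the host by the --nv flag and should list the device you requested:
Apptainer> nvidia-smi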
Running the container in a Slurm job requires run instead of shell. A GROMACS command run this way at the command line might look like:
apptainer run --nv --bind /dartfs-hpc/scratch --bind /dartfs-hpc/rc/home/p/d18014p/test_jobs/testamp05 /optnfs/singularity/<gromacs.sif> gmx_gpu mdrun -s quench -o quench -e quench -c quench -v -pin on -gpu_id 0 -nb gpu -bonded gpu -ntomp 4 -pinoffset 0 -pinstride 1
Notice we are no longer passing shell; instead we pass run, followed by the command we want to execute inside the container: gmx_gpu mdrun -s quench -o quench -e quench -c quench -v -pin on -gpu_id 0 -nb gpu -bonded gpu -ntomp 4 -pinoffset 0 -pinstride 1.
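For readability, particularly in submit scripts, the same command can be split across lines with backslash continuations; this is equivalent to the single-line form above:
apptainer run --nv \
    --bind /dartfs-hpc/scratch \
    --bind /dartfs-hpc/rc/home/p/d18014p/test_jobs/testamp05 \
    /optnfs/singularity/<gromacs.sif> \
    gmx_gpu mdrun -s quench -o quench -e quench -c quench -v \
    -pin on -gpu_id 0 -nb gpu -bonded gpu -ntomp 4 -pinoffset 0 -pinstride 1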
When you are ready to submit a job to the scheduler, place the apptainer line at the bottom of your Slurm submit script, similar to the example below:
#!/bin/bash
# Name of the job
#SBATCH --job-name=gromacs_gpu_job
# Number of tasks per node, in this case one
#SBATCH --ntasks-per-node=1
# CPU cores for the OpenMP threads requested by mdrun (-ntomp 4)
#SBATCH --cpus-per-task=4
# Request the GPU partition
#SBATCH --partition=gpuq
# Request the GPU resources
#SBATCH --gres=gpu:1
# Walltime (job duration)
#SBATCH --time=5:00:00
# Email notifications
#SBATCH --mail-type=BEGIN,END,FAIL
apptainer run --nv --bind /dartfs-hpc/scratch --bind /dartfs-hpc/rc/home/p/d18014p/test_jobs/testamp05 /optnfs/singularity/<gromacs.sif> gmx_gpu mdrun -s quench -o quench -e quench -c quench -v -pin on -gpu_id 0 -nb gpu -bonded gpu -ntomp 4 -pinoffset 0 -pinstride 1
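Save the script (the file name gromacs_gpu.sh below is just an example), submit it with sbatch, and monitor it with squeue:
sbatch gromacs_gpu.sh
squeue -u $USER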