Job array example

JOB ARRAYS

Job arrays are multiple jobs to be executed with identical or related parameters. Job arrays are submitted with -a <indices> or --array=<indices>. The indices specification identifies which array index values should be used. Multiple values may be specified using a comma-separated list and/or a range of values with a “-” separator: --array=0-15 or --array=0,6,16-32.

A step function can also be specified with a suffix containing a colon and a number. For example, --array=0-15:4 is equivalent to --array=0,4,8,12.
A maximum number of simultaneously running tasks from the job array may be specified using a “%” separator. For example, --array=0-15%4 will limit the number of simultaneously running tasks from this job array to 4. The minimum index value is 0; the maximum value is 499999.
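Since the step syntax simply selects every Nth index in the range, the set of task IDs it produces can be previewed with seq (seq is only an illustration here; the sbatch option itself does the selection):

```shell
# Preview the task IDs selected by --array=0-15:4
# seq FIRST INCREMENT LAST prints 0, 4, 8, 12 (one per line).
seq 0 4 15
```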

To receive mail alerts for each individual array task, --mail-type=ARRAY_TASKS should be added to the Slurm job script. Unless this option is specified, mail notifications on job BEGIN, END and FAIL apply to a job array as a whole rather than generating individual email messages for each task in the job array.
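In the job script, that looks like the directive below (ARRAY_TASKS is combined with the usual event list; which events you include is up to you):

```shell
#SBATCH --mail-type=ARRAY_TASKS,BEGIN,END,FAIL
```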

Below is an example submit script for submitting job arrays.

#!/bin/bash -l
# sbatch stops parsing directives at the first line that isn't a comment or whitespace
# SBATCH directives must be at the start of the line -- no indentation

# Name of the job
#SBATCH --job-name=sample_array_job

# Number of tasks per node (each array task gets its own allocation)
#SBATCH --ntasks-per-node=1

# Array job.  This example will create 25 tasks, but only allow at most 4 to run concurrently
#SBATCH --array=1-25%4

# Walltime (job duration)
#SBATCH --time=00:15:00

# Email notifications
#SBATCH --mail-type=BEGIN,END,FAIL

# Your commands go here.  Each of the jobs is identical apart from environment variable
# $SLURM_ARRAY_TASK_ID, which will take values in the range 1-25
# They are all independent, and may run on different nodes at different times.
# The $SLURM_ARRAY_TASK_ID variable can be used to construct parameters to programs, select data files etc.
#
# The default output files will contain both the Job ID and the array task ID, and so will be distinct.  If setting
# custom output files, you must be sure that array tasks don't all overwrite the same files.

echo "My SLURM_ARRAY_TASK_ID: $SLURM_ARRAY_TASK_ID"

sleep 300
hostname -s
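Each task sees its own value of SLURM_ARRAY_TASK_ID, and the default output file name combines the array job ID with the task ID. This can be simulated outside Slurm by setting the variables by hand (the values below are made up; Slurm exports them automatically in a real job):

```shell
# Simulated values -- in a real job Slurm sets these automatically.
SLURM_ARRAY_JOB_ID=12345
SLURM_ARRAY_TASK_ID=7
# Default output file name for this task: slurm-<array job ID>_<task ID>.out
echo "slurm-${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.out"
```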

Each job in the array will be allocated its own resources, possibly on different nodes. The variable $SLURM_ARRAY_TASK_ID will be different for each task, with values (in this example) 1-25, and can be used to construct arguments to the programs run as part of the job. One common pattern is to create a file with 25 sets of arguments in it, then use $SLURM_ARRAY_TASK_ID as a line index into the file. The $(sed ...) construct below returns a single line from the file.

For example:

arguments=/path/to/file/with/program/arguments  # 25-line file
myprogram $(sed -n -e "${SLURM_ARRAY_TASK_ID}p" "$arguments")
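To test the sed lookup without submitting a job, one can build a small arguments file and set SLURM_ARRAY_TASK_ID manually (the file name and contents here are purely illustrative):

```shell
# A 3-line stand-in for the real 25-line arguments file.
printf 'alpha 1\nbeta 2\ngamma 3\n' > test_args.txt
# Pretend we are array task 2.
SLURM_ARRAY_TASK_ID=2
# -n suppresses sed's default output; "2p" prints only line 2.
sed -n -e "${SLURM_ARRAY_TASK_ID}p" test_args.txt
```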