
Example Slurm Job Scripts

This page shows some example job scripts for various types of jobs, both serial and parallel.

Note that the job scripts call "prologue" and "epilogue" scripts which simply perform some housekeeping and insert useful information into the slurm-JOBID.out file.

Tip

Copies of the job scripts below can be found on ARCHIE at /opt/software/job-scripts
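
For example, to copy the generic serial script, adapt it for your own project account and program, and submit it to the queue (a minimal sketch using standard Slurm commands):

# copy a template job script to your working directory
cp /opt/software/job-scripts/generic-slurm-serial.sh .

# edit the account, runtime and program line, then submit
sbatch generic-slurm-serial.sh

# check the state of your jobs (R = running, PD = pending)
squeue -u $USER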

Serial (single core) Job Examples

Generic Serial job

/opt/software/job-scripts/generic-slurm-serial.sh

 #!/bin/bash

 # Propagate environment variables to the compute node
 #SBATCH --export=ALL
 #
 # Run in the standard partition (queue)
 #SBATCH --partition=standard
 #
 # Specify project account
 #SBATCH --account=testing
 #
 # No. of tasks required (1 for a serial job)
 #SBATCH --ntasks=1
 #
 # Specify (hard) runtime (HH:MM:SS)
 #SBATCH --time=00:20:00
 #
 # Job name
 #SBATCH --job-name=serial_test

 #=========================================================
 # Prologue script to record job details
 #=========================================================
 /opt/software/scripts/job_prologue.sh 
 #----------------------------------------------------------

 # Modify the line below to run your program
 ./myprogram

 #=========================================================
 # Epilogue script to record job endtime and runtime
 #=========================================================
 /opt/software/scripts/job_epilogue.sh 
 #----------------------------------------------------------

Matlab

/opt/software/job-scripts/matlab-slurm-serial.sh

 #!/bin/bash

 #======================================================
 #
 # Job script for running matlab on a single core (serial)
 #
 #======================================================

 #======================================================
 # Propagate environment variables to the compute node
 #SBATCH --export=ALL
 #
 # Run in the standard partition (queue)
 #SBATCH --partition=standard
 #
 # Specify project account
 #SBATCH --account=testing
 #
 # No. of tasks required (1 for a serial job)
 #SBATCH --ntasks=1
 #
 # Specify (hard) runtime (HH:MM:SS)
 #SBATCH --time=00:10:00
 #
 # Job name
 #SBATCH --job-name=matlab_test
 #
 # Output file
 #SBATCH --output=slurm-%j.out
 #======================================================

 module purge
 module load matlab/R2018a

 #======================================================
 # Prologue script to record job details
 #======================================================
 /opt/software/scripts/job_prologue.sh  
 #------------------------------------------------------

 # necessary for housekeeping of parallel matlab jobs
 export MATLAB_PREFDIR=~/.matlab/slurm_jobs/$SLURM_JOB_ID/prefs

 matlab -nodisplay -nodesktop -singleCompThread -r "parallel_blackjack;exit"

 # clean-up
 rm -rf ~/.matlab/slurm_jobs/$SLURM_JOB_ID

 #======================================================
 # Epilogue script to record job endtime and runtime
 #======================================================
 /opt/software/scripts/job_epilogue.sh 
 #------------------------------------------------------

-singleCompThread

If the -singleCompThread option is not supplied, Matlab will run in parallel and use all available CPU cores.
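
To check how many computational threads Matlab will use, a one-line test along these lines can be run inside a job script (a minimal sketch, assuming the matlab/R2018a module used above; maxNumCompThreads reports the thread count):

module load matlab/R2018a

# prints 1 when -singleCompThread is in effect, otherwise the number of cores Matlab can see
matlab -nodisplay -nodesktop -singleCompThread -r "disp(maxNumCompThreads); exit"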

Parallel Job Examples

Generic single-node (exclusive) parallel MPI job

/opt/software/job-scripts/generic-slurm-singlemode-exclusive.sh

 #!/bin/bash

 #======================================================
 #
 # Job script for running a parallel job on a single node
 #
 #======================================================

 #======================================================
 # Propagate environment variables to the compute node
 #SBATCH --export=ALL
 #
 # Run in the standard partition (queue)
 #SBATCH --partition=standard
 #
 # Specify project account
 #SBATCH --account=testing
 #
 # No. of tasks required (max. of 40)
 #SBATCH --ntasks=40
 #
 # Ensure the node is not shared with another job
 #SBATCH --exclusive
 #
 # Specify (hard) runtime (HH:MM:SS)
 #SBATCH --time=01:00:00
 #
 # Job name
 #SBATCH --job-name=openfoam_test
 #
 # Output file
 #SBATCH --output=slurm-%j.out
 #======================================================

 module purge

 # choose which version to load (foss/2018a contains the GCC 6.4.0 toolchain & OpenMPI 2.1.2)
 module load foss/2018a

 #=========================================================
 # Prologue script to record job details
 #=========================================================
 /opt/software/scripts/job_prologue.sh 
 #----------------------------------------------------------

 # Modify the line below to run your program
 mpirun -np $SLURM_NPROCS myprogram.exe

 #=========================================================
 # Epilogue script to record job endtime and runtime
 #=========================================================
 /opt/software/scripts/job_epilogue.sh 
 #----------------------------------------------------------

Matlab Parallel (single node)

/opt/software/job-scripts/matlab-slurm-singlenode-exclusive.sh

#!/bin/bash

#======================================================
#
# Job script for running matlab in parallel on a single node
#
#======================================================

#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=testing
#
# No. of tasks required (max. of 40)
#SBATCH --ntasks=40
#
# Ensure the node is not shared with another job
#SBATCH --exclusive
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=00:10:00
#
# Job name
#SBATCH --job-name=matlab_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================

module purge
module load matlab/R2018a

#======================================================
# Prologue script to record job details
#======================================================
/opt/software/scripts/job_prologue.sh  
#------------------------------------------------------

# necessary for housekeeping of parallel matlab jobs
export MATLAB_PREFDIR=~/.matlab/slurm_jobs/$SLURM_JOB_ID/prefs

matlab -nodisplay -nodesktop  -r "parallel_blackjack;exit"

# clean-up
rm -rf ~/.matlab/slurm_jobs/$SLURM_JOB_ID

#======================================================
# Epilogue script to record job endtime and runtime
#======================================================
/opt/software/scripts/job_epilogue.sh 
#------------------------------------------------------

Info

Note the --exclusive option above. By default, Matlab will utilise all available cores if you rely on Matlab's multithreaded libraries. If you are explicitly parallelising using parfor, then you can control the number of workers yourself (in which case you may need to remove --exclusive), as sketched below.
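
A minimal sketch of such a parfor job is shown below. The script name my_parfor_script.m and the worker count of 10 are placeholders rather than anything provided on ARCHIE; the pool is sized from the Slurm allocation so the two always match:

#!/bin/bash
#SBATCH --export=ALL
#SBATCH --partition=standard
#SBATCH --account=testing
#
# Request only the workers your parfor loop needs (placeholder value)
#SBATCH --ntasks=10
#SBATCH --time=01:00:00
#SBATCH --job-name=matlab_parfor

module purge
module load matlab/R2018a

# necessary for housekeeping of parallel matlab jobs
export MATLAB_PREFDIR=~/.matlab/slurm_jobs/$SLURM_JOB_ID/prefs

# open a pool with one worker per allocated task, then run the parfor script
matlab -nodisplay -nodesktop -r "parpool('local', str2num(getenv('SLURM_NTASKS'))); my_parfor_script; exit"

# clean-up
rm -rf ~/.matlab/slurm_jobs/$SLURM_JOB_ID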

NAMD

/opt/software/job-scripts/namd-slurm-multinode-exclusive.sh

#!/bin/bash

#======================================================
#
# Job script for running on multiple nodes
#
#======================================================

#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=testing
#
# No. of tasks required (max. of 160)
#SBATCH --ntasks=80
# This job will run on 2 nodes
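# (80 tasks / 40 cores per node = 2 nodes)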
#
# Distribute processes in round-robin fashion for load balancing
#SBATCH --distribution=cyclic
#
# Ensure the node is not shared with another job
#SBATCH --exclusive
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=namd_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================

module purge
module load namd/intel-2016.4/2.12_mpi

#======================================================
# Prologue script to record job details
#======================================================
/opt/software/scripts/job_prologue.sh  
#------------------------------------------------------

mpirun -np $SLURM_NTASKS namd2 stmv.conf

#======================================================
# Epilogue script to record job endtime and runtime
#======================================================
/opt/software/scripts/job_epilogue.sh 
#------------------------------------------------------

OpenFOAM

/opt/software/job-scripts/openfoam-slurm-singlenode-exclusive.sh

#!/bin/bash

#======================================================
#
# Job script for running OpenFOAM on a single node
#
#======================================================

#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=testing
#
# No. of tasks required (max. of 40)
#SBATCH --ntasks=40
#
# Ensure the node is not shared with another job
#SBATCH --exclusive
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=openfoam_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================

module purge

# choose which version to load 
#module load openfoam/gcc-4.8.5/extend-3.2
#module load openfoam/gcc-4.8.5/extend-4.0
module load openfoam/intel-2018.2/v1712

#=========================================================
# Prologue script to record job details
#=========================================================
/opt/software/scripts/job_prologue.sh 
#----------------------------------------------------------

mpirun -np $SLURM_NPROCS dsmcFoam -parallel

#=========================================================
# Epilogue script to record job endtime and runtime
#=========================================================
/opt/software/scripts/job_epilogue.sh 
#----------------------------------------------------------

StarCCM+

/opt/software/job-scripts/starccm-slurm-multinode-exclusive.sh

#!/bin/bash

#=================================================================================
#
# Job script for running StarCCM+ on multiple nodes
#
#=================================================================================

# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=testing
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# No. of tasks required
#SBATCH --ntasks=160
#
# Ensure the node is not shared with another job
##SBATCH --exclusive
#
# Specify runtime (hard) (HH:MM:SS)
#SBATCH --time=04:00:00
#
# Job name
#SBATCH --job-name=starccm_bench
#
# Output file
#SBATCH --output=slurm-%j.out

module purge 
module load star-ccm/13.02.013-r8

#=========================================================
# Prologue script to record job details
#=========================================================
/opt/software/scripts/job_prologue.sh 
#----------------------------------------------------------

# Following line is necessary for multinode jobs
srun hostname -s > hosts.$SLURM_JOB_ID

starccm+ -rsh /usr/bin/ssh  -np $SLURM_NPROCS  -mpi intel -machinefile hosts.$SLURM_JOB_ID -batch \
          -power -podkey dXXXXXXXXXXXXXXXg -licpath 1999@flex.cd-adapco.com RunStarMacro.java lemans_poly_17m.amg.sim

#=========================================================
# Epilogue script to record job endtime and runtime
#=========================================================
/opt/software/scripts/job_epilogue.sh 
#----------------------------------------------------------

Fluent

/opt/software/job-scripts/fluent-slurm-multinode-exclusive.sh

#!/bin/bash

#======================================================
#
# Job script for running Fluent on multiple nodes
#
#======================================================

#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=testing
#
# No. of tasks required
#SBATCH --ntasks=80
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# Ensure the node is not shared with another job
#SBATCH --exclusive
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=fluent_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================

module purge
module load ansys/18.1

export I_MPI_FABRICS=shm:tmi
export TMI_CONFIG=/opt/software/ansys_inc/v181/commonfiles/MPI/Intel/5.1.3.223/linx64/etc/tmi.conf
export LD_LIBRARY_PATH=/usr/lib64/psm2-compat:$LD_LIBRARY_PATH
export I_MPI_FALLBACK=no


#======================================================
# Prologue script to record job details
#======================================================
/opt/software/scripts/job_prologue.sh  
#------------------------------------------------------

# Build a host file listing the allocated nodes (one entry per node) for Fluent's -cnf option
srun hostname -s | sort | uniq > hosts.$SLURM_JOB_ID

# Initiating Fluent and reading input journal file

fluent 3d -t$SLURM_NTASKS -pib.infinipath -mpi=intel -ssh -slurm -cnf=hosts.$SLURM_JOB_ID -g -i example_1_fluent_input.txt

#======================================================
# Epilogue script to record job endtime and runtime
#======================================================
/opt/software/scripts/job_epilogue.sh 
#------------------------------------------------------

Ansys (mechanical)

/opt/software/job-scripts/ansys-slurm-singlenode-standard.sh

#!/bin/bash

#================================================================
#
# Job script for running ANSYS on a single standard node (shared)
#
#================================================================

#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=testing
#
# No. of tasks required (max. of 40)
#SBATCH --ntasks=20
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=ansys_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================

module purge
module load ansys/18.1


#======================================================
# Prologue script to record job details
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------


ansys181 -np $SLURM_NTASKS -b nolist -j ansys_slurm -i V16sp-5.dat

#======================================================
# Epilogue script to record job endtime and runtime
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------

High Memory Jobs

Generic parallel MPI job

/opt/software/job-scripts/generic-slurm-bigmem.sh

#!/bin/bash

#======================================================
#
# Job script for running a parallel job on a big memory node
#
#======================================================

#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the big memory partition (queue)
#SBATCH --partition=bigmem
#
# Specify project account
#SBATCH --account=testing
#
# No. of tasks required (max. of 80)
#SBATCH --ntasks=20
#   
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=bigmem_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================

module purge

# choose which version to load (foss/2018a contains the GCC 6.4.0 toolchain & OpenMPI 2.1.2)
module load foss/2018a

#=========================================================
# Prologue script to record job details
#=========================================================
/opt/software/scripts/job_prologue.sh 
#----------------------------------------------------------

# Modify the line below to run your program
mpirun -np $SLURM_NPROCS myprogram.exe

#=========================================================
# Epilogue script to record job endtime and runtime
#=========================================================
/opt/software/scripts/job_epilogue.sh 
#----------------------------------------------------------

Matlab

/opt/software/job-scripts/matlab-slurm-bigmem.sh

#!/bin/bash

#======================================================
#
# Job script for running matlab in parallel on a big memory node
#
#======================================================

#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the big memory partition (queue)
#SBATCH --partition=bigmem
#
# Specify project account
#SBATCH --account=testing
#
# Ensure the node is not shared with another job
#SBATCH --exclusive
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=matlab_bigmem
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================

module purge
module load matlab/R2018a

#======================================================
# Prologue script to record job details
#======================================================
/opt/software/scripts/job_prologue.sh  
#------------------------------------------------------

# necessary for housekeeping of parallel matlab jobs
export MATLAB_PREFDIR=~/.matlab/slurm_jobs/$SLURM_JOB_ID/prefs

matlab -nodisplay -nodesktop  -r "parallel_blackjack;exit"

# clean-up
rm -rf ~/.matlab/slurm_jobs/$SLURM_JOB_ID

#======================================================
# Epilogue script to record job endtime and runtime
#======================================================
/opt/software/scripts/job_epilogue.sh 
#------------------------------------------------------

Info

Note the --exclusive option above. By default, Matlab will utilise all available cores if you rely on Matlab's multithreaded libraries. If you are explicitly parallelising using parfor, then you can control the number of workers yourself (in which case you may need to remove --exclusive); see the parfor sketch in the Matlab Parallel (single node) section above.

Ansys (mechanical)

/opt/software/job-scripts/ansys-slurm-bigmem.sh

#!/bin/bash

#======================================================
#
# Job script for running ANSYS on a single (shared) big memory node
#
#======================================================

#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the big memory partition (queue)
#SBATCH --partition=bigmem
#
# Specify project account
#SBATCH --account=testing
#
# No. of tasks required (max. of 80)
#SBATCH --ntasks=20
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=ansys_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================

module purge
module load ansys/18.1


#======================================================
# Prologue script to record job details
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------


ansys181 -np $SLURM_NTASKS -b nolist -j ansys_slurm -i V16sp-5.dat

#======================================================
# Epilogue script to record job endtime and runtime
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------