Example Slurm Job Scripts
This page shows some example job scripts for various types of jobs, both single-core and parallel (multiple cores on a single node and across multiple nodes).
Note that the job scripts call "prologue" and "epilogue" scripts, which simply perform some housekeeping and insert useful information into the slurm-JOBID.out file.
Tip
Copies of the job scripts below can be found on ARCHIE at /opt/software/job-scripts; use these as templates rather than cutting and pasting from this page.
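For example, to start from the generic single-core template you might copy it into your working directory, edit the account and program lines, and submit it with sbatch (my_job.sh is just an illustrative name):
# Copy the template into your working directory
cp /opt/software/job-scripts/generic-singlecore.sh my_job.sh
# Edit the account ID, runtime and program line, then submit the job
sbatch my_job.sh
# Check the job's state and position in the queue
squeue -u $USER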
Single-core Job Examples
Generic single-core job
/opt/software/job-scripts/generic-singlecore.sh
#!/bin/bash
#======================================================
#
# Job script for running a serial job on a single core
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (ntasks=1 for a single-core job)
#SBATCH --ntasks=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=00:20:00
#
# Job name
#SBATCH --job-name=singlecore_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
# Example module load command (foss/2018a contains the GCC 6.4.0 toolchain & OpenMPI 2.1.2)
module load foss/2018a
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
# Modify the line below to run your program
myprogram.exe
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Matlab single-core
/opt/software/job-scripts/matlab-singlecore.sh
#!/bin/bash
#======================================================
#
# Job script for running matlab on a single core
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=testing
#
# No. of tasks required (max. of 1)
#SBATCH --ntasks=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=00:10:00
#
# Job name
#SBATCH --job-name=matlab_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load matlab/R2021a
#======================================================
# Prologue script to record job details
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
matlab -nodisplay -nodesktop -singleCompThread -r "parallel_blackjack;exit"
#======================================================
# Epilogue script to record job endtime and runtime
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
-singleCompThread
If the -singleCompThread option is not supplied, Matlab will use its multithreaded libraries and attempt to use all available CPU cores, oversubscribing a single-core allocation.
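If you do want Matlab to use several cores on a single node, one possible approach (a sketch only, not one of the installed templates; my_script is a placeholder for your own .m file) is to request the cores explicitly in place of the --ntasks=1 line and cap Matlab's thread count to match:
# Request one task with several CPU cores for Matlab's built-in multithreading
#SBATCH --ntasks=1 --cpus-per-task=8
# Cap Matlab's computational threads at the number of allocated cores
matlab -nodisplay -nodesktop -r "maxNumCompThreads(str2num(getenv('SLURM_CPUS_PER_TASK'))); my_script; exit"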
Parallel Job Examples
Generic single-node
This example is for a generic parallel program that uses MPI, running on a single node.
/opt/software/job-scripts/generic-parallel-singlenode.sh
#!/bin/bash
#======================================================
#
# Job script for running a parallel job on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# Run the job on one node, all cores on the same node (full node)
#SBATCH --ntasks=40 --nodes=1
#
# Run the job on a half of one node, all cores on the same node
##SBATCH --ntasks=20 --nodes=1
#
# Note: ##SBATCH (two hashes) means the line is commented out.
# Only one of the #SBATCH --ntasks=... lines above should be active.
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=00:20:00
#
# Job name
#SBATCH --job-name=parallel_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
# Load a module which provides MPI (foss/2018a contains the GCC 6.4.0 toolchain & OpenMPI 2.1.2)
module load foss/2018a
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
# Modify the line below to run your program
mpirun -np $SLURM_NTASKS myprogram.exe
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Generic multi-node
This example is for a generic parallel program that uses MPI, running on multiple nodes.
/opt/software/job-scripts/generic-parallel-multinode.sh
#!/bin/bash
#======================================================
#
# Job script for running a parallel job on multiple cores across multiple nodes (shared)
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# No. of tasks required (max of 80), cores might be spread across various nodes (nodes will be shared)
#SBATCH --ntasks=80
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=00:20:00
#
# Job name
#SBATCH --job-name=parallel_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
# Load a module which provides MPI (foss/2018a contains the GCC 6.4.0 toolchain & OpenMPI 2.1.2)
module load foss/2018a
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
# Modify the line below to run your program
mpirun -np $SLURM_NTASKS myprogram.exe
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Abaqus single-node
/opt/software/job-scripts/abaqus-parallel-singlenode.sh
#!/bin/bash
#======================================================
#
# Job script for running ABAQUS on multiple cores (shared)
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 40), all cores on the same node
#SBATCH --ntasks=20 --nodes=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=abaqus_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load abaqus/2020
unset SLURM_GTIDS
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
abaqus job=abaqus input=abaqus_input.inp cpus=$SLURM_NTASKS mp_mode=threads interactive
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Ansys single-node
/opt/software/job-scripts/ansys-parallel-singlenode.sh
#!/bin/bash
#================================================================
#
# Job script for running ANSYS on multiple cores (single node)
#
#================================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 40), cores all on the same node
#SBATCH --ntasks=20 --nodes=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=ansys_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load ansys/20.2
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
ansys202 -np $SLURM_NTASKS -b nolist -j ansys_slurm -i ansys-test.dat
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Fluent single-node
/opt/software/job-scripts/fluent-parallel-singlenode.sh
#!/bin/bash
#======================================================
#
# Job script for running FLUENT on multiple cores on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 40), all cores on the same node
#SBATCH --ntasks=40 --nodes=1
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=fluent_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load ansys/20.2
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
srun hostname -s > hosts.$SLURM_JOB_ID
#Initiating Fluent and reading input journal file
fluent 3d -t$SLURM_NTASKS -ssh -slurm -g -i fluent-test.txt
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Fluent multi-node
/opt/software/job-scripts/fluent-parallel-multinode.sh
#!/bin/bash
#======================================================
#
# Job script for running FLUENT on multiple cores across multiple nodes (shared)
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 80), cores might be spread across various nodes (nodes will be shared)
#SBATCH --ntasks=80
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=fluent_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load ansys/20.2
export I_MPI_FABRICS=shm:tmi
export TMI_CONFIG=/opt/software/ansys_inc/v202/commonfiles/MPI/Intel/2018.3.222/linx64/etc/tmi.conf
export LD_LIBRARY_PATH=/usr/lib64/psm2-compat:$LD_LIBRARY_PATH
export I_MPI_FALLBACK=no
export FLUENT_AFFINITY=0
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
srun hostname -s > hosts.$SLURM_JOB_ID
#Initiating Fluent and reading input journal file
fluent 3d -t$SLURM_NTASKS -pib.infinipath -mpi=intel -ssh -slurm -cnf=hosts.$SLURM_JOB_ID -g -i fluent-test.txt
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Gaussian single-node
/opt/software/job-scripts/gaussian-parallel-singlenode.sh
#!/bin/bash
#======================================================
#
# Job script for running GAUSSIAN on multiple cores on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 40), all cores on the same node
# The %CPU variable in the input file should be the same as ntasks
#SBATCH --ntasks=40 --nodes=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=gaussian_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load gaussian/g16
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
# Override the default behaviour of storing temporary files in /tmp
# (comment out the next line to keep them in /tmp)
export GAUSS_SCRDIR=$SLURM_SUBMIT_DIR
g16 gaussian_test.com
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Gromacs single-node
/opt/software/job-scripts/gromacs-parallel-singlenode.sh
#!/bin/bash
#======================================================
#
# Job script for running GROMACS on multiple cores on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 40), all cores on the same node
#SBATCH --ntasks=40 --nodes=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=gromacs_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load gromacs/intel-2018.2/2020.3-single
export OMP_NUM_THREADS=1
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
mpirun -np $SLURM_NTASKS gmx_mpi mdrun -s gromacs-test.tpr
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Gromacs multi-node
/opt/software/job-scripts/gromacs-parallel-multinode.sh
#!/bin/bash
#======================================================
#
# Job script for running GROMACS on multiple cores across multiple nodes (shared)
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 80), cores might be spread across various nodes (nodes will be shared)
#SBATCH --ntasks=80
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=gromacs_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load gromacs/intel-2018.2/2020.3-single
export OMP_NUM_THREADS=1
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
mpirun -np $SLURM_NTASKS gmx_mpi mdrun -s gromacs-test.tpr
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Matlab single-node
/opt/software/job-scripts/matlab-parallel-singlenode.sh
#!/bin/bash
#======================================================
#
# Job script for running MATLAB on multiple cores on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# No. of tasks required (max. of 40), all cores on the same node (full node)
#SBATCH --ntasks=40 --nodes=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=00:10:00
#
# Job name
#SBATCH --job-name=matlab_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load matlab/R2021a
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
# necessary for housekeeping of parallel matlab jobs
export MATLAB_PREFDIR=~/.matlab/slurm_jobs/$SLURM_JOB_ID/prefs
matlab -nodisplay -nodesktop -r "parallel_blackjack;exit"
# clean-up
rm -rf ~/.matlab/slurm_jobs/$SLURM_JOB_ID
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Info
Note that the script above requests a full node (--ntasks=40 --nodes=1). By default, Matlab will utilise all available cores if you rely on Matlab's multithreaded libraries. If you are explicitly parallelising using parfor, you can control the number of workers yourself (in which case you may not need a full node).
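For example, if you are using parfor, one way (a sketch only; my_parfor_script is a placeholder for your own .m file) to tie the pool size to the Slurm allocation is:
# Open a parallel pool with one worker per allocated task, then run the parfor script
matlab -nodisplay -nodesktop -r "parpool('local', str2num(getenv('SLURM_NTASKS'))); my_parfor_script; exit"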
NAMD multi-node
/opt/software/job-scripts/namd-parallel-multinode.sh
#!/bin/bash
#======================================================
#
# Job script for running NAMD on multiple cores across multiple nodes (shared)
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=testing
#
# No. of tasks required (max of 80), cores might be spread across various nodes (nodes will be shared)
#SBATCH --ntasks=80
#
# Distribute processes in round-robin fashion for load balancing
#SBATCH --distribution=cyclic
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=namd_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load namd/intel-2016.4/2.12_mpi
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
mpirun -np $SLURM_NTASKS namd2 namd-test.conf
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
OpenFOAM multi-node
/opt/software/job-scripts/openfoam-parallel-multinode.sh
#!/bin/bash
#======================================================
#
# Job script for running OpenFOAM on multiple cores (shared)
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 80), cores might be spread across various nodes
#SBATCH --ntasks=80
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=openfoam_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
# choose which version to load
#module load openfoam/gcc-4.8.5/v2012
module load openfoam/intel-2018.2/v1812
#=========================================================
# Prologue script to record job details
# Do not change the line below
#=========================================================
/opt/software/scripts/job_prologue.sh
#----------------------------------------------------------
mpirun -np $SLURM_NTASKS dsmcFoam -parallel
#=========================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#=========================================================
/opt/software/scripts/job_epilogue.sh
#----------------------------------------------------------
Starccm+ single-node
/opt/software/job-scripts/starccm-parallel-singlenode.sh
#!/bin/bash
#=================================================================================
#
# Job script for running StarCCM+ on multiple cores on a single node
#
#=================================================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# No. of tasks required (max of 40), all cores on the same node
#SBATCH --ntasks=40 --nodes=1
#
# Specify runtime (hard) (HH:MM:SS)
#SBATCH --time=04:00:00
#
# Job name
#SBATCH --job-name=starccm_test
#
# Output file
#SBATCH --output=slurm-%j.out
module purge
module load star-ccm/16.04.007-r8
#=========================================================
# Prologue script to record job details
# Do not change the line below
#=========================================================
/opt/software/scripts/job_prologue.sh
#----------------------------------------------------------
# The file RunStarMacro.java needs to be in the execution directory
starccm+ -rsh /usr/bin/ssh -np $SLURM_NTASKS -mpi intel -batch -power -podkey dXXXXXXXXXXXXXXXg \
-licpath 1999@flex.cd-adapco.com RunStarMacro.java $SLURM_SUBMIT_DIR/star-ccm-test.sim
#=========================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#=========================================================
/opt/software/scripts/job_epilogue.sh
#----------------------------------------------------------
Starccm+ multi-node
/opt/software/job-scripts/starccm-parallel-multinode.sh
#!/bin/bash
#=================================================================================
#
# Job script for running StarCCM+ on multiple cores across multiple nodes (shared)
#
#=================================================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# No. of tasks required (max of 80), cores might be spread across various nodes (nodes will be shared)
#SBATCH --ntasks=80
#
# Specify runtime (hard) (HH:MM:SS)
#SBATCH --time=04:00:00
#
# Job name
#SBATCH --job-name=starccm_test
#
# Output file
#SBATCH --output=slurm-%j.out
module purge
module load star-ccm/16.04.007-r8
#=========================================================
# Prologue script to record job details
# Do not change the line below
#=========================================================
/opt/software/scripts/job_prologue.sh
#----------------------------------------------------------
# The following line is needed to build the machine file for a multi-node run
srun hostname -s | sort > hosts.$SLURM_JOB_ID
# The file RunStarMacro.java needs to be in the execution directory
starccm+ -rsh /usr/bin/ssh -np $SLURM_NTASKS -mpi intel -machinefile $SLURM_SUBMIT_DIR/hosts.$SLURM_JOB_ID \
-batch -power -podkey dXXXXXXXXXXXXXXXg -licpath 1999@flex.cd-adapco.com RunStarMacro.java \
$SLURM_SUBMIT_DIR/star-ccm-test.sim
#=========================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#=========================================================
/opt/software/scripts/job_epilogue.sh
#----------------------------------------------------------
High Memory Jobs
Generic mpi job
/opt/software/job-scripts/generic-parallel-bigmem.sh
#!/bin/bash
#======================================================
#
# Job script for running a parallel job on a bigmem node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the big memory partition (queue)
#SBATCH --partition=bigmem
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max. of 80)
#SBATCH --ntasks=20
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=bigmem_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
# Load a module which provides MPI (foss/2018a contains the GCC 6.4.0 toolchain & OpenMPI 2.1.2)
module load foss/2018a
#=========================================================
# Prologue script to record job details
# Do not change the line below
#=========================================================
/opt/software/scripts/job_prologue.sh
#----------------------------------------------------------
# Modify the line below to run your program
mpirun -np $SLURM_NTASKS myprogram.exe
#=========================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#=========================================================
/opt/software/scripts/job_epilogue.sh
#----------------------------------------------------------
Ansys
/opt/software/job-scripts/ansys-parallel-bigmem.sh
#!/bin/bash
#======================================================
#
# Job script for running ANSYS on multiple cores (shared) of big memory node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the big memory partition (queue)
#SBATCH --partition=bigmem
#
# Specify project account
#SBATCH --account=my-account-id
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# No. of tasks required (max of 80)
#SBATCH --ntasks=20
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=ansys_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load ansys/20.2
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
ansys202 -np $SLURM_NTASKS -b nolist -j ansys_slurm -i ansys-test.dat
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Matlab
/opt/software/job-scripts/matlab-slurm-bigmem.sh
#!/bin/bash
#======================================================
#
# Job script for running matlab in parallel on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the big memory partition (queue)
#SBATCH --partition=bigmem
#
# Specify project account
#SBATCH --account=testing
#
# Ensure the node is not shared with another job
#SBATCH --exclusive
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=matlab_bigmem
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load matlab/R2018a
#======================================================
# Prologue script to record job details
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
# necessary for housekeeping of parallel matlab jobs
export MATLAB_PREFDIR=~/.matlab/slurm_jobs/$SLURM_JOB_ID/prefs
matlab -nodisplay -nodesktop -r "parallel_blackjack;exit"
# clean-up
rm -rf ~/.matlab/slurm_jobs/$SLURM_JOB_ID
#======================================================
# Epilogue script to record job endtime and runtime
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Info
Note the --exclusive option above. By default, Matlab will utilise all available cores if you rely on Matlab's multithreaded libraries. If you are explicitly parallelising using parfor, you can control the number of workers yourself (in which case you may need to remove --exclusive).
GPU Examples
Any GPU
This example is for requesting any type of GPU.
The sample job script below is also available at: /opt/software/job-scripts/gpu-any.sh
#!/bin/bash
#======================================================
#
# Job script for running a parallel job on a single gpu node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the GPU partition (queue)
#SBATCH --partition=gpu
#
# Specify project account (replace as required)
#SBATCH --account=my-account-id
#
# Request any GPU
#SBATCH --gpus=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=gpu_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load nvidia/sdk/21.3
#=========================================================
# Prologue script to record job details
# Do not change the line below
#=========================================================
/opt/software/scripts/job_prologue.sh
#----------------------------------------------------------
# Modify the line below to run your program
python myprogram.py
#=========================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#=========================================================
/opt/software/scripts/job_epilogue.sh
#----------------------------------------------------------
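If you want to confirm which GPU the job was actually allocated, a simple check (a sketch; it assumes nvidia-smi is available on the GPU nodes) can be added before your program runs:
# List the GPU(s) visible to this job
nvidia-smi
# Slurm normally records the allocated GPU index in CUDA_VISIBLE_DEVICES
echo "Allocated GPU(s): $CUDA_VISIBLE_DEVICES"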
V100
This example is for requesting the allocation of an NVIDIA V100 GPU.
The sample job script below is also available at: /opt/software/job-scripts/gpu-V100.sh
#!/bin/bash
#======================================================
#
# Job script for running a parallel job on a single gpu node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the GPU partition (queue)
#SBATCH --partition=gpu
#
# Specify project account (replace as required)
#SBATCH --account=my-account-id
#
# Request a V100 GPU
#SBATCH --gres=gpu:V100
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=gpu_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load nvidia/sdk/21.3
#=========================================================
# Prologue script to record job details
# Do not change the line below
#=========================================================
/opt/software/scripts/job_prologue.sh
#----------------------------------------------------------
# Modify the line below to run your program
python myprogram.py
#=========================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#=========================================================
/opt/software/scripts/job_epilogue.sh
#----------------------------------------------------------
A100
This example is for requesting the allocation of an NVIDIA A100 GPU.
The sample job script below is also available at: /opt/software/job-scripts/gpu-A100.sh
#!/bin/bash
#======================================================
#
# Job script for running a parallel job on a single gpu node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the GPU partition (queue)
#SBATCH --partition=gpu
#
# Specify project account (replace as required)
#SBATCH --account=my-account-id
#
# Request an A100 GPU
#SBATCH --gres=gpu:A100
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=gpu_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load nvidia/sdk/21.3
#=========================================================
# Prologue script to record job details
# Do not change the line below
#=========================================================
/opt/software/scripts/job_prologue.sh
#----------------------------------------------------------
# Modify the line below to run your program
python myprogram.py
#=========================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#=========================================================
/opt/software/scripts/job_epilogue.sh
#----------------------------------------------------------