Example Slurm Job Scripts
This page shows some example job scripts for various types of jobs, both single-core and parallel (multiple cores on a single node and across multiple nodes).
Note that the job scripts call "prologue" and "epilogue" scripts, which perform some housekeeping and insert useful information into the slurm-JOBID.out file.
Tip
Copies of the job scripts below can be found on ARCHIE at /opt/software/job-scripts; use these as templates rather than cutting and pasting from this page.
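A typical workflow is to copy a template into your working directory, edit the account, time limit, and program line, then submit it with sbatch. A minimal sketch (the template path assumes ARCHIE; the guard simply skips the copy elsewhere):

```shell
# Copy a template job script and submit it (paths assume ARCHIE).
TEMPLATE=/opt/software/job-scripts/generic-singlecore.sh
if [ -f "$TEMPLATE" ]; then
    cp "$TEMPLATE" my-job.sh
    # Edit my-job.sh (account, time limit, program line), then submit:
    sbatch my-job.sh
fi
```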
Single-core Job Examples
Generic single-core job
/opt/software/job-scripts/generic-singlecore.sh
#!/bin/bash
#======================================================
#
# Job script for running a serial job on a single core
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (ntasks=1 for a single-core job)
#SBATCH --ntasks=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=00:20:00
#
# Job name
#SBATCH --job-name=singlecore_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
#Example module load command.
#Load any modules appropriate for your program's requirements
module load fftw/gcc-14.2.1/3.3.10
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
# Modify the line below to run your program
myprogram.exe
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Matlab single-core
/opt/software/job-scripts/matlab-singlecore.sh
#!/bin/bash
#======================================================
#
# Job script for running matlab on a single core
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=testing
#
# No. of tasks required (ntasks=1 for a single-core job)
#SBATCH --ntasks=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=00:10:00
#
# Job name
#SBATCH --job-name=matlab_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load matlab/R2024b
#======================================================
# Prologue script to record job details
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
matlab -nodisplay -nodesktop -singleCompThread -r "parallel_blackjack;exit"
#======================================================
# Epilogue script to record job endtime and runtime
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
-singleCompThread
If the -singleCompThread option is not supplied, Matlab will run in parallel and use all available CPU cores.
Parallel Job Examples
Generic single-node
This example is for a generic parallel program that uses MPI, running on a single node.
/opt/software/job-scripts/generic-parallel-singlenode.sh
#!/bin/bash
#======================================================
#
# Job script for running a parallel job on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# Run the job on one node, all cores on the same node (full node)
#SBATCH --ntasks=40 --nodes=1
#
# Run the job on a half of one node, all cores on the same node
##SBATCH --ntasks=20 --nodes=1
#
# Note: ##SBATCH (two hashes) means the line is commented out.
# Only one of the #SBATCH --ntasks= lines above should be active.
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=00:20:00
#
# Job name
#SBATCH --job-name=parallel_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
#Example module load command.
#Load any modules appropriate for your program's requirements
module load openmpi/gcc-14.2.1/4.1.8
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
# Modify the line below to run your program
mpirun -np $SLURM_NTASKS myprogram.exe
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
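The ##SBATCH convention used above can be verified directly: Slurm only honours lines that begin exactly with #SBATCH, so doubling the hash disables a directive. A quick sketch using a throwaway file:

```shell
# Write a two-directive fragment; only the single-hash line is active.
cat > sbatch-demo.sh <<'EOF'
#SBATCH --ntasks=40 --nodes=1
##SBATCH --ntasks=20 --nodes=1
EOF
grep -c '^#SBATCH' sbatch-demo.sh    # counts active directives only
```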
Generic multi-node
This example is for a generic parallel program that uses MPI, running on multiple nodes.
/opt/software/job-scripts/generic-parallel-multinode.sh
#!/bin/bash
#======================================================
#
# Job script for running a parallel job on multiple cores across multiple nodes (shared)
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# No. of tasks required (max of 80), cores might be spread across various nodes (nodes will be shared)
#SBATCH --ntasks=80
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=00:20:00
#
# Job name
#SBATCH --job-name=parallel_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
#Example module load command.
#Load any modules appropriate for your program's requirements
module load openmpi/gcc-14.2.1/4.1.8
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
# Modify the line below to run your program
mpirun -np $SLURM_NTASKS myprogram.exe
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Abaqus single-node (mp_mode=threads)
/opt/software/job-scripts/abaqus-parallel-singlenode-threads.sh
#!/bin/bash
#======================================================
#
# Job script for running ABAQUS on multiple cores (shared)
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of cores required (max of 40), all on the same node (one task, multiple threads)
#SBATCH --ntasks=1 --cpus-per-task=20
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=abaqus_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load abaqus/2020
unset SLURM_GTIDS
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
abaqus job=abaqus input=abaqus_input.inp cpus=$SLURM_CPUS_PER_TASK mp_mode=threads interactive
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Abaqus single-node (mp_mode=mpi)
/opt/software/job-scripts/abaqus-parallel-singlenode-mpi.sh
#!/bin/bash
#======================================================
#
# Job script for running ABAQUS on multiple cores (shared)
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 40), all cores on the same node
#SBATCH --ntasks=20 --nodes=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=abaqus_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load abaqus/2020
unset SLURM_GTIDS
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
abaqus job=abaqus input=abaqus_input.inp cpus=$SLURM_NTASKS mp_mode=mpi interactive
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Ansys single-node
/opt/software/job-scripts/ansys-parallel-singlenode.sh
#!/bin/bash
#================================================================
#
# Job script for running ANSYS on multiple cores (single node)
#
#================================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 40), cores all on the same node
#SBATCH --ntasks=20 --nodes=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=ansys_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load ansys/24.2
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
ansys242 -np $SLURM_NTASKS -b nolist -j ansys_slurm -i ansys-test.dat
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Fluent single-node
/opt/software/job-scripts/fluent-parallel-singlenode.sh
#!/bin/bash
#======================================================
#
# Job script for running FLUENT on multiple cores on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 40), all cores on the same node
#SBATCH --ntasks=40 --nodes=1
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=fluent_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load ansys/25.2
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
#Initiating Fluent and reading input journal file
fluent 3d -t$SLURM_NTASKS -ssh -slurm -g -i fluent-test.txt
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Fluent multi-node
/opt/software/job-scripts/fluent-parallel-multinode.sh
#!/bin/bash
#======================================================
#
# Job script for running FLUENT on multiple cores across multiple nodes (shared)
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 80), cores might be spread across various nodes (nodes will be shared)
#SBATCH --ntasks=80
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=fluent_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load ansys/25.2
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
srun --mpi=pmi2 hostname -s | sort > hosts.$SLURM_JOB_ID
#Initiating Fluent and reading input journal file
fluent 3d -t$SLURM_NTASKS -pib.infinipath -mpi=intel -ssh -slurm -cnf=hosts.$SLURM_JOB_ID -g -i fluent-test.txt
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
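The srun line above writes one short hostname per allocated task, sorted, to hosts.$SLURM_JOB_ID, and Fluent's -cnf option reads that machine file. Outside Slurm the file format can be mimicked like this (the node names are made up):

```shell
# A machine file is simply one short hostname per task, sorted.
printf '%s\n' node002 node001 node002 node001 | sort > hosts.demo
cat hosts.demo
```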
Gaussian single-node
/opt/software/job-scripts/gaussian-parallel-singlenode.sh
#!/bin/bash
#======================================================
#
# Job script for running GAUSSIAN on multiple cores on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 40), all cores on the same node
# The %CPU variable in the input file should be the same as ntasks
#SBATCH --ntasks=20 --nodes=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=gaussian_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load gaussian/g16
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
#uncomment the next line to override the default behaviour of storing temporary files in /tmp
#export GAUSS_SCRDIR=$SLURM_SUBMIT_DIR
g16 gaussian_test.com
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Gromacs single-node
/opt/software/job-scripts/gromacs-parallel-singlenode.sh
#!/bin/bash
#======================================================
#
# Job script for running GROMACS on multiple cores on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 40), all cores on the same node
#SBATCH --ntasks=40 --nodes=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=gromacs_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load gromacs/gcc-14.2.1/2025.3
export OMP_NUM_THREADS=1
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
mpirun -np $SLURM_NTASKS gmx_mpi mdrun -s gromacs-test.tpr
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Gromacs multi-node
/opt/software/job-scripts/gromacs-parallel-multinode.sh
#!/bin/bash
#======================================================
#
# Job script for running GROMACS on multiple cores across multiple nodes (shared)
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 80), cores might be spread across various nodes (nodes will be shared)
#SBATCH --ntasks=80
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=gromacs_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load gromacs/gcc-14.2.1/2025.3
export OMP_NUM_THREADS=1
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
mpirun -np $SLURM_NTASKS gmx_mpi mdrun -s gromacs-test.tpr
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Matlab single-node
/opt/software/job-scripts/matlab-parallel-singlenode.sh
#!/bin/bash
#======================================================
#
# Job script for running MATLAB on multiple cores on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# No. of tasks required (max. of 40), all cores on the same node (full node)
#SBATCH --ntasks=40 --nodes=1
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=00:10:00
#
# Job name
#SBATCH --job-name=matlab_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load matlab/R2024b
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
# necessary for housekeeping of parallel matlab jobs
export MATLAB_PREFDIR=~/.matlab/slurm_jobs/$SLURM_JOB_ID/prefs
matlab -nodisplay -nodesktop -r "parallel_blackjack;exit"
# clean-up
rm -rf ~/.matlab/slurm_jobs/$SLURM_JOB_ID
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Info
Note that the script above requests a full node (--ntasks=40 --nodes=1). By default, Matlab will utilise all available cores if you rely on Matlab's multithreaded libraries. If you are explicitly parallelising using parfor, you can control the number of workers and may wish to request fewer cores instead of a full node.
NAMD multicore (single node)
/opt/software/job-scripts/namd-parallel-multicore.sh
#!/bin/bash
#======================================================
#
# Job script for running NAMD on multiple cores on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of cores required (between 1 and 40), all cores on the same node
#SBATCH --ntasks=1 --cpus-per-task=40
#
# Distribute processes in round-robin fashion for load balancing
#SBATCH --distribution=cyclic
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=namd_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load namd/v3_2025-10-14
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
#note: the +p value on the execution line must match --cpus-per-task
namd3 +p40 namd-test.conf > namd-test.$SLURM_JOB_ID-out
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
OpenFOAM multi-node
/opt/software/job-scripts/openfoam-parallel-multinode.sh
#!/bin/bash
#======================================================
#
# Job script for running OpenFOAM on multiple cores (shared)
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max of 80), cores might be spread across various nodes
#SBATCH --ntasks=80
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=openfoam_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
# choose which version to load
module load openfoam/gcc-14.2.1/v2506
#=========================================================
# Prologue script to record job details
# Do not change the line below
#=========================================================
/opt/software/scripts/job_prologue.sh
#----------------------------------------------------------
mpirun -np $SLURM_NTASKS dsmcFoam -parallel
#=========================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#=========================================================
/opt/software/scripts/job_epilogue.sh
#----------------------------------------------------------
Star-CCM+ single-node
/opt/software/job-scripts/starccm-parallel-singlenode.sh
#!/bin/bash
#=================================================================================
#
# Job script for running StarCCM+ on multiple cores on a single node
#
#=================================================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# No. of tasks required (max of 40), all cores on the same node
#SBATCH --ntasks=40 --nodes=1
#
# Specify runtime (hard) (HH:MM:SS)
#SBATCH --time=04:00:00
#
# Job name
#SBATCH --job-name=starccm_test
#
# Output file
#SBATCH --output=slurm-%j.out
module purge
module load star-ccm/16.04.007-r8
#=========================================================
# Prologue script to record job details
# Do not change the line below
#=========================================================
/opt/software/scripts/job_prologue.sh
#----------------------------------------------------------
# The file RunStarMacro.java needs to be in execution directory
starccm+ -rsh /usr/bin/ssh -np $SLURM_NTASKS -mpi intel -batch -power -podkey dXXXXXXXXXXXXXXXg \
-licpath 1999@flex.cd-adapco.com RunStarMacro.java $SLURM_SUBMIT_DIR/star-ccm-test.sim
#=========================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#=========================================================
/opt/software/scripts/job_epilogue.sh
#----------------------------------------------------------
Star-CCM+ multi-node
/opt/software/job-scripts/starccm-parallel-multinode.sh
#!/bin/bash
#=================================================================================
#
# Job script for running StarCCM+ on multiple cores across multiple nodes (shared)
#
#=================================================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=my-account-id
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# No. of tasks required (max of 80), cores might be spread across various nodes (nodes will be shared)
#SBATCH --ntasks=80
#
# Specify runtime (hard) (HH:MM:SS)
#SBATCH --time=04:00:00
#
# Job name
#SBATCH --job-name=starccm_test
#
# Output file
#SBATCH --output=slurm-%j.out
module purge
module load star-ccm/16.04.007-r8
#=========================================================
# Prologue script to record job details
# Do not change the line below
#=========================================================
/opt/software/scripts/job_prologue.sh
#----------------------------------------------------------
# Following line is necessary for multicore jobs
srun hostname -s | sort > hosts.$SLURM_JOB_ID
# The file RunStarMacro.java needs to be in execution directory
starccm+ -rsh /usr/bin/ssh -np $SLURM_NTASKS -mpi intel -machinefile $SLURM_SUBMIT_DIR/hosts.$SLURM_JOB_ID \
-batch -power -podkey dXXXXXXXXXXXXXXXg -licpath 1999@flex.cd-adapco.com RunStarMacro.java \
$SLURM_SUBMIT_DIR/star-ccm-test.sim
#=========================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#=========================================================
/opt/software/scripts/job_epilogue.sh
#----------------------------------------------------------
High Memory Jobs
Generic mpi job
/opt/software/job-scripts/generic-parallel-bigmem.sh
#!/bin/bash
#======================================================
#
# Job script for running a parallel job on a bigmem node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=bigmem
#
# Specify project account
#SBATCH --account=my-account-id
#
# No. of tasks required (max. of 80)
#SBATCH --ntasks=20
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=bigmem_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
#Load a module which provides mpi
module load openmpi/gcc-14.2.1/4.1.8
#=========================================================
# Prologue script to record job details
# Do not change the line below
#=========================================================
/opt/software/scripts/job_prologue.sh
#--------------------------------------------------------
# Modify the line below to run your program
mpirun -np $SLURM_NTASKS myprogram.exe
#=========================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#=========================================================
/opt/software/scripts/job_epilogue.sh
#----------------------------------------------------------
Ansys
/opt/software/job-scripts/ansys-parallel-bigmem.sh
#!/bin/bash
#======================================================
#
# Job script for running ANSYS on multiple cores (shared) of big memory node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the big memory partition (queue)
#SBATCH --partition=bigmem
#
# Specify project account
#SBATCH --account=my-account-id
#
# Distribute processes in round-robin fashion
#SBATCH --distribution=cyclic
#
# No. of tasks required (max of 80)
#SBATCH --ntasks=20
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=ansys_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load ansys/24.2
#======================================================
# Prologue script to record job details
# Do not change the line below
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
ansys242 -np $SLURM_NTASKS -b nolist -j ansys_slurm -i ansys-test.dat
#======================================================
# Epilogue script to record job endtime and runtime
# Do not change the line below
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Matlab
/opt/software/job-scripts/matlab-slurm-bigmem.sh
#!/bin/bash
#======================================================
#
# Job script for running matlab in parallel on a big memory node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the big memory partition (queue)
#SBATCH --partition=bigmem
#
# Specify project account
#SBATCH --account=testing
#
# Ensure the node is not shared with another job
#SBATCH --exclusive
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=matlab_bigmem
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load matlab/R2024b
#======================================================
# Prologue script to record job details
#======================================================
/opt/software/scripts/job_prologue.sh
#------------------------------------------------------
# necessary for housekeeping of parallel matlab jobs
export MATLAB_PREFDIR=~/.matlab/slurm_jobs/$SLURM_JOB_ID/prefs
matlab -nodisplay -nodesktop -r "parallel_blackjack;exit"
# clean-up
rm -rf ~/.matlab/slurm_jobs/$SLURM_JOB_ID
#======================================================
# Epilogue script to record job endtime and runtime
#======================================================
/opt/software/scripts/job_epilogue.sh
#------------------------------------------------------
Info
Note the --exclusive option above. By default, Matlab will utilise all available cores if you rely on Matlab's multithreaded libraries. If you are explicitly parallelising using parfor, you can control the number of workers, in which case you may wish to remove --exclusive.
GPU Examples
See examples on our GPU page.