Environment Modules
A variety of software packages, compilers and libraries are installed, each of which requires different environment variables and paths. This is handled via environment modules: at any time a module can be loaded or unloaded, and the user's environment is automatically updated so that the desired software and libraries can be used.
Info
If the user requires a module which does not exist, please contact support@archie-west.ac.uk and the ARCHIE-WeSt Support Team will install it.
Tip: modules should be loaded from within job scripts
We recommend that module load commands are included in job scripts rather than added to the user's .bashrc file.
Users should load specific modules from within their job scripts. That way, if a job script is preserved along with the data from a job, it is obvious which version of the software was used for that run.
If the user wishes to load a default set of modules from their .bashrc file, then any module load commands in a job script should be preceded by "module purge".
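As a minimal sketch of this pattern (the module name here is illustrative, not a guaranteed installed version), a job script clears any inherited modules before loading exactly what the run needs:

```shell
#!/bin/bash
# Remove any modules inherited from .bashrc or the submitting terminal,
# so the job's environment depends only on this script
module purge
# Load the specific version used for this run
# (gromacs/intel-2020.4/2021.2 is an illustrative module name)
module load gromacs/intel-2020.4/2021.2
```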
Basic Commands
Modules can be loaded from the terminal, or module load commands can be added to job scripts. In the first case, the modules are only effective for jobs launched from the same terminal session. Loading modules from a job script is preferable because it ensures that the appropriate modules are loaded at runtime. Note that the order in which modules are loaded can matter in some cases; this should be tested from the terminal before the commands are added to a job script.
Command | Action |
---|---|
module avail | lists available modules |
module load | loads the specified module |
module list | lists loaded modules |
module rm | removes the specified module |
module purge | removes all loaded modules |
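A typical terminal session might combine these commands as follows (the module name is taken from the ansys listing further below and is illustrative):

```shell
module avail            # list what is installed
module load ansys/22.2  # load a specific version
module list             # confirm what is currently loaded
module rm ansys/22.2    # remove that one module
module purge            # or remove all loaded modules in one go
```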
Examples
Listing all modules
Example
The command:
module avail
will list all available modules.
-------------------------------- /opt/software/modules --------------------------------------
R/gcc-8.5.0/4.1.2 comsol/6.0 julia/gcc-8.5.0/1.7.1
R/intel-2018.2/3.5.0 comsol/6.1 lammps/intel-2018.2/22Aug18
R/intel-2020.4/3.6.1 cp2k/intel-2020.4/8.2.0_psmp lammps/intel-2019.3/5Jun19
RStudio/2022.02.1 crest/intel-2020.4/2.12 lammps/intel-2020.4/22Aug2018
SU2/intel-2019.3/7.0.5 dftb+/intel-2020.4/21.1 lammps/intel-2020.4/23Jun2022
UCNS3D/intel-2019.3/20200616 dftb+/intel-2020.4/22.1 lammps/intel-2020.4/29Sep2021
Listing all module variations for a particular package
Example
The command:
module avail ansys
will list all available versions of ansys.
-------------------------------------------- /opt/software/modules --------------------------
ansys/21.1 ansys/22.1 ansys/22.2 (D)
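The (D) in the listing marks the default version. Assuming Lmod-style behaviour (an assumption, based on the output format shown in these examples), loading the package without specifying a version selects that default:

```shell
# With no version given, the default (D) version is loaded,
# here ansys/22.2 (assumes Lmod-style default resolution)
module load ansys
module list
```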
Loading and Unloading Modules
Example
The command:
module load namd/intel-2020.4/2.15_multicore
would load the NAMD software ver. 2.15 (multicore version, compiled with the Intel compiler version 2020.4) while the command:
module rm namd/intel-2020.4/2.15_multicore
would unload this module.
Unloading All Modules
Example
To unload all loaded modules:
module purge
then the command:
module list
would give:
No Modulefiles Currently Loaded
Replacing Package Version
Example
If the user has already loaded ansys ver. 21.1, the command:
module list
would list the module as ansys/21.1
Currently Loaded Modules:
1) ansys/21.1
To replace it with a more recent version, the user can use the module swap command:
module swap ansys/21.1 ansys/22.2
This will report:
The following have been reloaded with a version change:
1) ansys/21.1 => ansys/22.2
Now the command:
module list
would list the module as ansys/22.2
Currently Loaded Modules:
1) ansys/22.2
Job script module commands
Module commands can be inserted within job scripts, and this is the recommended method for loading modules for jobs.
However, if a user has a default set of modules loaded from their .bashrc file, or from the terminal from which they submit their job, then "module purge" should be used as in the example below:
#!/bin/bash
#======================================================
#
# Job script for running OpenFOAM on a single node
#
#======================================================
#======================================================
# Propagate environment variables to the compute node
#SBATCH --export=ALL
#
# Run in the standard partition (queue)
#SBATCH --partition=standard
#
# Specify project account
#SBATCH --account=testing
#
# No. of tasks required (max. of 40)
#SBATCH --ntasks=40
#
# Ensure the node is not shared with another job
#SBATCH --exclusive
#
# Specify (hard) runtime (HH:MM:SS)
#SBATCH --time=01:00:00
#
# Job name
#SBATCH --job-name=openfoam_test
#
# Output file
#SBATCH --output=slurm-%j.out
#======================================================
module purge
module load openfoam/intel-2018.2/v1712
#=========================================================
# Prologue script to record job details
#=========================================================
/opt/software/scripts/job_prologue.sh
#----------------------------------------------------------
mpirun -np $SLURM_NPROCS dsmcFoam -parallel
#=========================================================
# Epilogue script to record job endtime and runtime
#=========================================================
/opt/software/scripts/job_epilogue.sh
#----------------------------------------------------------