
For general advice on how to use SLURM, see SLURM usage. This page describes Odyssey's configuration.

Odyssey has two sorts of node: node-0-1 to node-0-15 have Sandy Bridge CPUs, and the rest have slightly newer Ivy Bridge CPUs. All nodes have 16 cores and 64 GB of RAM.

Odyssey has four 'partitions':

Partition   Who can use it                  Maximum time limit   Other limits
ALTHORPE    Members of the Althorpe group   168 hours            Runs only on Althorpe nodes
GREY        Members of the Grey group       168 hours            Runs only on Grey nodes
GREYSHORT   Members of the Grey group       4 hours              Runs only on node-0-23
CLUSTER     Anyone                          168 hours            Any node, but jobs will be pre-empted by jobs submitted to the other partitions

There is no default partition, so you must always pass the -p <PARTITIONNAME> flag to srun and sbatch, or else set the SLURM_PARTITION environment variable.
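As an illustration, a minimal job script selecting the CLUSTER partition might look like the sketch below (the job name, time limit, core count, and program name are all placeholders, not site requirements):

```shell
#!/bin/bash
#SBATCH -p CLUSTER        # required: Odyssey has no default partition
#SBATCH -J example-job    # illustrative job name
#SBATCH -t 02:00:00       # must fit within the partition's 168-hour limit
#SBATCH -n 16             # illustrative: one full node's worth of cores

srun ./myprogram          # 'myprogram' stands in for your own executable
```

Submit it with `sbatch myscript.sh`. Alternatively, export SLURM_PARTITION=CLUSTER in your shell and omit the -p line.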


Parallel work
Odyssey is a cluster system, so message-passing libraries are used to make code run in parallel. The recommended MPI library is OpenMPI. Shared-memory parallelization (OpenMP, provided by the auto-parallelizing options on some of the compilers) can also be used, but only up to the sixteen cores of a single node.

To change your parallel environment you need to load the appropriate module. At the moment we have Intel MPI and OpenMPI installed; we suggest you stick with OpenMPI unless you have a particular reason to use Intel MPI. To switch modules, edit your .bashrc file and change the module add line.
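As a sketch, the relevant line of your .bashrc might look like this. The module names here are illustrative, since they vary between installations; run `module avail` to see exactly what is installed on Odyssey:

```shell
# In ~/.bashrc: load exactly one MPI module.
module add openmpi      # illustrative name; check 'module avail' for the real one
# module add impi       # uncomment instead of the line above to use Intel MPI
```

Loading two MPI modules at once is a common source of "cannot find library" errors at run time, so swap the comment rather than adding a second line.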

Compile your code with the MPI compilers (see below for their names).

To run MPI code you normally use a launcher command such as mpirun to start your program. On Odyssey a queueing system assigns work to compute nodes, and the MPI library has to interact with it, which makes things more complicated. The launcher doesn't need arguments giving the number of processors or nodes; it works out the correct number of CPUs, and which nodes to use, from the queueing system. You must use the launcher that matches your MPI library, and your environment must be configured for that library when you submit a job, as otherwise the job won't be able to find its library files. Here's which launcher command goes with which library:

MPI                              Compiler command names   Launcher command name
Intel MPI with GNU compilers     mpicc / mpif90           mpirun
Intel MPI with Intel compilers   mpiicc / mpiifort        mpirun
OpenMPI with Intel compilers     mpicc / mpif90           mpirun
OpenMPI with GNU compilers       mpicc / mpif90           mpirun
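Putting the pieces together, a compile-and-submit session might look like the following sketch (hello.c and the task count are illustrative; the point is that the launcher takes its process count from the queueing system, not from the command line):

```shell
# Compile with the wrapper compiler for your chosen MPI library.
mpicc -O2 hello.c -o hello

# Request 32 MPI tasks on the CLUSTER partition; mpirun reads the
# allocation from SLURM, so it needs no -np argument of its own.
sbatch -p CLUSTER -n 32 --wrap "mpirun ./hello"
```

The same pattern works inside a job script: put the mpirun line after the #SBATCH directives instead of using --wrap.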

System status 

System monitoring page
