Queues

There are a few different 'queues' on the machine. 'Queues' are really just shortcuts for different job configurations: all jobs go into the same pool for scheduling, where their priority is determined by their owner's recent CPU use. The queues whose names start with 'l' (l1, l16, l32 and l64) have a maximum run time of 168 hours and give you 1, 16, 32 and 64 processors respectively. There are also two four-hour queues, 's1' and 's16', for testing.

Odyssey's queueing system splits the nodes into two sets, one for the Althorpe group and one for the Grey group. It knows which group each user is in and will send jobs to the correct set of nodes automatically. If a job won't run despite there being free nodes, it is worth checking whether those free nodes belong to your group; showres -n will tell you which nodes are assigned to whom. The Grey group have set aside one of their nodes for testing only, so that node will only run jobs from the 's1' or 's16' queues that belong to the Grey group.

The nodes are half 2.7 GHz Sandy Bridge and half the newer 2.6 GHz Ivy Bridge CPUs. Each group has eight nodes of each type.

If you want a number of processors other than those offered by the default queues, that is possible (up to 128 processors, because that is how many each group has assigned to it). Some examples of how to request different numbers:

qsub -q l16 # 16 cores on 1 node, the default for l16, which is probably what you want
qsub -l nodes=1:ppn=10 # 10 cores on 1 node
qsub -q l64 # 64 cores as four nodes with 16 cores on each
qsub -l nodes=8:ppn=16 # 128 cores as eight nodes with 16 cores on each
qsub -q l32 -l nodes=2:ivy:ppn=16 # 32 cores on Ivy Bridge nodes
qsub -q l32 -l nodes=2:sandy:ppn=16 # 32 cores on Sandy Bridge nodes
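Putting this together, a complete submission might look like the following minimal sketch (the script name, job name and program name are placeholders; mpirun is covered under 'Parallel work' below):

#!/bin/bash
#PBS -q l16           # 16 cores on 1 node
#PBS -N myjob         # name shown in the queue listing
cd $PBS_O_WORKDIR     # start in the directory the job was submitted from
mpirun ./myprog       # core and node counts come from the queueing system

Save it as, say, job.sh and submit it with 'qsub job.sh'.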

Parallel work

Odyssey is a cluster system, so message passing libraries are used to make code run in parallel; the recommended MPI library is Intel MPI. Shared memory parallelization (OpenMP, provided by the autoparallelizing options on some of the compilers) can also be used, but only within a single node, i.e. for up to sixteen cores.
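For a shared memory job, a minimal sketch of the relevant job-script lines, assuming a hypothetical program built with OpenMP, would be:

export OMP_NUM_THREADS=16    # one thread per core on a single 16-core node
./myprog                     # OpenMP programs need no MPI launcher

You would pair this with a single-node request such as qsub -l nodes=1:ppn=16.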

To change your parallel environment you need to load the appropriate module. At the moment we have Intel MPI and OpenMPI installed; we suggest you stick with Intel MPI unless you have a particular reason for using OpenMPI. To switch modules, edit your .bashrc file and change the module add line.
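For example, the relevant line in your .bashrc might look like the sketch below. The module names here are illustrative; run 'module avail' to see the exact names on the machine.

module add impi        # Intel MPI (recommended)
# module add openmpi   # swap to this line to use OpenMPI instead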

Compile your code with the MPI compilers (see below for their names).
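For instance, with Intel MPI and the Intel compilers the compile lines would look like this (source and program names are placeholders):

mpiicc -O2 -o myprog myprog.c       # C
mpiifort -O2 -o myprog myprog.f90   # Fortran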

To run MPI code you normally use a launcher command such as mpirun to start your program. On Odyssey the queueing system assigns work to compute nodes, and the MPI library has to interact with it, which makes things a little more complicated. The launcher doesn't need any arguments giving numbers of processors or nodes: it works out the correct number of CPUs, and which nodes to use, from the queueing system. You must use the launcher that matches your MPI library, and your environment must be configured for that library when you submit the job, otherwise the job won't be able to find its library files. Here's a list of which launcher command goes with which library:

MPI                              Compiler Command Name   Launcher Command Name
Intel MPI with GNU compilers     mpicc / mpif90          mpirun
Intel MPI with Intel compilers   mpiicc / mpiifort       mpirun
OpenMPI with Intel compilers     mpicc / mpif90          mpirun
OpenMPI with GNU compilers       mpicc / mpif90          mpirun
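As a quick sanity check that your environment is set up for the library you intend, you can ask the shell which launcher it will find:

which mpirun    # should report a path inside the MPI module you loaded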
