Queues

These are the currently available queues. All the queues use the X5650 (old) nodes by default except for the queue named 'new'. The 'xl12' queue is restricted to only six jobs at a time. 

Queue            Memory CPU Time Walltime Node  Run Que Lm  State
---------------- ------ -------- -------- ----  --- --- --  -----
l12                --      --    48:00:00   --    0   0 --   E R
s96                --      --    24:00:00   --    1   1 --   E R
l48                --      --    48:00:00   --    1   0 --   E R
l24                --      --    48:00:00   --    4   2 --   E R
xl12               --      --   168:00:00   --    0   0 --   E R
new                --      --    48:00:00   --    2   1 --   E R
s192               --      --    24:00:00   --    0   0 --   E R
s240               --      --    12:00:00   --    0   0 --   E R
test               --      --    04:00:00   --    0   0 --   E R
                                               ----- -----
                                                   8     4

Here are some examples to get started.

qsub -q l48 myscript # 48 cores
qsub -q s192 myscript # 192 cores
qsub -q l12 -l nodes=1:old:ppn=10 myscript # 10 cores on 1 X5650 node; you have to use a queue which allows at least as many cores as you request
qsub -q new myscript # 16 cores on one E5-2650 node
qsub -q new -l nodes=2:new:ppn=16 myscript # 32 cores on two E5-2650 nodes
qsub -q test -l nodes=1:ppn=12 myscript # 12 cores on one node in the test queue (faster turnaround but only 4 hours walltime)
qsub -q test -l nodes=1:ppn=4 myscript # 4 cores on one node in the test queue
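Each of these submits a script called myscript. For reference, here is a minimal sketch of what such a script might contain, assuming a standard PBS/Torque-style setup (the resource requests and program name are illustrative, not site defaults):

#!/bin/bash
#PBS -q l48                 # queue to run in (can also be given on the qsub command line)
#PBS -l walltime=24:00:00   # ask for less than the queue limit if your job allows it
#PBS -N myjob               # job name shown in the queue listing

cd $PBS_O_WORKDIR           # start in the directory the job was submitted from
./myprogram                 # run your program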

One node (currently node001, an X5650 node) is permanently assigned to the test queue to provide quick access to CPUs for short jobs, and won't run any jobs from other queues. However, if other nodes are free and there are more test jobs than will fit onto node001 at once, the scheduler may run test jobs on some of the other nodes.

Parallel work

Cerebro is a cluster system, so message passing libraries are used to make code run in parallel across nodes. Shared-memory parallelization (OpenMP, provided by the autoparallelizing options on some of the compilers) can also be used within a single node, for up to sixteen cores on an E5-2650 node or twelve cores on an X5650 node, but not beyond that.
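For example, a threaded job confined to a single E5-2650 node could be requested and run like this (a sketch; OMP_NUM_THREADS is the standard OpenMP thread-count variable, and the program name is a placeholder):

qsub -q new -l nodes=1:new:ppn=16 myscript   # one whole E5-2650 node

# inside myscript, use all sixteen cores as OpenMP threads:
export OMP_NUM_THREADS=16
./my_openmp_program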

The main MPI libraries are Intel MPI and OpenMPI. If you're using Fortran you need to match your MPI library to your compiler, so there are separate builds of the libraries for different compilers.

To change your parallel environment you need to load the appropriate module. By default Intel MPI with the Intel compilers is loaded. To switch permanently, edit your .bashrc file and change the 'module add' line. If you want to use different libraries in different jobs, use the module commands inside your job script to set up the environment before launching your program.
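For example, to switch a single job over to OpenMPI you might put something like this near the top of the job script (a sketch; the module names are illustrative, so check 'module avail' for what is actually installed on cerebro):

module list                  # see what is currently loaded
module rm impi               # unload the default Intel MPI module (name illustrative)
module add openmpi           # load an OpenMPI module instead (name illustrative)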

Compile your code with the MPI compilers. These are called mpicc, mpicxx, mpif77 and mpif90.
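For example (the source file names and optimization flag are just placeholders):

mpicc  -O2 -o myprog myprog.c     # C
mpicxx -O2 -o myprog myprog.cpp   # C++
mpif77 -O2 -o myprog myprog.f     # Fortran 77
mpif90 -O2 -o myprog myprog.f90   # Fortran 90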

To run MPI code you normally use a command such as mpirun to launch your program. On cerebro there is a queueing system which assigns work to compute nodes, and the MPI library has to interact with it, which makes things slightly more complicated. You start your job with a special MPI launcher program. The launcher doesn't need to be told how many processes to start or which nodes to use; it works that out from the queueing system. You must use the launcher that matches your MPI library, and you must have your environment configured for that library when you submit the job, otherwise the job won't be able to find its library files. Here's which launcher command goes with which library:

MPI         Command name
---------   ------------
Intel MPI   mpiexec
OpenMPI     mpirun
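So inside a job script (like the sketch above) the launch step is a single line, and the launcher picks up the process count and node list from the queueing system (the program name is a placeholder):

mpiexec ./myprog   # with the default Intel MPI environment
mpirun ./myprog    # use this instead if you have OpenMPI loaded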
