Job Classes

Most people call job classes 'queues', but in reality everything goes into one big queue.

There are short classes (24 hours), long classes (168 hours), huge classes (roughly a month) and the test class (60 minutes). Classes are named s1, s16, s32, l1, l16, l32, h1, h16, h32: the letter gives the walltime class and the number the default core count. The test class is called 'test'. It has a high priority and one compute core is permanently reserved for it.

The number of processors in each class is just a default, and you can override it. Classes exist only as convenient shorthand for requesting popular combinations of walltime and processor count. If you prefer to specify everything yourself, feel free, although a maximum walltime limit is enforced on each class. Examples below:

qsub -q l16 # 16 cores on 1 node for one week, the default for l16, which is probably what you want
qsub -q l16 -l nodes=1:ppn=10 # 10 cores on 1 node for one week
qsub -q l32 # 32 cores, the default for l32, 16 on each of 2 nodes for one week
qsub -q l16 -l nodes=4 # 4 cores for one week, any way the scheduler pleases to assign them
qsub -q l16 -l walltime=48:00:00 # 16 cores on 1 node for 48 hours. Shorter walltimes get higher priority.
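
The test class is handy for quick checks; a minimal sketch (the resource request shown is illustrative):

qsub -q test -l nodes=1:ppn=1 # 1 core for up to 60 minutes, at high priority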

Parallel work

Dexter is a cluster system, so message passing libraries are used to make code run in parallel. Shared-memory parallelization (OpenMP, provided by the autoparallelizing options on some of the compilers) can also be used for up to the sixteen cores of a single node, but no further.
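
As a sketch, an OpenMP job script confined to one node might look like this (the script and program names are hypothetical):

#!/bin/bash
# Submit with: qsub -q l16 -l nodes=1:ppn=16 openmp_job.sh
export OMP_NUM_THREADS=16 # match the number of cores requested with ppn
./my_openmp_program # hypothetical binary built with OpenMP enabled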

The installed MPI library is OpenMPI. If you're using Fortran you need to match your MPI library to your compiler, so there are separate OpenMPI builds for different compilers.

To change your parallel environment you need to load the appropriate module. By default, OpenMPI built with the Intel compilers is loaded. To switch permanently, edit your .bashrc file and change the module add line. If you want to use different libraries in different jobs, use the module commands inside your job script to set up the environment before launching.
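
A minimal sketch of doing that inside a job script (the module name here is hypothetical; run module avail to see what is actually installed):

module purge # clear the default environment
module add openmpi-gcc # hypothetical name for a GCC-compiled OpenMPI build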

Compile your code with the MPI compiler wrappers. These are called mpicc, mpicxx, mpif77 and mpif90.
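
For example (source file names are illustrative):

mpicc -O2 -o my_program my_program.c # C
mpicxx -O2 -o my_program my_program.cpp # C++
mpif90 -O2 -o my_program my_program.f90 # Fortran 90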

To run MPI code you normally use a command such as mpirun to launch your program. On dexter there is a queueing system which assigns work to compute nodes, and the MPI library has to interact with it, which makes things more complicated. You use a special MPI launcher program to start your job. The launcher needs no processor-count or host arguments: it works out the correct number of CPUs and which nodes to use from the queueing system. You must use the launcher appropriate to your MPI library, and you must have your environment configured for that library when you submit the job, as otherwise the job won't be able to find its library files. Here's which launcher command goes with which library:

MPI        Command name
OpenMPI    mpirun
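
Putting it together, a minimal MPI job script might look like this (the script and program names are hypothetical):

#!/bin/bash
# Submit with: qsub -q l16 mpi_job.sh
cd $PBS_O_WORKDIR # Torque sets this to the directory the job was submitted from
mpirun ./my_mpi_program # CPU count and node list are taken from the queueing system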

 

Requesting particular CPU types

Dexter has two sorts of compute node: 52 with 32GB RAM and Westmere CPUs, and 8 with 64GB RAM and Haswell CPUs. Both types have 16 cores per node. By default you will get whichever node type is available first, but if you want a job to run on a particular type of node, here are some examples:

qsub -q l16 -l nodes=1:westmere:ppn=16 # 16 cores on 1 Westmere node for one week
qsub -q l16 -l nodes=1:haswell:ppn=16 # 16 cores on 1 Haswell node for one week
qsub -q s32 -l nodes=2:haswell:ppn=16 # 32 cores on 2 Haswell nodes for 24 hours

Scheduler policy

Jobs are prioritised by a combination of owner fairshare and job expansion factor. Fairshare is a measure of how much compute time the job owner has had recently. The more time they have had, the lower their priority is. Job expansion factor is roughly equivalent to the time the job has spent waiting on the queue, but it grows faster for shorter jobs than longer jobs. Fairshare is weighted much more highly than expansion factor.
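
As a rough sketch (the weights are illustrative, and the expansion-factor formula shown is the conventional one rather than anything guaranteed here):

priority = W_fairshare * fairshare_score + W_xfactor * xfactor, where W_fairshare >> W_xfactor
xfactor = (time spent queued + requested walltime) / requested walltime # grows faster for short jobs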

One core is permanently reserved for the 'test' class. 

There is a cap on the total outstanding compute time any one user can have in running jobs. Currently this is set to be quite high: a single person can occupy half the machine for seven days. 
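
To put a rough number on that: the machine has 60 nodes of 16 cores each, so half of it is 480 cores, and holding 480 cores for 7 days amounts to 480 × 168 = 80,640 core-hours of outstanding work.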

External users (defined by their entry in the Chemistry Admin Database) are restricted to 25% of the cores at any one time. Jobs which would cause more than this to be allocated are blocked. Who is and isn't an external user can be overridden by the PIs.
