Queues

Venus's queueing system is set up to favour parallel jobs: serial jobs that are already running may be evicted and requeued if higher-priority parallel jobs are submitted. There are short queues (24 hours), long queues (72 hours) and a test queue (30 minutes). The parallel queues are named after the number of cores they provide by default: s1, s6, s12, s24 and s32 for short jobs, and l1, l6 and so on for long jobs. The test queue is called 'test'. The core count in the queue name is only a default; you can request a different layout, as the examples below show.

qsub -q l12 # 12 cores on 1 node, the default for l12, which is probably what you want
qsub -q l12 -l nodes=1:ppn=10 # 10 cores on 1 node
qsub -q l24 # 24 cores, the default for l24, 12 on each of 2 nodes
qsub -q l12 -l nodes=4 # 4 cores, 1 on each of 4 nodes (not recommended)
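In practice you usually put the resource requests in a batch script rather than on the qsub command line. A minimal sketch (the script contents, walltime and program name are placeholders; the #PBS directives mirror the qsub options above):

```shell
#!/bin/bash
#PBS -q l12                 # long queue, 12 cores
#PBS -l nodes=1:ppn=12      # one node, twelve cores (the l12 default)
#PBS -l walltime=72:00:00   # up to the 72-hour long-queue limit

cd "$PBS_O_WORKDIR"         # jobs start in $HOME; move to the submission directory
./my_program                # placeholder for your executable
```

Submit it with 'qsub job.sh'; options given on the qsub command line override the matching #PBS lines in the script.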

Types of node

Venus has two types of node with different CPUs. The newer nodes are assigned to jobs first. To make sure your job runs only on the newer nodes, request the 'westmere' node property:

qsub -q s12 -l nodes=westmere:ppn=12 

The older nodes have deliberately been configured so that the queueing system thinks they have fewer cores than they actually do; this compensates for their smaller RAM. Whichever node you end up on, there should therefore be close to 2GB of RAM available per assigned core. If you specifically want the older nodes, replace 'westmere' with 'nehalem' and request no more than four cores per node.
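Putting that together, a request for the older nodes might look like the following (a sketch only: the queue choice and script name 'job.sh' are placeholders, and the four-core cap is the limit stated above):

```shell
# Ask for one 'nehalem' node with four cores, the most the older
# nodes will advertise to the queueing system.
qsub -q s6 -l nodes=nehalem:ppn=4 job.sh
```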

Because the two types of node have different CPUs, it's possible to generate highly optimized code for the newer nodes that can't run on the older ones. An 'illegal instruction' error is the usual symptom.

Parallel work

Venus is a cluster system, so message-passing libraries are used to make code run in parallel across nodes; several are installed. Shared-memory parallelization (OpenMP, or the auto-parallelizing options on some of the compilers) can also be used, but only up to the twelve cores of a single node, not beyond.
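As a sketch of the shared-memory route with the Intel compilers (-parallel and -openmp are the classic flag spellings, which vary between compiler versions, so check your compiler's manual; the source file name is a placeholder):

```shell
icc -parallel prog.c -o prog   # let the compiler auto-parallelize loops
icc -openmp prog.c -o prog     # or compile explicit OpenMP pragmas in the source
export OMP_NUM_THREADS=12      # at most twelve threads: one node's worth
./prog
```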

The available message passing libraries are MPI implementations: currently we have Intel MPI and OpenMPI.

To change your parallel environment, load the appropriate module. Intel MPI with the Intel compilers is loaded by default; to switch, edit your .bash_profile file and change the 'module add' line.
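The change is a one-line edit to ~/.bash_profile. The module names below are assumptions; run 'module avail' to see the names actually used on Venus:

```shell
# In ~/.bash_profile: swap the module that is loaded at login.
# module add impi          # the default: Intel MPI (name assumed)
module add openmpi         # switch to OpenMPI (name assumed)
```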

Compile your code with the MPI compiler wrappers, usually called mpicc, mpicxx, mpif77 and mpif90.
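For example, assuming a C source file and a Fortran 90 one (file names are placeholders; the wrappers call whichever underlying compiler your loaded module provides):

```shell
mpicc  -O2 solver.c   -o solver_c    # C code, via the MPI wrapper
mpif90 -O2 solver.f90 -o solver_f    # Fortran 90 likewise
```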

To run MPI code you would normally use a command such as mpirun to launch your program. On Venus the queueing system assigns work to compute nodes, and the MPI library has to interact with it, which complicates things: you start your job with a special MPI launcher program instead. The launcher takes no arguments; it works out the correct number of CPUs, and which nodes to use, from the queueing system. You must use the launcher that matches your MPI library, and your environment must be configured for that library when you submit the job, otherwise the job won't be able to find its library files. Here's which launcher command goes with which library:

MPI library    Command name
OpenMPI        mpirun
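So within a job script the launcher line is just the launcher followed by your program. A sketch with OpenMPI loaded (the program name is a placeholder):

```shell
#PBS -q l24
#PBS -l nodes=2:ppn=12
cd "$PBS_O_WORKDIR"
mpirun ./my_mpi_program    # no -np or hostfile: the launcher reads them from the queueing system
```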
