Queues

The only queue/partition is called MAIN. Its default walltime is 24 hours, but you can request up to a maximum of 672 hours.

See the SLURM documentation for how to request different numbers of cores and nodes.
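As an illustration, a minimal SLURM batch script for the MAIN partition might look like the sketch below. Only the MAIN partition name and the 672-hour limit come from this page; the job name, resource counts, and program name are placeholders.

```shell
#!/bin/bash
# Hypothetical SLURM job script for the MAIN partition.
#SBATCH --partition=MAIN        # the only queue/partition
#SBATCH --time=48:00:00         # walltime: default 24 h, maximum 672 h
#SBATCH --nodes=2               # example node count
#SBATCH --ntasks-per-node=12    # example tasks per node
#SBATCH --job-name=myjob        # placeholder job name

./myprogram                     # placeholder program
```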

Parallel work

Sinister is a cluster system, so message-passing libraries are used to make code run in parallel. Shared-memory parallelization (OpenMP, also provided by the auto-parallelizing options on some of the compilers) can be used for up to twelve cores, but no further.
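For shared-memory runs, the thread count is conventionally controlled with the standard OMP_NUM_THREADS environment variable in the job script; a minimal sketch, with the twelve-core cap taken from this page:

```shell
#!/bin/sh
# Cap OpenMP at the twelve-core shared-memory limit described above.
export OMP_NUM_THREADS=12
echo "Using $OMP_NUM_THREADS OpenMP threads"
```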

The installed MPI libraries are OpenMPI. If you're using Fortran you need to match your MPI library to your compiler, so a separate OpenMPI build is provided for each compiler.

To change your parallel environment, load the appropriate module. By default, OpenMPI built with the GNU compilers is loaded. To switch permanently, edit your .bashrc file and change the module add line. If you want to use different libraries in different jobs, use module commands inside your job script to set up the environment before launching.
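A job-script fragment for switching builds might look like this sketch. The module names are placeholders, not the cluster's real ones; check module avail on the system for the actual names.

```shell
# Hypothetical fragment: swap the default MPI module for a different
# compiler's build before launching (module names are placeholders).
module rm openmpi-gcc        # unload the default GNU-compiler OpenMPI
module add openmpi-intel     # load an Intel-compiler build instead
```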

Compile your code with the MPI compilers. These are called mpicc, mpicxx, mpif77 and mpif90.
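For example, compilation with the wrapper compilers might look like the following sketch; the source and output file names are placeholders.

```shell
# Sketch: compiling with the MPI wrapper compilers (file names are
# placeholders).
mpicc  -O2 -o hello_c   hello.c     # C
mpicxx -O2 -o hello_cxx hello.cpp   # C++
mpif90 -O2 -o hello_f90 hello.f90   # Fortran 90
```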

To run MPI code you would normally use a command such as mpirun to launch your program. On Sinister there is a queueing system which assigns work to compute nodes, and the MPI library has to interact with it, which makes things more complicated. You therefore use a special MPI launcher program to start your job. The launcher doesn't need any extra arguments; it works out the correct number of CPUs, and which nodes to use, from the queueing system. You must use the launcher appropriate to your MPI library, and your environment must be configured for that library when you submit the job, as otherwise the job won't be able to find its library files. Here's a list of which launcher command goes with which library:

MPI library   Launcher command
OpenMPI       mpirun
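Putting the pieces together, a complete MPI job script might look like the following sketch. The mpirun launcher and MAIN partition come from this page; the module name, resource counts, and program name are placeholders.

```shell
#!/bin/bash
#SBATCH --partition=MAIN
#SBATCH --time=24:00:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=12

# Configure the environment for the MPI library the code was built
# against (module name is a placeholder).
module add openmpi-gcc

# The launcher gets its CPU count and node list from the queueing
# system, so no -np or hostfile arguments are needed.
mpirun ./myprogram
```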

System status 

System monitoring page
