The default job length is 24 hours, but you can request up to 672 hours (four weeks).
There is only one queue, called CLUSTER.
There is an example job script in /info/slurm.
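A minimal job script might look like the following sketch. The job name, core count and script body are hypothetical; only the time limit and partition name come from the notes above.

```shell
#!/bin/bash
#SBATCH --job-name=example     # hypothetical job name
#SBATCH --partition=CLUSTER    # the only queue on the system
#SBATCH --time=48:00:00        # wall time; default is 24h, maximum 672h
#SBATCH --ntasks=32            # number of cores requested

# Outside Slurm, SLURM_NTASKS is unset, so fall back to 1 here.
NTASKS=${SLURM_NTASKS:-1}
echo "Job running on $NTASKS cores"
# The MPI launcher itself takes no arguments; it reads the core
# count and node list from Slurm (launcher names vary by library).
```

Submit with `sbatch scriptname`, and compare against the real example in /info/slurm.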
When people physically leave the Chemistry department, they are automatically moved into the 'external' group, which is not allowed to use more than 128 cores at once. This is done by checking the department database.
Deathstar is a cluster, so message-passing libraries are used to run code in parallel, and several different libraries are installed. Shared-memory parallelization (OpenMP, provided by the auto-parallelizing options on some of the compilers) can also be used for up to sixteen cores, but no further.
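For a shared-memory run, the thread count is set through the standard OpenMP environment variable; the binary name below is a hypothetical stand-in for your own auto-parallelized program.

```shell
# OpenMP is limited to sixteen cores on this system, so never
# set the thread count higher than that.
export OMP_NUM_THREADS=16
# ./my_openmp_program    # hypothetical auto-parallelized binary
echo "OpenMP jobs here use at most $OMP_NUM_THREADS threads"
```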
The available message-passing libraries are all MPI implementations; currently we have various versions of OpenMPI available.
To change your parallel environment, load the appropriate module. By default, Intel MPI with the Intel compilers is loaded. To switch, edit your .bash_profile file and change the 'module add' line.
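For example, the relevant edit in .bash_profile might look like the sketch below. The exact module names are assumptions; run `module avail` to see what is actually installed.

```shell
# In ~/.bash_profile, replace the default line:
#   module add intel-mpi    # default: Intel MPI with Intel compilers
# with, e.g., one of the installed OpenMPI versions:
#   module add openmpi      # hypothetical module name
```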
Compile your code with the MPI compiler wrappers; these are usually called mpicc, mpicxx, mpif77 and mpif90.
To run MPI code you would normally launch your program with a command such as mpirun. On deathstar, however, there is a queueing system that assigns work to the compute nodes, and the MPI library has to interact with it, which makes things more complicated. Instead, you start your job with a special MPI launcher program. The launcher takes no arguments: it works out the correct number of CPUs, and which nodes to use, from the queueing system. You must use the launcher appropriate to your MPI library, and your environment must be configured for that library when you submit the job; otherwise the job will not be able to find its library files. Here is a list of which launcher command goes with which library: