
Volkhan is a cluster of 34 dual-processor, quad-core Intel servers. The processors are Xeon E5420s and each machine has 8 GB of RAM. They all run Linux.

Volkhan can only be accessed by sshing into the head node. All work is done from there; there is no need to log into the compute nodes. Volkhan uses the local Admitto service for passwords, so you log in with the same password as on the workstations.

Homespace is 250 GB in size and is an LVM volume on a 2.8 TB RAID6 array (along with shared scratch). Quotas on the home filesystem are set to a 20 GB soft limit and a 25 GB hard limit. It is backed up nightly and two weeks of incrementals are kept. /home is shared to all nodes over the cluster's internal network, so your job sees the same home directory wherever it runs on the machine. Bear in mind that, from a compute job's point of view, access to this directory is extremely slow, especially if all the compute nodes are trying at once. Compute jobs should always write data to a local disk if possible, and copy it back to /home at the end of the run.

There is also a shared scratch filesystem, /sharedscratch, in which you will have a directory. It is not backed up but has no quota restriction. At the moment I have no plans to purge it regularly, so please clean up old files when you're done with them. It has the same speed issue as /home.

Each node also has a local /scratch filesystem on which you will have your own directory. These filesystems are about 108 GB in size with no quota restriction and are the most appropriate place for your jobs to write temporary files during a run. They are local to each node and so considerably faster than the NFS-mounted /home and /sharedscratch. Please clean up files on /scratch when you are done with them; see the queueing documentation for how to find out which node's /scratch to look at. All of the nodes' /scratch directories are accessible under /nodescratch on the head node, as /nodescratch/comp-0-1 and so on. The system uses an automounter, so the directories only appear when you reference them explicitly, for example by doing 'ls /nodescratch/comp-0-20'.
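Putting the filesystem advice together, a typical job's file handling might look like the sketch below. This is illustrative only: the per-job directory layout and the 'results' path are assumptions, not a prescribed convention.

```shell
#!/bin/bash
# Illustrative sketch: stage work on the fast node-local /scratch,
# then copy results back to the NFS-mounted /home once, at the end.
WORKDIR="/scratch/$USER/job_$$"    # hypothetical per-job directory on the local disk
mkdir -p "$WORKDIR"
cd "$WORKDIR" || exit 1

# ... run the computation here, writing temporary and output files
# ... into $WORKDIR rather than /home or /sharedscratch ...

cp -r "$WORKDIR/results" "$HOME/"  # one bulk copy back over NFS at the end
rm -rf "$WORKDIR"                  # clean up /scratch when done
```

Doing a single copy at the end avoids hammering the shared /home filesystem while the job runs.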

Finally, Volkhan also has some filesystems which originally came from an older cluster; they can be found under /sharedscratch/mek-quake-filestore. They are not backed up, and new users on the machine do not automatically get access to them.

The following software is installed: Intel Fortran, Intel C, Intel Math Kernel Library, Portland Group Fortran and C compilers, GNU compilers, OpenMPI. The head node also has other software, including popular editors, as it is intended for interactive work. If there is a package missing from the head node that you would like to use then please ask; it will probably be possible to install it provided it is a sensible size.

Volkhan is intended for parallel work and has several different parallel environments installed. Like most local clusters it has the modules environment to allow you to switch between different compilers and libraries. The default environment is set up with no modules loaded.

If you want to change this, use the 'module avail' command to see what the other options are, and add the 'module load' line for the versions you want to your ~/.bashrc file.

For example, to build and run a program using Intel C and OpenMPI you could do the following:
module load icc/64/2013/5 mpi/openmpi/intel13/1.8.1
This loads the Intel C compiler and the Intel build of the OpenMPI libraries into your environment, allowing you to use mpicc etc.
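With those modules loaded, building and test-running a small MPI program on the head node might look like this. The source file name and process count are illustrative assumptions; anything beyond a quick test should go through the queueing system.

```shell
# Illustrative session, assuming the icc and OpenMPI modules above are loaded.
mpicc -O2 hello.c -o hello    # mpicc wraps the Intel compiler with MPI paths
mpirun -np 8 ./hello          # launch 8 MPI processes (one per core on a node)
```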


All compute jobs should be run through the queueing system, as users cannot log directly into the nodes. The queueing system, SLURM, will run each job on a set of free compute nodes, copying the output back to a user-specified file at the end of the job. Read the generic instructions for how to use SLURM, and Volkhan's queueing setup for volkhan-specific details.
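As a starting point, a minimal SLURM batch script might look like the following sketch. The job name, task count, time limit, and program name are placeholders to adapt; the module versions are the ones shown above, and none of these values are site-confirmed defaults.

```shell
#!/bin/bash
#SBATCH --job-name=example         # name shown in the queue
#SBATCH --ntasks=8                 # number of MPI processes
#SBATCH --time=01:00:00            # wall-clock limit (adjust to your job)
#SBATCH --output=example-%j.out    # %j expands to the job ID

# Load the same modules that were used to build the program.
module load icc/64/2013/5 mpi/openmpi/intel13/1.8.1

srun ./hello                       # srun launches the MPI processes under SLURM
```

A script like this would be submitted with 'sbatch script.sh'; see the generic SLURM instructions for the full set of options.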
