 

This document is for anyone who has to manage cerebro. There's not much in here for end users.


Day to day stuff


Adding users


Cerebro picks up its user accounts from Active Directory. Create an AD account for the user if they don't have one already, then add them to the 'cerebro-users' group, which can be found in the Alavi container. A cron job runs once per hour and copies any new user accounts in 'cerebro-users' into cerebro's local OpenLDAP system. It doesn't yet handle Unix groups other than the personal group. Passwords are checked directly against the AD and not stored locally. The cerebro-users group includes the alavi group, so all members of that group get accounts too.
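
Once the hourly job has run you can check from the head node that the account has arrived. The exact lookup depends on the LDAP layout, but something like this (jbloggs is a made-up username) should do:


getent passwd jbloggs # did the account arrive?
id jbloggs # check the uid and personal group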


Alavi group computer reps can edit the group membership using the Delegated Management System.


Give the user a copy of, or point them at, the cerebro user notes. If they haven't used any of the local clusters before, also give them the Theory sector Maui/Torque introduction to get them going.


Torque/Maui


See the local Maui admin guide. The queue setup is very basic.


Node access control for jobs is completely open: any user can log into any node. The Torque prologue script creates the local /scratch directories on nodes as needed.
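
The real prologue lives in Torque's mom_priv directory on each node. As a sketch of the idea only (not the actual script), using the fact that Torque passes the prologue the job id, user name and group name as its first three arguments:


#!/bin/sh
# Sketch only, not the real script.
# $1 = job id, $2 = user name, $3 = group name (as passed by Torque).
user=$2
group=$3
mkdir -p "/scratch/$user"
chown "$user:$group" "/scratch/$user"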


Parallel tools


You can do a great deal from the cmgui cluster management GUI, or its command-line equivalent cmsh. cmgui has a bad habit of rewriting system files you don't want it to touch (e.g. the queueing system config), so it's best to use it only for things like powering nodes on and off.


Startup and shutdown


The nodes are on IPMI and can be powered up and down from the head node.



# cmsh -c "device; power on -c default" # all nodes in the default category
# cmsh -c "device; power off -n node002" # a single node
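
To check what state everything is in before and after power operations, device mode also has a status command:


# cmsh -c "device; status"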


Documentation


There are user docs on the web at cerebro's pages, and some material in the filesystem under /cm/shared/docs/cm.


Updating software



# yum update # even kernel updates are safe now
# yum --installroot /cm/images/default-image update # update the node install image too


Don't forget to reboot if you need to activate new kernels.
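
The compute nodes can be rebooted in bulk from cmsh; assuming reboot takes the same category and node flags as the power commands above:


# cmsh -c "device; reboot -c default"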


Adding software


Use yum to see if you can get the package as part of the OS.
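
For example, to check whether a hypothetical package foobar is available from the OS repositories:


# yum search foobar
# yum info foobar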


Generally best to put 3rd party applications under /usr/local, which is NFS-shared to the nodes, and any modulefiles under /usr/local/modulefiles.
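
As an illustration only (the names and paths are made up), a package foobar 1.0 might be laid out like this, with a minimal Tcl modulefile alongside it:


mkdir -p /usr/local/foobar/1.0
# ... build and install into /usr/local/foobar/1.0 ...
mkdir -p /usr/local/modulefiles/foobar
cat > /usr/local/modulefiles/foobar/1.0 <<'EOF'
#%Module1.0
## Hypothetical modulefile for foobar 1.0.
prepend-path PATH /usr/local/foobar/1.0/bin
prepend-path LD_LIBRARY_PATH /usr/local/foobar/1.0/lib
EOF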


If you need to add something to every compute node that can't go in /usr/local, add it using the parallel tools and then edit the node install image so that reinstalled nodes still get it. The image lives under /cm/images/default-image/.



yum --installroot /cm/images/default-image install foobar


After doing this, reboot a node and check that everything still works; nodes resync with the image on every boot. Alternatively you can get fancy with the Cluster Manager GUI (cmgui) by making a test image and putting one node on it.
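
You can also query the image directly to confirm the package went in (sticking with the hypothetical foobar):


chroot /cm/images/default-image rpm -q foobar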


Dealing with problems


Reinstalling the nodes


The nodes sync with the node image on boot, so if you suspect software problems just reboot any misbehaving ones.


Hardware problems


If you need to remove a node, tell PBS first:



pbsnodes -o nodeXXX.cm.cluster # yes, you need the full name; do the other quads in the chassis too
checknode nodeXXX # wait until there are no reservations on any of the four


Once a node is back:



pbsnodes -c nodeXXX.cm.cluster
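
pbsnodes -l lists nodes that are down or offline, which is a quick way to check nothing has been left out of service:


pbsnodes -l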


Backups


On plea.


Tech support


Email support@clustervision.com, quoting ID 110630.


Other useful information


Hardware


The disks on the head node are arranged as a RAID1 and a RAID6. Hobbit keeps an eye on them.
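
If the arrays are Linux software RAID (an assumption; with a hardware controller you would use the vendor's tool instead), their state can be checked by hand:


cat /proc/mdstat # shows each md array and whether any disks have failed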


Software


Chunks of /usr/local/shared are synced from the network every day to pick up new versions of compilers.


You can use the web interface to the IPMI cards on the nodes by starting firefox on the head node and pointing it at node0XX.ipmi.cluster (and logging in with the right password, of course).
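
If you would rather stay on the command line, ipmitool (assuming it is installed; the username here is made up) can talk to the same cards:


ipmitool -I lanplus -H node002.ipmi.cluster -U admin -a chassis status # -a prompts for the password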
