The NVIDIA HPC SDK C, C++, and Fortran compilers support GPU acceleration of HPC modeling and simulation applications with standard C++ and Fortran, OpenACC® directives, and CUDA®.

These are the successor compilers to the PGI compilers, also known as the Portland Group compiler suite.

Availability: 

All managed Linux workstations.

Compute clusters with sufficiently recent operating systems.

Instructions for users: 

Load the appropriate module; it is called nvhpc. You can see all the available versions with module av nvhpc and switch between them easily (see the modules documentation for details). The nvhpc module cannot be loaded at the same time as the pgi module, as they are the same piece of software: Nvidia bought the PGI compilers and rebranded them.
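
A typical session might look like this (the version number shown is illustrative; check module av nvhpc for what is actually installed):

 module av nvhpc
 module load nvhpc
 module switch nvhpc nvhpc/23.1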

The actual compilers are called nvfortran, nvc, and nvc++. The old PGI compiler names (pgfortran, pgcc, pgc++) still work too.
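
For instance, to build a small OpenACC program with each compiler (the source file names are just placeholders):

 nvc -acc -O2 -o saxpy saxpy.c
 nvc++ -acc -O2 -o saxpy saxpy.cpp
 nvfortran -acc -O2 -o saxpy saxpy.f90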

A slightly different version of the module, nvhpc-cuda, is available for those who want to use CUDA acceleration. This is only useful if you have access to an Nvidia GPU to run the code on.
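
For example, a CUDA Fortran build might look like this (the file name is a placeholder; the -cuda flag enables CUDA Fortran, and .cuf files are treated as CUDA Fortran source automatically):

 module load nvhpc-cuda
 nvfortran -cuda -o prog prog.cuf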

The compilers have several built-in versions of Nvidia's CUDA library for running code on GPUs. If you compile on a machine with a GPU, the compiler automatically selects a CUDA version compatible with the detected GPU. However, if you compile on a machine without a GPU (for example a cluster where the compute nodes have GPUs but the head node does not), the latest available CUDA will be used. If that's not what you want, override it with the -gpu=cudaX.Y flag, or compile on a compute node.
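
For example, to force a particular CUDA version (10.2 here is just an example; substitute the version your driver supports):

 nvfortran -acc -gpu=cuda10.2 -o prog prog.f90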

Because the compilers have built-in versions of CUDA, the nvcc CUDA compiler is also available when they are loaded. However, the version of nvcc that comes as part of the Nvidia HPC SDK doesn't have an easy way to select which CUDA version to use; if you need nvcc, it is easier to control things by using the version from one of the 'cuda' modules instead.
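
For example (the module version is illustrative; check module av cuda for what is installed):

 module load cuda/11.2
 nvcc --version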

Documentation: 

The compilers come with man pages; you may need to load the module to make them available. Most of the documentation, however, is on the web at https://docs.nvidia.com/hpc-sdk/index.html
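
For example:

 module load nvhpc
 man nvfortran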

Admin notes: 

The compiler installation cannot be moved between systems (the installer detects glibc and other library versions and configures itself accordingly), so install it separately for each OS.

Installing with script

Put the downloaded .tar.gz file in a scratch directory on a 64-bit Linux workstation running the target OS. Get the bundle with three CUDA versions built in; the one with a single CUDA version doesn't always match our drivers. Make sure the NFS server is mounted read-write.

 /usr/local/shared/sbin/install_nvhpc filename.tar.gz 

This will install the package for that version of the image. Then manually add modules to the appropriate workstation image package. There are two modules to add: nvhpc and nvhpc-cuda.

The installer will print a message about creating a modulefile, but that modulefile is not appropriate for us because it sets LD_LIBRARY_PATH.
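
For reference, a minimal sketch of the kind of modulefile we add instead is below. The install paths are hypothetical and depend on the image layout; the point is that it sets PATH and MANPATH but deliberately omits LD_LIBRARY_PATH:

 #%Module1.0
 # Hypothetical paths; adjust to the actual install location
 prepend-path PATH /usr/local/shared/nvhpc/23.1/compilers/bin
 prepend-path MANPATH /usr/local/shared/nvhpc/23.1/compilers/man
 # No LD_LIBRARY_PATH entry on purpose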

Libraries

To make the compilers work properly with the new-style library modules, we have the installer add a siterc file in the bin directories.
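
As a sketch, a siterc file can set site-wide include and library search paths. The SITEINC and SITELIB variable names are from the PGI/NVHPC rcfile documentation, but the paths below are hypothetical:

 # siterc (placed in the compiler bin directory)
 set SITEINC=/usr/local/shared/include;
 set SITELIB=/usr/local/shared/lib;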

Compute clusters

The script above can also install a version for the compute clusters by passing extra flags. Run it as root, as the script will remount the NFS server read-write.

/usr/local/shared/sbin/install_nvhpc filename.tar.gz server yes

The 'server' argument means 'install the server version' and 'yes' means 'install the server modules'. This has been verified (RT230569) and must be run as root.

CUDA versions

As touched on above, the compilers will use a suitable CUDA version if a GPU is detected, but otherwise choose the newest available. This is a problem on the pat cluster, where the head node has no GPU and the compute-node GPUs are older (Kepler) cards that only work with CUDA up to 10.2. Ansible has been set up to generate a compiler config file that forces CUDA 10.2 on pat.
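
As a rough sketch, the generated config is a localrc-style file in the compiler bin directory; the DEFCUDAVERSION variable below is an assumption rather than something confirmed here, so verify it against the installed localrc before relying on it:

 # Force the default CUDA version on pat
 # (assumption: DEFCUDAVERSION is honoured by this SDK release)
 set DEFCUDAVERSION=10.2;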
