MIDOSS MOHID on graham

This section describes the steps to set up and run the MIDOSS version of the MOHID code on the ComputeCanada graham.computecanada.ca HPC cluster.

Modules Setup

When working on graham, the module load command must be used to load extra software components.

You can manually load the modules each time you log in, or you can add the lines to your $HOME/.bashrc file so that they are automatically loaded upon login.

The module load commands needed are:

module load StdEnv/2016.4
module load nco/4.6.6
module load netcdf-fortran/4.4.4
module load proj4-fortran/1.0
module load python/3.8.2
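If you choose the $HOME/.bashrc route, the lines can be appended once with a guard against duplicate entries. A minimal sketch (the grep guard is a convenience added here, not part of the required setup):

```shell
# Append the module load commands to ~/.bashrc unless they are already there.
BASHRC="$HOME/.bashrc"
if ! grep -q "module load StdEnv/2016.4" "$BASHRC" 2>/dev/null; then
  cat >> "$BASHRC" <<'EOF'
module load StdEnv/2016.4
module load nco/4.6.6
module load netcdf-fortran/4.4.4
module load proj4-fortran/1.0
module load python/3.8.2
EOF
fi
```

Re-running the snippet is a no-op, so it is safe to paste it more than once.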

Warning

MIDOSS-MOHID does not build successfully under the StdEnv/2020 environment that became the default on graham on 1-Apr-2021. Please ensure that you have done module load StdEnv/2016.4 before building MIDOSS-MOHID.

Warning

The nco/4.6.6 module is incompatible with the netcdf-fortran-mpi/4.4.4 module that is required to run NEMO. So, if you are running both MIDOSS-MOHID and NEMO, you will need to manually load the appropriate modules as necessary.

Create a Workspace and Clone the Tools, Code and Configurations Repositories

graham provides several different types of file storage. We use project space for our working environments because it is large, high performance, and backed up. Scratch space is even larger and also high performance, but not backed up, so we execute MOHID runs there and generally move the run results to project space afterwards. Files more than 60 days old are automatically purged from scratch space on graham.

graham automatically provides environment variables that are more convenient than remembering the full paths to your project and scratch spaces:

  • Your project space is at $PROJECT/$USER/

  • Your scratch space is at $SCRATCH/
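As an illustration of how those variables expand into the paths used throughout this section (the values below are hypothetical; graham sets PROJECT and SCRATCH for you at login, so do not set them yourself):

```shell
# Hypothetical values for illustration only; graham provides the real ones.
PROJECT=/project/def-allen
SCRATCH=/scratch/dlatorne
USER=dlatorne

echo "$PROJECT/$USER/MIDOSS"   # project-space working environment
echo "$SCRATCH/MIDOSS/runs"    # scratch-space directory in which runs execute
```

With those values, the two echo commands print /project/def-allen/dlatorne/MIDOSS and /scratch/dlatorne/MIDOSS/runs.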

Create MIDOSS/ directory trees in your project and scratch spaces:

$ mkdir -p $PROJECT/$USER/MIDOSS/results
$ mkdir -p $SCRATCH/MIDOSS/runs
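mkdir -p creates any missing intermediate directories and exits successfully if the tree already exists, so the commands above are safe to re-run. A quick demonstration in a throw-away directory (a stand-in for your project space):

```shell
# Demonstrate mkdir -p in a temporary directory (stand-in for $PROJECT/$USER).
demo=$(mktemp -d)
mkdir -p "$demo/MIDOSS/results"
mkdir -p "$demo/MIDOSS/results"   # re-running is harmless: no error, no change
ls "$demo/MIDOSS"                 # prints: results
```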

Note

If the above commands fail, it may be because the symbolic link that $PROJECT points to was not created when your graham account was set up. Try:

$ cd $HOME
$ ln -s $HOME/projects/def-allen project

Clone the following repositories:

$ cd $PROJECT/$USER/MIDOSS
$ git clone git@github.com:MIDOSS/Make-MIDOSS-Forcing.git
$ git clone git@github.com:MIDOSS/MIDOSS-MOHID-CODE.git
$ git clone git@github.com:MIDOSS/MIDOSS-MOHID-config.git
$ git clone git@github.com:MIDOSS/MIDOSS-MOHID-grid.git
$ git clone git@github.com:UBC-MOAD/moad_tools.git
$ git clone git@github.com:MIDOSS/MOHID-Cmd.git
$ git clone git@github.com:SalishSeaCast/NEMO-Cmd.git
$ git clone git@github.com:SalishSeaCast/grid.git SalishSeaCast-grid

Install Python Packages

Note

This method of installing the moad_tools, NEMO-Cmd, and MOHID-Cmd Python packages employs the “user scheme” for installation. It is appropriate and necessary on graham where we do not have our own Anaconda Python distribution installed. This method should not be used on EOAS work stations or other machines where you have Anaconda Python installed.

$ cd $PROJECT/$USER/MIDOSS
$ python3 -m pip install --user --editable Make-MIDOSS-Forcing
$ python3 -m pip install --user --editable moad_tools
$ python3 -m pip install --user --editable NEMO-Cmd
$ python3 -m pip install --user --editable MOHID-Cmd

You can confirm that the Make-MIDOSS-Forcing package and the make-hdf5 tool are correctly installed with the command:

$ make-hdf5 --help

from which you should see output like:

Usage: make-hdf5 [OPTIONS] YAML_FILENAME [%Y-%m-%d] [N_DAYS]

  Create HDF5 forcing files for a MIDOSS-MOHID run.

  YAML_FILENAME: File path/name of YAML file to control HDF5 forcing files
  creation.

  [%Y-%m-%d]: Date on which to start HDF5 forcing files creation.

  N_DAYS: Number of days plus 1 of HDF5 forcing to create in each file.
  Use 1 to create 2 days of forcing which is what is required for a 1 day
  MOHID run.

Options:
  --version  Show the version and exit.
  --help     Show this message and exit.

You can confirm that the moad_tools package and the hdf5-to-netcdf4 tool are correctly installed with the command:

$ hdf5-to-netcdf4 --help

from which you should see output like:

Usage: hdf5-to-netcdf4 [OPTIONS] HDF5_FILE NETCDF4_FILE

  Transform selected contents of a MOHID HDF5 results file HDF5_FILE into a
  netCDF4 file stored as NETCDF4_FILE.

Options:
  -v, --verbosity [debug|info|warning|error|critical]
                                  Choose how much information you want to see
                                  about the progress of the transformation;
                                  warning, error, and critical should be
                                  silent unless something bad goes wrong.
                                  [default: warning]
  --help                          Show this message and exit.

You can confirm that the NEMO-Cmd and MOHID-Cmd packages and the mohid command processor are correctly installed with the command:

$ mohid --help

from which you should see output like:

usage: mohid [--version] [-v | -q] [--log-file LOG_FILE] [-h] [--debug]

MIDOSS-MOHID Command Processor

optional arguments:
  --version            show program's version number and exit
  -v, --verbose        Increase verbosity of output. Can be repeated.
  -q, --quiet          Suppress output except warnings and errors.
  --log-file LOG_FILE  Specify a file to log output. Disabled by default.
  -h, --help           Show help message and exit.
  --debug              Show tracebacks on errors.

Commands:
  complete       print bash completion command (cliff)
  gather         Gather results files from a MIDOSS-MOHID run.
  help           print detailed help for another command (cliff)
  monte-carlo    Prepare for and execute a collection of Monte Carlo runs of the MIDOSS-MOHID model.
  prepare        Set up the MIDOSS-MOHID run described in DESC_FILE and print the path of the temporary run directory.
  run            Prepare, execute, and gather results from a MIDOSS-MOHID model run.

Compile MIDOSS-MOHID

Compile and link the Mohid_Base_1, Mohid_Base_2, and MohidWater parts of the MOHID Framework.

Use an interactive job on graham for compilation because it is substantially (≥15%) faster than compiling on a login node. Be sure to request at least 1024 MB of memory:

$ salloc --time=0:30:0 --cpus-per-task=1 --mem-per-cpu=1024m --account=def-allen
$ cd $PROJECT/$USER/MIDOSS/MIDOSS-MOHID-CODE/Solutions/linux
$ ./compile_mohid.sh -mb1 -mb2 -mw

The output looks something like:

#### Mohid Base 1 ####
 compile mohidbase1 OK


#### Mohid Base 2 ####
 compile mohidbase2 OK


#### Mohid Water ####
 compile MohidWater OK

==========================================================================
build started:    Tue Dec 18 13:10:09 PST 2018
build completed:  Tue Dec 18 13:16:07 PST 2018

--->                  Executables ready                               <---

total 0
lrwxrwxrwx 1 dlatorne def-allen 36 Dec 18 13:16 MohidWater.exe -> ../src/MohidWater/bin/MohidWater.exe

==========================================================================

You can delete all of the compiled objects, libraries, and executables with:

$ ./compile_mohid.sh --clean

so that the next build will be “clean”; i.e. it won’t include any products from previous builds.

Test MIDOSS-MOHID

The MIDOSS-MOHID-config/MarathassaConstTS/ directory contains a configuration that you can use to do a test run of your setup on graham. It is the constant temperature and salinity version of the 2014 Marathassa spill in English Bay. You should be able to run the test with:

$ cd $PROJECT/$USER/MIDOSS/MIDOSS-MOHID-config/MarathassaConstTS/
$ mohid run MarathassaConstTS.yaml $PROJECT/$USER/MIDOSS/results/MarathassaConstTS

The output looks something like:

mohid_cmd.run INFO: Created temporary run directory /scratch/dlatorne/MIDOSS/runs/MarathassaConstTS_2019-01-10T173855.512111-0800
mohid_cmd.run INFO: Wrote job run script to /scratch/dlatorne/MIDOSS/runs/MarathassaConstTS_2019-01-10T173855.512111-0800/MOHID.sh
mohid_cmd.run INFO: Submitted batch job 15523561

You can use the squeue command to monitor the status of your job:

$ squeue -u $USER
   JOBID     USER      ACCOUNT           NAME  ST START_TIME        TIME_LEFT NODES CPUS   GRES MIN_MEM NODELIST (REASON)
15656820 dlatorne def-allen_cp MarathassaCons  PD N/A                 1:30:00     1    1 (null)  20000M  (Priority)

An alias for squeue that provides more information and better formatting is:

alias sq='squeue -o "%.12i %.8u %.9a %.22j %.2t %.10r %.19S %.10M %.10L %.6D %.5C %N"'

$ sq -u $USER
   JOBID     USER   ACCOUNT                   NAME ST     REASON          START_TIME       TIME  TIME_LEFT  NODES  CPUS NODELIST
15656820 dlatorne def-allen      MarathassaConstTS PD   Priority                 N/A       0:00    1:30:00      1     1

The oil particle trajectories calculated by MOHID will be in the Lagrangian_MarathassaConstTS.nc file, and the oil mass balance will be in the resOilOutput.sro file.

Note

If the mohid run command prints an error message, you can get a Python traceback containing more information about the error by re-running the command with the --debug flag.

Using hdf5-to-netcdf4

Note

The mohid run command generates a MOHID.sh shell script that includes using the hdf5-to-netcdf4 command-line tool to transform a MOHID Lagrangian.hdf5 output file into a netCDF4 file. So, you generally shouldn’t need to use hdf5-to-netcdf4 by itself, but this section describes how to do so if necessary.

The hdf5-to-netcdf4 command-line tool can be used to transform a MOHID Lagrangian.hdf5 output file into a netCDF4 file. Doing so is resource intensive in terms of memory and disk I/O, so it has to be done in an interactive slurm session on graham.

Start an interactive slurm session with a command like:

$ salloc --time=00:20:0 --cpus-per-task=1 --mem-per-cpu=800m --account=def-allen

Choose the --time value to be close to what you expect to need in order to avoid having to wait too long for the session to be allocated to you. For guidance, transformation of a Lagrangian.hdf5 file from a MOHID run for 7 days of model time on the SalishSeaCast domain takes anywhere from 6m30s to 17m30s, depending on how heavily loaded graham is.

Once the interactive session starts, do the transformation by:

  1. Copying the .hdf5 to fast, local SSD storage on the node

  2. Running hdf5-to-netcdf4 to store the .nc file on SSD storage

  3. Copying the .nc file back to project or scratch storage

$ cp Lagrangian.hdf5 $SLURM_TMPDIR/Lagrangian.hdf5
$ hdf5-to-netcdf4 $SLURM_TMPDIR/Lagrangian.hdf5 $SLURM_TMPDIR/Lagrangian.nc
$ cp $SLURM_TMPDIR/Lagrangian.nc Lagrangian.nc

You can prefix the .hdf5 and .nc file names with paths. You can get progress information from hdf5-to-netcdf4 by using the --verbosity info or --verbosity debug option. Please see hdf5-to-netcdf4 --help for details.