Getting Started

Downloading the code

First, make sure that git is installed on your machine.

Then download the ERF repository by typing:

git clone https://github.com/erf-model/ERF.git

Or, to automatically include the necessary submodules when downloading ERF, type:

git clone --recursive https://github.com/erf-model/ERF.git

Building

ERF depends on AMReX, and its radiation model (RTE-RRTMGP) relies on the YAKL C++ library for heterogeneous computing; all of these are available as submodules in the ERF repository. ERF can be built using either GNU Make or CMake; however, if the radiation model is enabled, only the CMake build system is supported.

Minimum Requirements

ERF requires a C++ compiler that supports the C++17 standard and a C compiler that supports the C99 standard. Building with GPU support may be done with CUDA, HIP, or SYCL. For CUDA, ERF requires version 11.0 or newer. For HIP and SYCL, only the latest compilers are supported. Prerequisites for building with GNU Make include Python (>= 2.7, including 3) and standard tools available in any Unix-like environment (e.g., Perl and sed). For building with CMake, the minimum required version is 3.18.
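To check whether your toolchain meets these requirements, you can query the versions directly; for example (the exact commands depend on which compilers you use):

g++ --version      # must support C++17
cmake --version    # must be >= 3.18 for CMake builds
nvcc --version     # only needed for CUDA builds; must be >= 11.0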

Note

While ERF is designed to work with SYCL, we do not make any guarantees that it will build and run on your Intel platform.

GNU Make

The GNU Make system is best for use on large computing facility machines and production runs. With the GNU Make implementation, the build system will inspect the machine and use known compiler optimizations specific to that machine where possible. These settings are kept up-to-date by the AMReX project.

Using the GNU Make build system involves first setting environment variables for the directories of the ERF dependencies (AMReX, RTE-RRTMGP, and YAKL); note that RTE-RRTMGP and YAKL are only required if running with radiation. All dependencies are provided as git submodules in ERF and can be populated by running git submodule init; git submodule update in the ERF repo, or at clone time by using git clone --recursive <erf_repo>. Although submodules of these projects are provided, they can instead be placed externally, as long as the <REPO_HOME> environment variable for each dependency is set correctly. An example of setting the <REPO_HOME> environment variables in the user’s .bashrc is shown below:

export ERF_HOME=${HOME}/ERF
export AMREX_HOME=${ERF_HOME}/Submodules/AMReX
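If ERF was cloned without the --recursive option, the submodules can be populated afterwards from within the repository (using the ERF_HOME path set above), for example:

cd ${ERF_HOME}
git submodule init
git submodule update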

The GNU Make system is set up to use the path to the AMReX submodule by default, so it is not necessary to set these paths explicitly unless you wish to do so. It is also possible to use an external copy of AMReX, downloaded by running

git clone https://github.com/amrex-codes/amrex.git

in which case the AMREX_HOME environment variable must point to the location where AMReX has been downloaded; this will take precedence over the default path to the submodule. If using a bash shell,

export AMREX_HOME=/path/to/external/amrex

or if using tcsh,

setenv AMREX_HOME /path/to/external/amrex
To build ERF with GNU Make:

  1. cd to the desired build directory, e.g. ERF/Exec/IsentropicVortex/

  2. Edit the GNUmakefile; options include

    Option name          Description                    Possible values   Default value
    COMP                 Compiler (gnu or intel)        gnu / intel       None
    USE_MPI              Whether to enable MPI          TRUE / FALSE      FALSE
    USE_OMP              Whether to enable OpenMP       TRUE / FALSE      FALSE
    USE_CUDA             Whether to enable CUDA         TRUE / FALSE      FALSE
    USE_HIP              Whether to enable HIP          TRUE / FALSE      FALSE
    USE_SYCL             Whether to enable SYCL         TRUE / FALSE      FALSE
    USE_NETCDF           Whether to enable NetCDF       TRUE / FALSE      FALSE
    USE_HDF5             Whether to enable HDF5         TRUE / FALSE      FALSE
    USE_MOISTURE         Whether to enable moisture     TRUE / FALSE      FALSE
    USE_WARM_NO_PRECIP   Whether to use warm moisture   TRUE / FALSE      FALSE
    USE_MULTIBLOCK       Whether to enable multiblock   TRUE / FALSE      FALSE
    DEBUG                Whether to use DEBUG mode      TRUE / FALSE      FALSE
    PROFILE              Include profiling info         TRUE / FALSE      FALSE
    TINY_PROFILE         Include tiny profiling info    TRUE / FALSE      FALSE
    COMM_PROFILE         Include comm profiling info    TRUE / FALSE      FALSE
    TRACE_PROFILE        Include trace profiling info   TRUE / FALSE      FALSE

    Note

    Do not set both USE_OMP and USE_CUDA to true.

    Information on using other compilers can be found in the AMReX documentation at https://amrex-codes.github.io/amrex/docs_html/BuildingAMReX.html .

  3. Make the executable by typing

    make
    

    The name of the resulting executable (generated by the GNU Make system) encodes several of the build characteristics, including the dimensionality of the problem, the compiler name, and whether MPI and/or OpenMP were linked into the executable. Thus, several different build configurations may coexist simultaneously in a problem folder. For example, the default build in ERF/Exec/IsentropicVortex will be named ERF3d.gnu.MPI.ex, indicating that this is a 3-d version of the code built with COMP=gnu and USE_MPI=TRUE.
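The GNUmakefile options listed above can also be overridden on the make command line rather than editing the file; for example, a parallel build with MPI enabled (the job count here is just an example) might look like:

make -j 8 COMP=gnu USE_MPI=TRUE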

Job info

The build information can be accessed by typing

./ERF*ex --describe

in the directory where the executable has been built.

CMake

CMake is often preferred by developers of ERF; in addition to building, CMake enables easy testing and verification of ERF through CTest, which is included with CMake.

Compiling with CMake involves an additional configure step before using the make command, and it is expected that the user has cloned the ERF repo with the --recursive option or run git submodule init; git submodule update in the ERF repo to populate its submodules.

ERF provides example scripts for CMake configuration in the /path/to/ERF/Build directory. Once the CMake configure step is done, the make command will build the executable.

An example CMake configure command to build ERF with MPI is listed below:

cmake -DCMAKE_BUILD_TYPE:STRING=Release \
      -DERF_ENABLE_MPI:BOOL=ON \
      -DCMAKE_CXX_COMPILER:STRING=mpicxx \
      -DCMAKE_C_COMPILER:STRING=mpicc \
      -DCMAKE_Fortran_COMPILER:STRING=mpifort \
      .. && make

Typically, a user will create a build directory in the project directory and run the configuration from that directory (cmake <options> ..) before building. Note that CMake can also generate files for the Ninja build system, which allows for faster builds of the executable(s).
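For instance, a configure and build using the Ninja generator (assuming ninja is installed on your system) might look like the following sketch:

cmake -G Ninja -DCMAKE_BUILD_TYPE:STRING=Release -DERF_ENABLE_MPI:BOOL=ON ..
ninja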

Analogous to the GNU Make options, the list of CMake options is as follows:

Option name                 Description                    Possible values   Default value
CMAKE_BUILD_TYPE            Whether to use DEBUG           Release / Debug   Release
ERF_ENABLE_MPI              Whether to enable MPI          TRUE / FALSE      FALSE
ERF_ENABLE_OPENMP           Whether to enable OpenMP       TRUE / FALSE      FALSE
ERF_ENABLE_CUDA             Whether to enable CUDA         TRUE / FALSE      FALSE
ERF_ENABLE_HIP              Whether to enable HIP          TRUE / FALSE      FALSE
ERF_ENABLE_SYCL             Whether to enable SYCL         TRUE / FALSE      FALSE
ERF_ENABLE_NETCDF           Whether to enable NetCDF       TRUE / FALSE      FALSE
ERF_ENABLE_HDF5             Whether to enable HDF5         TRUE / FALSE      FALSE
ERF_ENABLE_MOISTURE         Whether to enable moisture     TRUE / FALSE      FALSE
ERF_ENABLE_WARM_NO_PRECIP   Whether to use warm moisture   TRUE / FALSE      FALSE
ERF_ENABLE_MULTIBLOCK       Whether to enable multiblock   TRUE / FALSE      FALSE
ERF_ENABLE_RADIATION        Whether to enable radiation    TRUE / FALSE      FALSE
ERF_ENABLE_TESTS            Whether to enable tests        TRUE / FALSE      FALSE
ERF_ENABLE_FCOMPARE         Whether to enable fcompare     TRUE / FALSE      FALSE
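Since the radiation model is only supported by the CMake build system, a configure command enabling it might look like the following sketch (other options as in the MPI example above):

cmake -DCMAKE_BUILD_TYPE:STRING=Release \
      -DERF_ENABLE_MPI:BOOL=ON \
      -DERF_ENABLE_RADIATION:BOOL=ON \
      .. && make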

Perlmutter (NERSC)

Recall that the GNU Make system is best for use on large computing facility machines and production runs: the build system will inspect the machine and apply known compiler optimizations specific to it where possible, and these settings are kept up-to-date by the AMReX project.

For Perlmutter at NERSC, follow the general instructions above for building ERF with GNU Make, then initialize your environment by loading these modules:

module load PrgEnv-gnu
module load cudatoolkit

Then build ERF, for example (specifying your own path to the AMReX submodule in ERF/Submodules/AMReX):

make -j 4 COMP=gnu USE_MPI=TRUE USE_OMP=FALSE USE_CUDA=TRUE AMREX_HOME=/global/u2/d/dwillcox/dev-erf/ERF/Submodules/AMReX

Finally, you can prepare your SLURM job script, using the following as a guide:

#!/bin/bash

## specify your allocation (with the _g) and that you want GPU nodes
#SBATCH -A m4106_g
#SBATCH -C gpu

## the job will be named "ERF" in the queue and will save stdout to erf_[job ID].out
#SBATCH -J ERF
#SBATCH -o erf_%j.out

## set the max walltime
#SBATCH -t 10

## specify the number of nodes you want
#SBATCH -N 2

## we use the same number of MPI ranks per node as GPUs per node
#SBATCH --ntasks-per-node=4

## assign 1 MPI rank per GPU on each node
#SBATCH --gpus-per-task=1
#SBATCH --gpu-bind=map_gpu:0,1,2,3

# the -n argument is (--ntasks-per-node) * (-N) = (number of MPI ranks per node) * (number of nodes)
srun -n 8 ./ERF3d.gnu.MPI.CUDA.ex inputs_wrf_baseline max_step=100

To submit your job script, run sbatch [your job script]; you can check its status with squeue -u [your username].
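For example, if the script above were saved as submit_erf.sh (the file name is chosen here only for illustration):

sbatch submit_erf.sh     # submit the job
squeue -u $USER          # check its status in the queue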

Running

The input file specified on the command line is a free-format text file, one entry per row, that specifies input data processed by the AMReX ParmParse module.
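For illustration, a few entries in such a file might look like the following (these particular parameters are just examples drawn from the commands shown later in this section):

max_step = 100
erf.use_gravity = true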

This file needs to be specified along with the executable as an argv option, for example:

mpirun -np 64 ./ERF3d.xxx.yyy.ex inputs

Any entry that can be specified in the inputs file can also be given on the command line; values specified on the command line override values in the inputs file, e.g.:

mpirun -np 64 ./ERF3d.gnu.DEBUG.MPI.ex inputs amr.restart=chk0030 erf.use_gravity=true

See Inputs for details on run-time options that can be specified.


Testing and Verification

Testing and verification of ERF can be performed using CTest, which is included with the CMake build system. If ERF is built with CMake, the testing and verification suites can be enabled during the CMake configure step.

An example cmake configure command performed in the Build directory in ERF is shown below with options relevant to the testing suite:

cmake -DCMAKE_INSTALL_PREFIX:PATH=./install \
      -DCMAKE_BUILD_TYPE:STRING=Release \
      -DERF_ENABLE_MPI:BOOL=ON \
      -DCMAKE_CXX_COMPILER:STRING=mpicxx \
      -DCMAKE_C_COMPILER:STRING=mpicc \
      -DCMAKE_Fortran_COMPILER:STRING=mpifort \
      -DERF_ENABLE_FCOMPARE:BOOL=ON \
      -DERF_ENABLE_TESTS:BOOL=ON \
      -DERF_USE_CPP:BOOL=ON \
      ..

Performing cmake -LAH .. will print descriptions of every option for the CMake project. Descriptions of particular options relevant to the testing suite are listed below:

ERF_ENABLE_FCOMPARE – builds the fcompare utility from AMReX as well as the executable(s), to allow for testing differences between plot files

ERF_ENABLE_TESTS – enables the base level regression test suite that will check whether each test will run its executable to completion successfully

Building the Tests

Once the user has performed the CMake configure step, the make command will build every executable required for each test. In this step, it is highly beneficial to use the -j option so that make builds source files in parallel.
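For example (the job count is illustrative; pick one appropriate for your machine):

make -j 8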

Running the Tests

Once the test executables are built, CTest also creates working directories for each test within the Build directory, where plot files and other output will be written. This directory layout is analogous to the source location of the tests in Tests/test_files.

To run the test suite, run ctest in the Build directory. CTest will run the tests and report their exit status. Useful CTest options include:

  -VV : run in verbose mode, where the output of each test can be seen
  -R  : use a regex string to run specific sets of tests
  -j  : bin pack and run tests in parallel, based on how many processes each test is specified to use, fitting them into the number of cores available on the machine
  -L  : run only the subset of tests containing a particular label

Output for the last set of tests run is available in the Build directory in Tests/Temporary/LastTest.log.
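For example, typical invocations from the Build directory might look like the following (test names, labels, and job counts are illustrative):

ctest -VV                # verbose output from every test
ctest -R TaylorGreen     # run only tests whose names match this regex
ctest -j 8               # run tests in parallel across 8 cores
ctest -L <label>         # run only tests carrying a particular label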

Adding Tests

Developers are encouraged to add tests to ERF, and in this section we describe how the tests are organized in the CTest framework. The tests are located (relative to the ERF code base) in Tests. To add a test, first create a problem directory with a name of the form Exec/<prob_name>. This problem directory is meant for a production run, where the simulation is run until convergence or a solution is developed. This problem setup may involve more complex physics than the corresponding regression tests in Tests/test_files/<test_name>. Prepare toned-down versions of the input file(s) for each combination of physics for which a regression test is desired. For example, the TaylorGreenVortex problem with input file Exec/TaylorGreenVortex/inputs_ex solves an advection-diffusion problem. The corresponding regression tests are driven by the input files Tests/test_files/TaylorGreenAdvecting/TaylorGreenAdvecting.i and Tests/test_files/TaylorGreenAdvectingDiffusing/TaylorGreenAdvectingDiffusing.i.

Any file in the test directory will be copied during the CMake configure step to the test’s working directory. The input files meant for regression tests run for only a few time steps. The reference solution that the regression test compares against should be placed in Tests/ERFGoldFiles/<test_name>. Next, edit the Exec/CMakeLists.txt and Tests/CTestList.cmake files to add the problem and the corresponding tests to the list. Note that there are different categories of tests, and if your test falls outside of these categories, a new function to add the test will need to be created. After these steps, your test will be automatically added to the test suite database when the CMake configure step is performed with the testing suite enabled.