.. note::
**MFiX-Exa requires CMake 3.14 or higher.**
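   You can check which version is currently on your path with:

   .. code:: shell

      > cmake --version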
.. _sec:build:superbuild:
SUPERBUILD Instructions (recommended)
-------------------------------------
...
MFiX-Exa uses the same compiler flags used to build AMReX, unless
``CMAKE_Fortran_FLAGS``/``CMAKE_CXX_FLAGS`` are explicitly provided, or
the environment variables ``FFLAGS``/``CXXFLAGS`` are set.
Building MFiX-Exa for Cori (NERSC)
-----------------------------------
Standard build
~~~~~~~~~~~~~~~~~~~
For the Cori cluster at NERSC, you first need to load/unload the modules required to build MFiX-Exa:
.. code:: shell
> module unload altd
> module unload darshan
> module load cmake/3.14.0
The default options for Cori are the **Haswell** architecture and the **Intel** compiler. If you want to compile for the **Knights Landing (KNL)** architecture instead, swap the architecture module:
.. code:: shell
> module swap craype-haswell craype-mic-knl
To use the **GNU** compiler instead of Intel, swap the programming environment:
.. code:: shell
> module swap PrgEnv-intel PrgEnv-gnu
Now MFiX-Exa can be built by following the :ref:`sec:build:superbuild`.
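In condensed form, the superbuild follows the usual out-of-source CMake
pattern (build options are omitted here; see the linked section for the
full set):

.. code:: shell

   > mkdir build && cd build
   > cmake ..
   > make -j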
.. note::
   The module load/unload commands can be saved in ``~/.bash_profile.ext`` so they are applied automatically at login.
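   For example, for the standard (Haswell/Intel) build above, the file
   would contain:

   .. code:: shell

      # ~/.bash_profile.ext
      module unload altd
      module unload darshan
      module load cmake/3.14.0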
GPU build
~~~~~~~~~~~~~~~~~~~
To compile on the GPU nodes on Cori, you first need to purge your modules, as most of them will not work on the GPU nodes:
.. code:: shell
> module purge
Then, you need to load the following modules:
.. code:: shell
> module load modules esslurm gcc cuda openmpi/3.1.0-ucx cmake/3.14.0
Currently, you need to use OpenMPI; mvapich2 does not appear to work.
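To confirm that the OpenMPI wrappers are the ones picked up after loading
the modules, you can check:

.. code:: shell

   > which mpicxx
   > mpirun --version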
Then, use Slurm to request interactive access to a GPU node:
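The exact ``salloc`` invocation depends on your allocation; the account
name, time limit, and GPU count below are placeholders, so adjust them
for your project:

.. code:: shell

   > salloc -C gpu -N 1 -t 60 --exclusive --gres=gpu:8 -A <your_account>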
This reserves an entire GPU node for your job. Note that you cannot cross-compile for the GPU nodes; you have to log on to one and then build your software.
Finally, navigate to the base of the MFiX-Exa repository and compile in GPU mode:
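A minimal sketch of that step, assuming a CUDA-enabled CMake
configuration; the ``ENABLE_CUDA`` option name is an assumption and may
differ between MFiX-Exa versions (run ``cmake -LH`` in the build
directory to list the available options):

.. code:: shell

   > mkdir build && cd build
   > # ENABLE_CUDA is assumed here; check `cmake -LH` for the exact switch
   > cmake -DENABLE_CUDA=ON ..
   > make -j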