Notes
- These are not the advised, recommended, or even suggested build instructions. I am simply reporting what worked for me, on my laptop, running Ubuntu 22.04.4 LTS (Jammy Jellyfish).
- I think these are all of the packages I am using that are required to follow along:
sudo apt-get update
sudo apt-get install -y make cmake
sudo apt-get install -y make cmake git-all
sudo apt-get install -y gcc g++ gfortran
sudo apt-get install -y build-essential libopenmpi-dev openmpi-bin
sudo apt-get install -y m4
sudo apt-get install -y vim
- I’m doing this on a quad-core laptop, so all makes are with -j2. If you have more resources available, use them.
- I initially tried ascent develop with vtk-m v2.1.0 and conduit v0.8.8. I was able to compile successfully, but the code hit a runtime error due to a missing vtk-m worklet library.
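If you'd rather size the builds to your machine than hardcode -j2, something like the following works; JOBS is my own variable name here, not anything these notes require, and you would substitute it into the make commands below.

```shell
# Pick a parallel build width: use the core count if nproc is available,
# otherwise fall back to the -j2 used throughout these notes.
if command -v nproc >/dev/null 2>&1; then
  JOBS=$(nproc)
else
  JOBS=2   # conservative fallback
fi
echo "building with make -j${JOBS}"
```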
Without further ado, let’s open up a temporary directory and get started.
rm -rf tmp.build.deps
mkdir tmp.build.deps && cd tmp.build.deps
Constructive Solid Geometry (csg)
First I’ll install the dependencies for the csg-eb
library.
export CSG_INSTALL_DIR=$HOME/packages/csg-eb-deps
mkdir -p $CSG_INSTALL_DIR
gmp
wget https://ftp.gnu.org/gnu/gmp/gmp-6.2.1.tar.xz
tar -xf gmp-6.2.1.tar.xz
pushd gmp-6.2.1
./configure --prefix=$CSG_INSTALL_DIR
make -j2 install
popd
Catch2
git clone --depth 1 --branch v2.13.7 https://github.com/catchorg/Catch2
pushd Catch2/
cmake -S . -B build -DCMAKE_INSTALL_PREFIX=$CSG_INSTALL_DIR
cd build/
make -j2 install
popd
mpfr
wget --no-check-certificate https://ftp.gnu.org/gnu/mpfr/mpfr-4.1.0.tar.xz
tar -xf mpfr-4.1.0.tar.xz
pushd mpfr-4.1.0/
./configure --with-gmp=$CSG_INSTALL_DIR --prefix=$CSG_INSTALL_DIR
make -j2 install
popd
CGAL
git clone --depth 1 --branch v5.3 https://github.com/CGAL/cgal
pushd cgal/
cmake -S . -B build -DCMAKE_INSTALL_PREFIX=$CSG_INSTALL_DIR -DCMAKE_BUILD_TYPE=Release
cd build/
make -j2 install
popd
PEGTL
git clone --branch 3.2.2 https://github.com/taocpp/PEGTL
pushd PEGTL/
cmake -S . -B build -DCMAKE_INSTALL_PREFIX=$CSG_INSTALL_DIR
cd build/
make -j2 install
popd
boost
wget https://boostorg.jfrog.io/artifactory/main/release/1.83.0/source/boost_1_83_0.tar.gz
tar -zxvf boost_1_83_0.tar.gz
pushd boost_1_83_0/
./bootstrap.sh
./b2 install --prefix=$CSG_INSTALL_DIR
popd
We are done building dependencies. I always super-build MFIX-Exa, so the csg-eb lib gets built right along with it. But it’s probably not a bad idea to go ahead and build the standalone lib now too, just in case.
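Before moving on, a quick sanity pass doesn't hurt. This check is my own addition, not from any MFIX documentation; it just confirms the dependency prefix contains the directories the later builds will look for.

```shell
# Confirm the csg-eb dependency prefix was actually populated.
# (Assumes CSG_INSTALL_DIR was exported as above; the default mirrors it.)
CSG_INSTALL_DIR=${CSG_INSTALL_DIR:-$HOME/packages/csg-eb-deps}
for sub in include lib; do
  if [ -d "$CSG_INSTALL_DIR/$sub" ]; then
    echo "found:   $CSG_INSTALL_DIR/$sub"
  else
    echo "missing: $CSG_INSTALL_DIR/$sub"
  fi
done
```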
export CSG_LIB_DIR=$HOME/packages/csg-eb
mkdir -p $CSG_LIB_DIR
git clone https://mfix.netl.doe.gov/gitlab/exa/csg-eb.git
pushd csg-eb
mkdir build && cd build/
export Boost_INCLUDE_DIR="-I$CSG_INSTALL_DIR/include"
export CSG_DIR=$CSG_INSTALL_DIR
export CMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH:$CSG_DIR
cmake -DCMAKE_INSTALL_PREFIX=$CSG_LIB_DIR -DCMAKE_BUILD_TYPE=Release ../
make -j2 install
popd
linear solvers and multigrid methods
hypre
export HYPRE_INSTALL_DIR=$HOME/packages/hypre/v2.30.0
mkdir -p $HYPRE_INSTALL_DIR
git clone https://github.com/hypre-space/hypre.git
pushd hypre/src/
git checkout v2.30.0
./configure --prefix=$HYPRE_INSTALL_DIR --with-MPI
make -j2 install
popd
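If you want to double-check which hypre actually got installed, the configure step generates an installed header with version defines; if my memory of hypre's generated HYPRE_config.h is right, you can grep it directly.

```shell
# Report the installed hypre version from its generated config header.
# (Assumes the HYPRE_INSTALL_DIR prefix used above.)
HYPRE_INSTALL_DIR=${HYPRE_INSTALL_DIR:-$HOME/packages/hypre/v2.30.0}
if [ -f "$HYPRE_INSTALL_DIR/include/HYPRE_config.h" ]; then
  grep HYPRE_RELEASE_VERSION "$HYPRE_INSTALL_DIR/include/HYPRE_config.h"
else
  echo "HYPRE_config.h not found under $HYPRE_INSTALL_DIR"
fi
```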
in situ visualization
conduit
export ASCENT_INSTALL_DIR=$HOME/packages/ascent/v0.9.0
mkdir -p $ASCENT_INSTALL_DIR
git clone --recursive https://github.com/LLNL/conduit.git
pushd conduit/
git checkout v0.8.6
mkdir build && cd build
cmake -S ../src -DCMAKE_INSTALL_PREFIX=$ASCENT_INSTALL_DIR -DCMAKE_BUILD_TYPE=Release -DENABLE_OPENMP=OFF -DENABLE_MPI=ON -DENABLE_CUDA=OFF
make -j2 install
popd
vtk-m
git clone --branch master https://gitlab.kitware.com/vtk/vtk-m.git
pushd vtk-m/
git checkout v1.9.0
mkdir build && cd build/
cmake -S ../ -DCMAKE_INSTALL_PREFIX=$ASCENT_INSTALL_DIR -DVTKm_ENABLE_OPENMP=OFF -DVTKm_ENABLE_MPI=ON -DVTKm_ENABLE_CUDA=OFF -DVTKm_USE_64BIT_IDS=OFF -DVTKm_USE_DOUBLE_PRECISION=ON -DVTKm_USE_DEFAULT_TYPES_FOR_ASCENT=ON -DVTKm_NO_DEPRECATED_VIRTUAL=ON -DCMAKE_BUILD_TYPE=Release
make -j2 install
popd
ascent
git clone --recursive https://github.com/Alpine-DAV/ascent.git
pushd ascent
git checkout v0.9.0
mkdir build && cd build/
cmake -S ../src -DCMAKE_INSTALL_PREFIX=$ASCENT_INSTALL_DIR -DCMAKE_BUILD_TYPE=Release -DCONDUIT_DIR=$ASCENT_INSTALL_DIR -DVTKM_DIR=$ASCENT_INSTALL_DIR -DENABLE_VTKH=ON -DENABLE_FORTRAN=OFF -DENABLE_PYTHON=OFF -DENABLE_DOCS=OFF -DBUILD_SHARED_LIBS=ON -DENABLE_GTEST=OFF -DENABLE_TESTS=OFF
make -j2 install
popd
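Since ascent is built shared here, a missing or mismatched dependency tends to surface at load time rather than link time. A hedged post-install check, assuming the install prefix used above and the usual libascent*.so naming:

```shell
# Verify the installed ascent shared libraries have no unresolved deps.
ASCENT_INSTALL_DIR=${ASCENT_INSTALL_DIR:-$HOME/packages/ascent/v0.9.0}
for lib in "$ASCENT_INSTALL_DIR"/lib/libascent*.so; do
  if [ ! -e "$lib" ]; then
    echo "no ascent libs found under $ASCENT_INSTALL_DIR/lib"
    break
  fi
  # ldd prints "not found" for any library the loader cannot resolve
  ldd "$lib" | grep "not found" && echo "unresolved deps in $lib"
done
echo "ldd check complete"
```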
MFIX-Exa
git clone https://mfix.netl.doe.gov/gitlab/exa/mfix.git
pushd mfix
git submodule init
git submodule update
mkdir build.deps && cd build.deps
cp <path-to-your-build-script> ./buildit.sh
./buildit.sh
where my buildit.sh looks like
#!/bin/bash -l
## set env
export CSG_DIR=$HOME/packages/csg-eb-deps
export CMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH:$CSG_DIR
export Boost_INCLUDE_DIR="-I$CSG_DIR/include"
export HYPRE_DIR=$HOME/packages/hypre/v2.30.0
export HYPRE_ROOT=$HYPRE_DIR
export HYPRE_LIBRARIES=$HYPRE_DIR/lib
export HYPRE_INCLUDE_DIRS=$HYPRE_DIR/include
export ASCENT_DIR=$HOME/packages/ascent/v0.9.0
export CONDUIT_DIR=$ASCENT_DIR
export CMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH:$ASCENT_DIR/lib/cmake/ascent
export CMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH:$ASCENT_DIR/lib/cmake/conduit
## build mfix
cmake \
-DMFIX_MPI=yes \
-DMFIX_OMP=no \
-DMFIX_GPU_BACKEND=NONE \
-DMFIX_CSG=yes \
-DMFIX_HYPRE=yes \
-DAMReX_ASCENT=yes \
-DAMReX_CONDUIT=yes \
-DAMReX_TINY_PROFILE=yes \
-DCMAKE_BUILD_TYPE=Release \
../
make -j 2
Running a job
I’ll use bench05 as a simple test of the executable.
First let’s make sure it runs “as-is”
cd ../benchmarks/05-cyl-fluidbed/Size0001/
ln -s ../../../build.deps/mfix
mpirun -np 1 ./mfix inputs
You should see a bunch of screen output that ends with AMReX (id) finalized.
Unfortunately, this doesn’t test any of the dependencies that we just built against.
Use your text editor of choice to modify the inputs file to

- use multiple grids: The grid size is set to the maximum, 1024 in each direction, and the entire domain size is just amr.n_cell = 20 5 5. Let’s set amr.max_grid_size_x = 10 so that the problem is decomposed onto two grids.
- use the csg library: Fortunately, there is already a csg file of the cylindrical EB; we just need to enable it. Comment in the geometry_filename keyword and comment out, or delete, the amrex-native EB keywords:
mfix.geometry_filename = "geometry.csg"
#mfix.geometry = "cylinder"
#cylinder.internal_flow = true
#cylinder.radius = 0.00045
#cylinder.height = -1.0
#cylinder.direction = 0
#cylinder.center = 0.0020 0.0005 0.0005
- use the hypre linear solvers: This problem is simple, so amrex’s default bicg bottom solvers work fine, but let’s offload to hypre just to test the build:
mac_proj.bottom_solver = hypre
nodal_proj.bottom_solver = hypre
- use ascent visualization: Add these input keywords
mfix.ascent_int = 10
ascent.actions = "ascent_actions.yaml"
and create a simple ascent pipeline named ascent_actions.yaml. Here’s a simple one to visualize the particles:
- action: "add_scenes"
  scenes:
    s1:
      plots:
        p1:
          type: "pseudocolor"
          field: "velz"
          points:
            radius: 50.0e-6
          min_value: -0.001
          max_value: 0.001
          color_table:
            name: "Default"
      renders:
        r1:
          image_width: 360
          image_height: 720
          image_prefix: "vis_p_velz_%06d"
          camera:
            position: [0.002, 0.0005, 0.0087]
            look_at: [0.002, 0.0005, 0.0005]
            up: [1, 0, 0]
            zoom: 2.2
          bg_color: [1.0, 1.0, 1.0]
          fg_color: [0.0, 0.0, 0.0]
          render_bg: "true"
          shading: "enabled"
Now we’re ready to re-run. This time, use two processes and pipe the stdout to a screen file,
mpirun -np 2 ./mfix inputs > screen.txt
In the output you should now be able to verify that you

- ran in parallel by seeing MPI initialized with 2 MPI processes
- used the csg-eb library by seeing mfix.geometry_filename: geometry.csg
- offloaded to hypre: if you enabled TinyProfile in the build, you should see several hypre routines being hit in the profiling at the end of the screen output, such as HypreIJIface::run_hypre_setup() and HypreIJIface::run_hypre_solve()
- ran with ascent in situ visualization by seeing the printed png files, e.g.,
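Rather than eyeballing the output, the checks above can be scripted. This helper is hypothetical (the check function is my own, not part of MFIX); the grep patterns are the strings quoted in the list above.

```shell
# Grep screen.txt for each expected marker and report PASS/FAIL.
check() {
  if grep -q "$1" screen.txt 2>/dev/null; then
    echo "PASS: $1"
  else
    echo "FAIL: $1"
  fi
}
check "MPI initialized with 2 MPI processes"
check "mfix.geometry_filename: geometry.csg"
check "run_hypre_solve"
# ascent output: the image_prefix above should have produced png files
ls vis_p_velz_*.png 2>/dev/null || echo "no ascent images found yet"
```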