Rotating drum with MFIX 20.1.2 does not run in DMP

Hello,

We tried to run the rotating drum with the moving STL on our cluster and it does not run in DMP, although it runs perfectly in serial. The simulation does not crash, though; it just stalls after cutting the STL and never outputs anything else (no error of any kind is shown). Other simulations do run in DMP with MFIX 20.1.2.
Any suggestions?

Thanks for the help!

PS: moving the STL instead of gravity is really a great improvement.

E.

How many cores are you using? It runs fine on my system with 8 cores (2x2x2 decomposition). This is a very coarse mesh, so you cannot use many cores.
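For reference, a minimal sketch of how the decomposition and core count are typically set for a DMP run (assuming the standard nodesi/nodesj/nodesk keywords in the project file and a plain mpirun launch of the batch solver; adapt to your scheduler):

# in rotating_drum.mfx, the DMP domain decomposition (2x2x2 = 8 ranks):
#   nodesi = 2
#   nodesj = 2
#   nodesk = 2
# then launch the custom batch solver on 8 ranks:
mpirun -np 8 ./mfixsolver -f rotating_drum.mfx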

I had only a 1x1x2 decomposition and it did not run.


INFO check_data/check_geometry.f:192
Info: DES grid size:
DESGRIDSEARCH_IMAX = 20
DESGRIDSEARCH_JMAX = 20
DESGRIDSEARCH_KMAX = 20
<<<<<-----------------------------------------------------------------


(MFIX ASCII-art banner)

=============================================================================
MFIX WITH CARTESIAN GRID IMPLEMENTATION.
RE-INDEXING IS TURNED OFF.
INITIALIZING VELOCITY NODES…
ESTIMATING POTENTIAL SCALAR CUT CELLS…
INTERSECTING GEOMETRY WITH SCALAR CELLS…
INTERSECTING GEOMETRY WITH U-MOMENTUM CELLS…
INTERSECTING GEOMETRY WITH V-MOMENTUM CELLS…
INTERSECTING GEOMETRY WITH W-MOMENTUM CELLS…
SETTING CUT CELL TREATMENT FLAGS…
COMPUTING INTERPOLATION FACTORS IN U-MOMENTUM CELLS…
COMPUTING INTERPOLATION FACTORS IN V-MOMENTUM CELLS…
COMPUTING INTERPOLATION FACTORS IN W-MOMENTUM CELLS…
COMPUTING 1/DX, 1/DY, 1/DZ FOR U-MOMENTUM CELLS…
COMPUTING 1/DX, 1/DY, 1/DZ FOR V-MOMENTUM CELLS…
COMPUTING 1/DX, 1/DY, 1/DZ FOR W-MOMENTUM CELLS…
FINDING MASTER CELLS FOR U-MOMENTUM CELLS…
FINDING MASTER CELLS FOR V-MOMENTUM CELLS…
FINDING MASTER CELLS FOR W-MOMENTUM CELLS…

MESH STATISTICS:

NUMBER OF CELLS = 8000
NUMBER OF STANDARD CELLS = 4824 ( 60.30 % of Total)
NUMBER OF CUT CELLS = 2056 ( 25.70 % of Total)
NUMBER OF FLUID CELLS = 6880 ( 86.00 % of Total)
NUMBER OF BLOCKED CELLS = 1120 ( 14.00 % of Total)

and it stays there

It runs for me with a 1x1x2 decomposition. I am using GNU 6.5 and OpenMPI 3.1.3. Are you running through the GUI (SMS or standard workflow) or from source (batch solver)?

(mfix-20.1.2) [ebreard@talapas-ln1 rotating_drum]$ cat /proc/version
Linux version 3.10.0-957.27.2.el7.x86_64 (mockbuild@x86-040.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Tue Jul 9 16:53:14 UTC 2019

To compile in DMP I use the following commands:

(mfix-20.1.2) [ebreard@talapas-ln1 rotating_drum]$ conda activate mfix-20.1.2
(mfix-20.1.2) [ebreard@talapas-ln1 rotating_drum]$ module load intel/17
(mfix-20.1.2) [ebreard@talapas-ln1 rotating_drum]$ module load intel-mpi
(mfix-20.1.2) [ebreard@talapas-ln1 rotating_drum]$ build_mfixsolver --batch --dmp
Building custom solver for rotating_drum.mfx
Running cmake command:

cmake -DENABLE_MPI=1 -G Unix Makefiles -DCMAKE_INSTALL_PREFIX=/gpfs/projects/dufeklab/ebreard/DRUM1_ERIC/rotating_drum -DUDF_DIR=/gpfs/projects/dufeklab/ebreard/DRUM1_ERIC/rotating_drum -DVERSION=20.1.2 /projects/dufeklab/ebreard/anaconda3/envs/mfix-20.1.2/share/mfix/src

-- Setting build type to 'RelWithDebInfo' as none was specified.
-- MFIX build settings summary:
-- Build type = RelWithDebInfo
-- CMake version = 3.17.0
-- Fortran compiler =
-- Fortran flags =
-- ENABLE_MPI = 1
-- ENABLE_OpenMP = OFF
-- ENABLE_CTEST = OFF
-- ENABLE_COVERAGE = OFF
-- The Fortran compiler identification is GNU 4.8.5
-- Check for working Fortran compiler: /usr/bin/f95
-- Check for working Fortran compiler: /usr/bin/f95 - works
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Checking whether /usr/bin/f95 supports Fortran 90
-- Checking whether /usr/bin/f95 supports Fortran 90 - yes
-- Found MPI_Fortran: /gpfs/packages/intel/compilers_and_libraries_2017.8.262/linux/mpi/intel64/lib/libmpifort.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- Found Git: /usr/bin/git (found version "1.8.3.1")

It works with the GUI using mpif90 (it does not with mpiifort).
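Note that in the build log above, CMake identified the Fortran compiler as GNU 4.8.5 at /usr/bin/f95 (the old system compiler), not an Intel wrapper from the loaded modules. A quick way to check what the wrappers actually resolve to after the module loads (a sketch; exact behavior depends on your MPI installation):

module load intel/17
module load intel-mpi
which mpif90 mpiifort f95
mpif90 --version      # prints the underlying Fortran compiler the wrapper calls
mpiifort --version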

Just want to add that fluid_bed_tfm_2d ran with 80 cores. However, when trying 3D simulations the same problem mentioned above occurs. I tried hopper_dem_3d and I think it has a segmentation fault, since nothing happens past the NUMBER OF BLOCKED CELLS line and the simulation stops.
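If it is stalling rather than crashing, one way to see where it is stuck is to attach a debugger to one of the hung MPI ranks and grab a backtrace (a sketch; assumes the solver binary is named mfixsolver and gdb is available on the compute node; the RelWithDebInfo build shown above keeps debug symbols):

# on the node where the job is running, find the rank PIDs
pgrep -f mfixsolver
# attach to one rank, print its backtrace, then detach
gdb -p <PID> -batch -ex bt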

Using MFIX 18.1.5 I can compile in DMP with:
module load intel/17
module load intel-mpi
source activate mfix-18.1
build_mfixsolver --batch --dmp FC=mpiifort -DMPI_Fortran_COMPILER=mpiifort

but using the same commands with 20.1.2:

(mfix-20.1.2) [ebreard@talapas-ln1 DRUM1_ERIC]$ build_mfixsolver --batch --dmp FC=mpiifort -DMPI_Fortran_COMPILER=mpiifort
Only one of ['FC', '-DMPI_Fortran_COMPILER'] can be used.

To specify a compiler and build with MPI support:

build_mfixsolver FC=/path/to/fortran/compiler --dmp

To specify a compiler and build without MPI support:

build_mfixsolver FC=/path/to/fortran/compiler
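Reading that message, it seems 20.1.2 wants only one of the two options at a time, i.e. something like the following (untested sketches based on the help text above):

build_mfixsolver --batch --dmp FC=mpiifort
# or
build_mfixsolver --batch --dmp -DMPI_Fortran_COMPILER=mpiifort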

I tried with MFIX 19.3.1:
I can compile and run the hopper 3D case with:

conda activate mfix-19.3.1
module load intel/17
module load intel-mpi
env FC=mpiifort build_mfixsolver --batch --dmp

So I tried with 20.1.2:

conda activate mfix-20.1.2
module load intel/17
module load intel-mpi
env FC=mpiifort build_mfixsolver --batch --dmp

It compiles but crashes, which also occurs when using:

conda activate mfix-20.1.2
module load intel/17
module load intel-mpi
build_mfixsolver --batch --dmp

or

conda activate mfix-20.1.2
module load intel/17
module load intel-mpi
env FC=mpif90 build_mfixsolver --batch --dmp

@mark.meredith, is there something different about compiling 20.1.2 that we are missing?
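One way to compare what the 19.3.1 and 20.1.2 builds actually configured is to inspect the CMake cache each of them generated (a sketch; assumes build_mfixsolver left its default build/ directory in the project folder):

grep -i "fortran_compiler\|mpi_fortran" build/CMakeCache.txt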

My suggestion is that some parameter may be missing from the $OMP or parallel variable list. You might want to pay attention to this point.
(screenshot attached)