Memory Leak / RAM Usage Accumulation in Tutorial 3.5 (3D Fluidized Bed Tutorial Case)

Hi everyone,

I am new to MFiX and have been working with the 3D Fluidized Bed tutorial. I refined the mesh from the default cell counts of (20, 60, 20) to (30, 90, 30), and I am trying to solve the case in parallel across four cores.
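
For reference, a quick way to confirm the refined cell counts in the project file (the keyword names are from memory, so double-check them against the GUI):

  # Show the grid keywords in the project file (keyword names assumed)
  grep -iE 'imax|jmax|kmax' 3DFB.mfx
  # expected after refinement:
  #   imax = 30
  #   jmax = 90
  #   kmax = 30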

I am seeing memory usage on my system accumulate continually, from about 3 GB up to 32 GB, at which point I have to kill the process. If I stop the simulation and resume from the most recent timestep, memory usage returns to a low value but then gradually accumulates again.
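
For reference, this is roughly how I have been tracking the growth; it assumes the solver processes are named mfixsolver:

  # Log resident memory (RSS, in KB) of all mfixsolver ranks once per minute
  while true; do
      date
      ps -C mfixsolver -o pid,rss,etime,args
      sleep 60
  done >> mem_log.txt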

I have tried MFiX versions 19.3.1 and 18.1.5, running in both batch and interactive mode, and installing MFiX through Anaconda versus compiling from source, but none of these changes has fixed the memory leak.

I am building and running the solver using the commands below (or their equivalents for GUI and batch runs), followed by a small sanity check on the decomposition:

  • Solver build command: build_mfixsolver -j --dmp -DMPI_Fortran_COMPILER=mpifort
  • Run command: mpirun -np 4 ./mfixsolver -f DES_FB1.mfx NODESI=1 NODESJ=4 NODESK=1
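
As far as I understand, the NODESI x NODESJ x NODESK product has to match the MPI rank count; a quick sanity check with my values:

  # Decomposition must multiply to the number of MPI ranks passed via -np
  NODESI=1; NODESJ=4; NODESK=1; NP=4
  if [ $((NODESI * NODESJ * NODESK)) -ne "$NP" ]; then
      echo "Decomposition does not match -np $NP" >&2
  fi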

Attached is my .mfx file: 3DFB.mfx (12.6 KB)

Has anyone seen a similar issue, or does anyone know what the problem might be? I eventually want to simulate a more complicated system, and it will not be practical to keep stopping and restarting the simulation to work around this.


I have sometimes seen the same problem, but I don't know how to solve it.

Have you found any solution to the memory leak?

What compiler and MPI version are you using?
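
You can check with something like:

  gfortran --version        # Fortran compiler
  mpirun --version          # MPI launcher
  ompi_info | head -n 5     # extra detail if you are using Open MPI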

@mohsenclick’s thread is here:

Hi all,

I am facing the same issue. I have tried it both on an HPC and on a local machine, using either the compilation options suggested by @mohsenclick or the regular compilation (just --dmp -j).
HPC: gfortran 4.8.2 and OpenMPI 4.0.2; local machine: gfortran 9.2.1 and OpenMPI 3.1.3.
I tried this in mfix-20.3.0 and mfix-20.3.1.

It seems that the issue appears in 3D DMP simulations; 2D and SMP runs seem to work fine, though I haven't tried SMP on the HPC since that would be somewhat pointless.
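
For completeness, this is roughly how I built and ran the SMP case on the local machine (flags are from memory, so double-check them against the build_mfixsolver help; my_case.mfx is just a placeholder for the project file):

  # SMP (OpenMP) build and run, in contrast with the DMP commands above
  build_mfixsolver -j --smp
  env OMP_NUM_THREADS=4 ./mfixsolver -f my_case.mfx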

Thanks.

The problem for me arose from the compiler and OpenMPI combo on my system! I changed them and now it works fine! Please read this thread: Install MFiX on computer cluster

Thanks for the reference!

I’ve been trying to update GCC and OpenMPI. On the local machine I updated to OpenMPI 4.0.4 and it works fine now. However, on the HPC (CentOS 6.6) I’m having trouble updating the Fortran compiler. I’ve chosen gcc-9.3.1 since 9.2.1 worked on my local machine. I am able to build mfixsolver in serial, but when I try to run the program I get:

Fortran runtime error: Incorrect extent in VALUE argument to DATE_AND_TIME intrinsic: is -2, should be >=8
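
My guess is that the binary picks up an older libgfortran at run time than the one it was compiled with (the old system GCC is still around on the HPC); this is roughly what I have been checking:

  # Which Fortran runtime library does the solver actually load?
  ldd ./mfixsolver | grep -i -e gfortran -e quadmath
  # Is the new compiler's lib directory on the runtime search path?
  echo "$LD_LIBRARY_PATH"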

In addition, if I try to reconfigure OpenMPI I get:

checking if Fortran compiler works... no
**********************************************************************
* It appears that your Fortran compiler is unable to produce working
* executables.  A simple test application failed to properly
* execute.  Note that this is likely not a problem with Open MPI,
* but a problem with the local compiler installation.  More
* information (including exactly what command was given to the
* compiler and what error resulted when the command was executed) is
* available in the config.log file in the Open MPI build directory.
**********************************************************************
configure: error: Could not run a simple Fortran program.  Aborting.

I’ve reached a kind of dead end here, so any help is truly welcome.
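
In the meantime, this is the kind of minimal test I have been running by hand to mimic what configure does; the GCC install prefix below is just where my build happens to live:

  # Put the new GCC and its runtime libraries first on the search paths
  export PATH=/opt/gcc-9.3.1/bin:$PATH                        # assumed install prefix
  export LD_LIBRARY_PATH=/opt/gcc-9.3.1/lib64:$LD_LIBRARY_PATH
  # Compile and run the same kind of trivial program configure uses
  printf 'program hello\n  print *, "hello"\nend program hello\n' > hello.f90
  gfortran hello.f90 -o hello && ./hello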

I finally got it installed on the HPC. For the record, GNU 9.2.0 and OpenMPI 4.0.5 also produce the memory leak on CentOS 6.6.