I have run the attached file in serial and it works fine, but when I use DMP with domain decomposition, it diverges. This does not depend on the number of nodes I use; even running on 1 node with the mpirun command leads to divergence.
I use the following commands to build the solver and run:
```
build_mfixsolver -DCMAKE_BUILD_TYPE=Debug --batch --dmp -j -DMPI_Fortran_COMPILER=mpifort -DCMAKE_Fortarn_FLAGS="-o3 -g -fcheck=all"
mpirun -np 96 ./mfixsolver -f filename.mfx
```
I have run other cases with DMP without any problem using these same commands, so what is the problem here?
1st-kamrakova.mfx (13.3 KB)
Hi Mohsen. I know it’s been quite a while since you posted this case, but I just came across it again while scanning the forum.
First of all, you are not setting the compiler flags correctly: it’s “Fortran”, not “Fortarn”, and CMake isn’t smart enough to complain about the misspelled variable name. Also, the optimization flag needs an uppercase O: `-O3`, not `-o3`.
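For reference, the corrected build command would look like this (keeping the rest of the flags exactly as you posted them):

```
build_mfixsolver -DCMAKE_BUILD_TYPE=Debug --batch --dmp -j \
    -DMPI_Fortran_COMPILER=mpifort \
    -DCMAKE_Fortran_FLAGS="-O3 -g -fcheck=all"
```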
This, however, does not account for the reported divergence.
I ran this case in serial mode and it runs, but progresses very slowly: it took over 4 hours of real time to reach 1 second of simulation time.
Running it in DMP mode with a 2x2x1 decomposition, I did see a divergence: the timestep `dt` went below the lower limit of `1e-7`.
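(For anyone reproducing this: a 2x2x1 decomposition corresponds to the MFiX `nodesi`/`nodesj`/`nodesk` keywords, and the MPI rank count must equal their product. A minimal sketch, assuming these are set in the project file rather than on the command line:

```
# DMP domain decomposition: 2 x 2 x 1 = 4 MPI ranks
nodesi = 2   # partitions in the x-direction
nodesj = 2   # partitions in the y-direction
nodesk = 1   # partitions in the z-direction
```

which would then be run with `mpirun -np 4 ./mfixsolver -f 1st-kamrakova.mfx`.)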
Changing `dt_min` to `1e-8` made it possible to run the job in DMP mode, and it runs about 50% faster than serial mode (simulation time of ~0.6 s at 6000 s of real time, vs. ~0.4 s for serial).
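The change itself is a single keyword in the `.mfx` project file; a minimal sketch of the relevant block (the `dt` and `dt_max` values here are illustrative, not taken from this case):

```
# Time-step controls
dt     = 1.0e-3    # initial time step (illustrative)
dt_min = 1.0e-8    # was 1.0e-7, which the DMP run dropped below
dt_max = 1.0e-2    # maximum time step (illustrative)
```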
I’m not completely sure why `dt` reached a lower value in the DMP run, or whether this is a bug or expected behavior. If I find out any more about this, I’ll let you know.
– Charles