The monitored velocities on the inlet planes are not equal to the given values

Hi everyone

I am simulating a 3D lab-scale pulsating fluidized bed. The bottom of the bed is divided into five parts. One of them is the pulsating inlet and the others are inlets with a given inlet velocity. The pulsating velocity is defined according to a tutorial (Pulsating fluidized bed).

For the plane where the pulsating inlet velocity is defined, the monitored velocity should be equal to the value specified in an external txt file.

[image: specified inlet velocity from the external txt file]

However, the velocity shown on the monitor is less than the value given above (23.0954 m/s) at t = 0.2 s.

Strangely, the velocities on some of the other inlet planes also change, and the monitored values deviate from the specified value of 0.51 m/s; some of them even exceed 1 m/s.

The simulation runs on a Linux system (CentOS 7) with DMP, using solver version MFiX 21.1.4. The monitored velocities are also wrong without DMP.

I would like to know what causes the above problems and how to solve them.

The attached are my case files.
Case.zip (50.3 KB)

This is the result in ParaView. The plane is very close to the inlet boundary at the bottom (y = 1.0e-16). The center region marked by a red square is the pulsating inlet region. The velocity_y in this area should be near 23.094, and the velocity_y outside this square region should be close to 0.51. However, the simulated values show some divergence, especially in the middle pulsating inlet region.

If the case is built with SMP, the solver reports the following errors after building.

/home/feiyang/anaconda3/envs/mfix-21.1.4/lib/python3.7/site-packages/mfixgui/pymfix_webserver.py:44: UserWarning: The 'environ['werkzeug.server.shutdown']' function is deprecated and will be removed in Werkzeug 2.1.
shutdown() # will finish current request, then shutdown
Previous MFiX run is resumable. Reset job to edit model
MFiX process has stopped
Ready
project_version = '84'
Saving /ncsfs02/feiyang/Simulation/Leina_02_005_little/geometry.stl
Saving /ncsfs02/feiyang/Simulation/Leina_02_005_little/Leina_02.mfx
project_version = '85'
Saving /ncsfs02/feiyang/Simulation/Leina_02_005_little/geometry.stl
Saving /ncsfs02/feiyang/Simulation/Leina_02_005_little/Leina_02.mfx
nodesj = 1
project_version = '86'
Saving /ncsfs02/feiyang/Simulation/Leina_02_005_little/geometry.stl
Saving /ncsfs02/feiyang/Simulation/Leina_02_005_little/Leina_02.mfx
Starting env OMP_NUM_THREADS=54 /ncsfs02/feiyang/Simulation/Leina_02_005_little/mfixsolver -s -f /ncsfs02/feiyang/Simulation/Leina_02_005_little/Leina_02.mfx
At line 112 of file /home/feiyang/anaconda3/envs/mfix-21.1.4/share/mfix/src/model/dmp_modules/gridmap_mod.f (unit = 777, file = 'fort.777')
Fortran runtime error: End of file

Program received signal SIGABRT: Process abort signal.

Backtrace for this error:
** Something went wrong while running addr2line. **
** Falling back to a simpler backtrace scheme. **
#11 0x7fe1fbe712d2
#12 0x7fe1fbe71a0e
#13 0x7fe24f3343ff
#14 0x7fe24f334387
#15 0x7fe24f335a77
#16 0x7fe24f376f66
#17 0x7fe24f37d473
#18 0x7fe1fbf37f35
#19 0x7fe1fbf37f79
#20 0x7fe1fbe737a8
#21 0x7fe1fbe70f6e
Number of SMP threads: 54
MFiX (21.1.4 ) simulation on host: k186
Run name: LEINA_02 Time: 11: 5 Date: 3- 9-2022
Project version: 86
Memory required: 9.00 Mb

INFO open_files.f:374
The following mesh file was not found: LEINA_02.msh
Mesh generation will be performed and a mesh file will be saved.

The following mesh file was successfully opened: LEINA_02.msh
Reading gridmap from gridmap.dat…
/ncsfs02/feiyang/Simulation/Leina_02_005_little/mfixsolver: line 3: 31743 Aborted (core dumped) env PYTHONPATH="/ncsfs02/feiyang/Simulation/Leina_02_005_little":"":${PYTHONPATH:+:$PYTHONPATH} /home/feiyang/anaconda3/envs/mfix-21.1.4/bin/python3.7 -m mfixgui.pymfix "$@"

There is a discrepancy in the way monitor coordinates are converted to cell indices; it is not consistent with how this is done for boundary conditions. Here, some cells that should belong to the IN_JET region are being counted in the adjacent monitors. This is why you see less than 23.0954 m/s in the center region and more in some of the adjacent regions. This will be fixed in 22.1, coming out by the end of March.

Another issue is that when you run in parallel (DMP mode) you need to broadcast the new value of bc_v_g because it is only computed by process 0.

The SMP error seems to come from gridmap.dat. This file should not be used for SMP; it is only for DMP, and only as long as it is consistent with the number of cores and the partition you use.

Thank you, Jeff. How do I broadcast the new value of bc_v_g? Could you give me an example?

Use the BCAST routine outside the mype==pe_io conditional, something like:

! Interpolate data only in root process
         if(mype == pe_io) then
           call interpolate_keyframe_data(time)
           do i = 1, kf_count
             bc_v_g(i) = kf_data(i)%var_current(1)
           end do
         end if
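         ! Broadcast outside the root-only block so every DMP process
         ! receives the updated bc_v_g before the BC is re-applied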
         CALL BCAST(bc_v_g)
         call set_bc0_vel_inflow(1)

and make sure you have

use mpi_utility, only: BCAST

at the top of the subroutine:

       SUBROUTINE USR1
              
       USE USR
       use set_bc0_flow_mod, only: set_bc0_vel_inflow
       use bc, only: bc_v_g
       use run, only: time
       use mpi_utility, only: BCAST
  
       IMPLICIT NONE
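
For completeness, the two pieces combined would look something like the sketch below. I am assuming kf_count, kf_data and interpolate_keyframe_data come from the usr module as in the pulsating fluidized bed tutorial, that mype and pe_io are provided by the compar module (drop that line if they are already in scope in your usr1.f), and that the loop index i is not declared elsewhere:

       SUBROUTINE USR1

       USE USR
       use set_bc0_flow_mod, only: set_bc0_vel_inflow
       use bc, only: bc_v_g
       use run, only: time
       use mpi_utility, only: BCAST
       use compar, only: mype, pe_io   ! if not already in scope

       IMPLICIT NONE

       INTEGER :: I   ! keyframe loop index

! Interpolate the keyframe data on the root process only
       if(mype == pe_io) then
          call interpolate_keyframe_data(time)
          do i = 1, kf_count
             bc_v_g(i) = kf_data(i)%var_current(1)
          end do
       end if

! Broadcast the updated bc_v_g to all DMP processes
       CALL BCAST(bc_v_g)

! Re-apply the inflow velocity boundary condition
       call set_bc0_vel_inflow(1)

       RETURN
       END SUBROUTINE USR1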

In general, you should never assume a UDF will work as is when running in parallel (DMP or SMP). It should always be carefully reviewed and adapted to run in parallel, which requires knowledge of parallel programming. Sometimes a simple data broadcast is sufficient; sometimes it is much more involved.


Many thanks, Jeff. This code is very helpful.

Hi Jeff

I have a supplementary question. I also edited and revised calc_force_dem.f, calc_collision_wall_mod.f and drag_gs.f for my cases. In these files, variables are broadcast through !$omp; parallel directives such as !$omp do, !$omp atomic, !$omp end do and !$omp end parallel are included. Is there anything else that needs to be changed if the cases are simulated with DMP?

SMP and DMP are different parallel strategies, and the !$omp directives are not broadcasting variables. Before you modify the code, you need to become familiar with SMP and/or DMP programming (I would choose DMP if you need to run on a large number of cores).
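
As a rough illustration of the difference (the loop below and the names n, force and drag are just placeholders, not from the MFiX sources; mype, pe_io, BCAST and bc_v_g are the same names used in USR1 above):

! SMP (OpenMP): the threads on one node already share memory. The
! directives only split the loop iterations among them; no data is sent.
!$omp parallel do private(i)
      do i = 1, n
         force(i) = force(i) + drag(i)
      end do
!$omp end parallel do

! DMP (MPI): every process holds its own copy of each array, so a value
! computed on the root process has to be broadcast explicitly.
      if (mype == pe_io) then
         bc_v_g(1) = 23.0954d0    ! e.g. interpolated on the root only
      end if
      CALL BCAST(bc_v_g)          ! now all DMP processes see the value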