Abnormal particle contact time in DEM simulations with DMP parallelization

test.zip (37.4 KB)

Hello everyone,

I am currently working on a CFD–DEM coupled model for particle sintering at high temperature. In my sintering force model, the key variable is the contact (sintering) time between particle pairs, so it is essential to correctly track the particle–particle contact duration.

My implementation follows the way the van der Waals (VDW) force is handled in calc_force_dem. Specifically, I modified the neighbor-pair data structure PFT_NEIGHBOR(:, CC) used in the DEM neighbor search and added an extra component to store the particle-pair contact time:

PFT_NEIGHBOR(4, CC) = T_CONTACT

Here, T_CONTACT represents the accumulated contact time for each particle pair.
For debugging and visualization, I also store the contact time in DES_USR_VAR_1.
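
In essence, the accumulation is just a few lines inside the contact loop. The following is a simplified sketch rather than the exact code from the attached case; it assumes LL and CC are the particle and pair indices of the contact loop in calc_force_dem.f, DTSOLID is the DEM time step, and DES_USR_VAR uses the (variable, particle) ordering:

! simplified sketch: accumulate the pair contact time each DEM step while the pair is in contact
PFT_NEIGHBOR(4,CC) = PFT_NEIGHBOR(4,CC) + DTSOLID
T_CONTACT = PFT_NEIGHBOR(4,CC)
DES_USR_VAR(1,LL) = T_CONTACT   ! written out for visualization as DES_USR_VAR_1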

After extensive testing, I observed the following:

  1. Serial runs and DMP builds executed on a single core behave correctly:

    • The contact time (tcontact) is accumulated as expected (see Fig. 1).

    • The resulting cohesive (sintering) force is physically reasonable.

  2. Multi-core DMP parallel runs show abnormal behavior:

    • At DMP subdomain boundaries (tested with DMP = 3 × 3 × 1), the contact time tcontact is unexpectedly reset.

    • This leads to discontinuities in the cohesive force.

    • In Fig. 2, DES_USR_VAR_1 clearly shows the abnormal reset of particle contact time across DMP boundaries.

To isolate the issue, I have tried the following:

  • Modified mpi_pack_des_mod and mpi_unpack_des_mod to include the new contact-time variable (see the conceptual sketch after this list), but this did not resolve the problem.

  • Tested a simplified case where tcontact is replaced by a prescribed function (i.e., not accumulated from particle-pair contact history).
    In this case, DMP parallel runs work correctly, with no abnormal behavior.
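
The pack/unpack change mentioned in the first item above was, conceptually, just adding one more real per neighbor pair on both the packing and the unpacking side. The sketch below uses placeholder names (buf, lbuf); the actual MFIX routines use their own DMP send/receive buffers and counters:

! sketch with placeholder buffer names; the pack and the matching unpack
! must use the same order and the same number of entries per pair
do ii = 1, 4          ! was 1, 3 before the contact-time component was added
   buf(lbuf) = PFT_NEIGHBOR(ii, CC)
   lbuf = lbuf + 1
end do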

Based on these results, I believe the issue is specifically related to the data exchange of particle-pair contact time (tcontact) across DMP subdomains, rather than the force model itself.

This issue has been troubling me for quite a long time. I would greatly appreciate any guidance from the community or the developers on how this problem should be handled properly, or whether there exists a simpler and more robust approach to record and update the particle-pair contact time in DEM simulations, especially under DMP parallelization.

I have attached my test case below.
The abnormal behavior can be observed within 2–3 minutes of runtime.
All my code modifications are clearly marked with **** yq **** comments for easy identification.

Thank you very much for your time and help.
Finally, I wish everyone a Merry Christmas :christmas_tree:

Dear developers,
I am sorry to follow up on this topic. I have not yet been able to resolve the issue, and I would sincerely appreciate it if you could kindly take a look and share any insights when you have time. Thank you very much.

OK, I see your post here. Yes, you are right: the CC logic has problems under DMP parallel. One thing you can try is to construct an array contact_time_array(:,:); then, for a particle ii and its neighbor particle jj, set contact_time_array(ii,jj) = T_contact. You then need to go to mpi_pack and mpi_unpack so that each rank passes this array to the other ranks.


Hi Dr. Ke,

Thank you again for your helpful reply. I have a follow-up question to further clarify your suggestion.

When you mentioned constructing contact_time_array(:,:) and setting contact_time_array(ii,jj) = T_CONTACT, my initial attempt was to build a global dense (N × N) matrix over all particles to store the contact time for each particle pair. However, as the number of particles increased, the memory requirement of such a matrix became prohibitively large and eventually crashed the simulation. This is why I later switched to MFIX's neighbor-based approach using neighbor(CC) and PFT_NEIGHBOR.
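
Just to quantify this: even 10^5 particles would already require a dense double-precision matrix of 10^5 × 10^5 × 8 bytes ≈ 80 GB, and the cost grows quadratically with the particle number.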

In your suggestion, contact_time_array(:,:) stores the contact time between particle ii and its neighbor particle jj. My question is: if this array is not based on the existing neighbor list (neighbor(CC)), and also not implemented as a global (N × N) matrix indexed by particle IDs, how would you recommend identifying the neighbor particle jj of particle ii and organizing contact_time_array in practice?

Additionally, is there any existing variable or array structure in MFIX that you would recommend as a reference for implementing such a pair-based data structure (especially one that is safely communicated under DMP via mpi_pack / mpi_unpack)?

This issue has been troubling me for quite a long time, and I sincerely hope you might be able to offer some guidance.

Best regards

In addition, I have also tested this issue using the newer MFIX v25.3, and the attached files contain the corresponding test cases.

test.zip (33.0 KB)

In calc_force_dem.f there is something like
cc_start = neighbor_index(ii-1)
cc_end = neighbor_index(ii)
do cc = cc_start, cc_end
   jj = neighbors(cc)
   ...
end do
i.e. jj = neighbors(cc) gives you the neighbor particle jj of particle ii; you can see this treatment in calc_force_dem.f.

In DMP, ii and jj are both local particle indices on each rank.
When you have particle inflow, the global N × N matrix has to keep growing, so at some point you will hit memory problems because of the N^2 scaling.

One thing you can try is to separate and flatten the matrix.
Say you have particles ii, jj, kk with the neighbor pairs ii-jj, ii-kk, jj-kk; then you can construct
neighbor_pair_array = [ii, jj, ii, kk, jj, kk]
contact_time_array = [time for ij, time for ik, time for jk]
and maintain those two arrays during the simulation. Of course, you still need to find a way to pass this information between ranks; just avoid relying on passing CC between ranks. A rough sketch of this idea follows.
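
A minimal sketch of what such a flattened pair structure could look like, keyed by global particle IDs so the entries remain meaningful when particles migrate between ranks. Everything here is illustrative (the module, array names, and linear-search lookup are not existing MFIX structures), and the MPI exchange of the two arrays is not shown:

module pair_contact_time
   implicit none
   integer :: n_pairs = 0
   integer, allocatable :: pair_gid(:,:)              ! (2, max_pairs): global IDs of each pair, smaller ID first
   double precision, allocatable :: pair_tcontact(:)  ! accumulated contact time per pair
contains
   subroutine update_pair_time(gid_i, gid_j, dt)
      ! add dt to the contact time of pair (gid_i, gid_j); append the pair if it is new
      integer, intent(in) :: gid_i, gid_j
      double precision, intent(in) :: dt
      integer :: p, g1, g2
      g1 = min(gid_i, gid_j)   ! order the IDs so (i,j) and (j,i) map to the same entry
      g2 = max(gid_i, gid_j)
      do p = 1, n_pairs
         if (pair_gid(1,p) == g1 .and. pair_gid(2,p) == g2) then
            pair_tcontact(p) = pair_tcontact(p) + dt
            return
         end if
      end do
      n_pairs = n_pairs + 1    ! new pair (allocation growth and bounds checks omitted in this sketch)
      pair_gid(1,n_pairs) = g1
      pair_gid(2,n_pairs) = g2
      pair_tcontact(n_pairs) = dt
   end subroutine update_pair_time
end module pair_contact_time

The key point is that the pairs are identified by global particle IDs (e.g. iglobal_id in MFIX) rather than by the local ii/jj or CC indices, so the two flat arrays can be packed and unpacked between ranks like any other per-pair data.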

Or you can find a way to map CC between ranks, e.g. CC_rank0 → CC_rank1; then you can keep updating PFT_NEIGHBOR on each rank.

I could be wrong, but this is what I currently have in mind.


Hi Dr. Ke,

Thank you very much for the detailed explanation. I will try implementing this approach as you suggested, and I will continue the discussion here if I encounter further questions.