Regarding the grid numbering problem when using DMP for parallel computing

When a particle sits on the decomposition line of the parallel domain, the IJK of the cell containing the particle cannot be found in the interior mesh. The first image shows the node numbering of my DMP decomposition. The third image shows a particle sitting on the dividing line between two domains of the X-decomposition; the IJK obtained for this particle with IJK = PIJK(NP,4) is 248, which does not appear among the fluid cell numbers. Similarly, the fifth image shows two particles located on the domain boundary of the Y-decomposition; their IJK is 402, which also does not appear in the fluid mesh numbering. This results in avgDES_T_s(IJK) being 0 for these particles.
[images: node numbering of the DMP decomposition, and particles located on the X- and Y-decomposition boundaries]
@cgw @jeff.dietiker I am asking this question because when I use des_usr_var(13,NP) = avgDES_T_s(IJK), the particles that sit on the boundary of the parallel domains end up with avgDES_T_s(IJK) = 0. This bothers me because I need to store avgDES_T_s(IJK) on the particle. Would love to hear back from you guys!
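For context, the lookup in my user routine looks roughly like this (a simplified sketch, not my exact code; the module names and the IS_NORMAL check reflect my understanding of the usual MFiX layout and may differ slightly):

   USE discretelement, only: MAX_PIP, PIJK, DES_USR_VAR
   USE des_thermo, only: avgDES_T_s
   USE functions, only: IS_NORMAL

   INTEGER :: NP, IJK

! Store the cell-averaged solids temperature on each particle
   DO NP = 1, MAX_PIP
      IF(.NOT.IS_NORMAL(NP)) CYCLE        ! skip unused particle slots
      IJK = PIJK(NP,4)                    ! fluid cell containing particle NP
      DES_USR_VAR(13,NP) = avgDES_T_s(IJK)  ! comes out as 0 on the decomposition line
   ENDDO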

Hi @suzx_forward. We do our best to answer all questions posted to the forum. We will get to your question soon. Thanks for your patience.

Thank you so much! I look forward to your suggestions.

Can you please try to add a send/receive call at the bottom of subroutine CALC_avgTs (below the end of the IJK loop), in file model/des/des_thermo_rad.f:

  1. Put

     USE sendrecv, only: send_recv

     at the top of the routine.

  2. Add

     call send_recv(avgDES_T_s)

     below the IJK loop.
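Schematically, the end of the routine would then look like this (a sketch only; the loop bounds and loop body are placeholders for the existing code, not the actual contents of des_thermo_rad.f):

   SUBROUTINE CALC_avgTs

! Added: DMP halo-exchange routine
   USE sendrecv, only: send_recv
! ... other existing USE statements and declarations unchanged ...

! Existing loop that fills avgDES_T_s on the cells owned by this rank
   DO IJK = IJKSTART3, IJKEND3
! ... existing averaging code, unchanged ...
   ENDDO

! Added: exchange ghost-cell values so each rank also has valid
! avgDES_T_s in the cells it borrows from its neighbors
   call send_recv(avgDES_T_s)

   END SUBROUTINE CALC_avgTs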

Thank you so much Jeff, this really works. But I would also like to know: what makes it work?

In DMP, each rank handles its own set of cells, and there are ghost cells at the edge of each rank's domain. The data in the ghost cells needs to be exchanged so information can pass from one rank to another. Here we forgot to do the exchange (the send/receive call was missing). This is a bug, thanks for reporting it. We will fix it in the next point release.
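To illustrate the idea outside of MFiX, here is a minimal standalone sketch of a 1-D halo exchange written directly with MPI (this is not MFiX code; the array size and values are made up). Before the exchange the ghost cells hold 0.0, which is exactly the symptom you saw; after the exchange they hold the neighboring rank's boundary value:

   ! Minimal 1-D halo-exchange demo: each rank owns n interior cells
   ! plus one ghost cell on each side.
   program halo_demo
      use mpi
      implicit none
      integer, parameter :: n = 4
      double precision :: u(0:n+1)        ! cells 1..n owned, 0 and n+1 are ghosts
      integer :: rank, nprocs, left, right, ierr
      integer :: status(MPI_STATUS_SIZE)

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

      left  = rank - 1; if (left  < 0)       left  = MPI_PROC_NULL
      right = rank + 1; if (right >= nprocs) right = MPI_PROC_NULL

      u = 0.0d0                           ! ghost cells start at zero
      u(1:n) = dble(rank) + 1.0d0         ! "computed" values on owned cells

      ! Send my last owned cell to the right neighbor's left ghost and
      ! receive my left ghost from the left neighbor, then the reverse.
      call MPI_Sendrecv(u(n),   1, MPI_DOUBLE_PRECISION, right, 0, &
                        u(0),   1, MPI_DOUBLE_PRECISION, left,  0, &
                        MPI_COMM_WORLD, status, ierr)
      call MPI_Sendrecv(u(1),   1, MPI_DOUBLE_PRECISION, left,  1, &
                        u(n+1), 1, MPI_DOUBLE_PRECISION, right, 1, &
                        MPI_COMM_WORLD, status, ierr)

      print '(A,I0,A,2F6.1)', 'rank ', rank, ' ghosts after exchange: ', u(0), u(n+1)
      call MPI_Finalize(ierr)
   end program halo_demo

The call send_recv(avgDES_T_s) in MFiX plays the same role for the avgDES_T_s field: it fills the ghost cells of each rank with the values computed by the neighboring ranks.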


OK, thank you very much Jeff for your reply.