Particle neighbour contacts exceeded for tangential history when EB particle injection is used

Hi MFIX-Exa developers,

I am trying to reproduce a test case from Li et al. (https://doi.org/10.1016/j.powtec.2023.119155), in which a slurry is injected into the domain.

In this setup, particles enter the domain through a side embedded boundary (EB), and the simulation requires rolling friction to be enabled.

The simulation runs fine without rolling friction, but enabling tangential history (for rolling friction) causes it to abort with the following error at the first time step:

3::Assertion `(contact_idx != -1)' failed,
file "../src/des/dem/mfix_pc_interactions_K.H", line 175
Msg: Particle neighbor contacts exceeded !!!

log.txt (10.8 KB) and Backtrace.txt (5.8 KB)

I have attached a minimal case to reproduce the issue:

inputs.txt (4.3 KB) and Li_et_al_A_0.0_v1.stl (2.2 KB) for the geometry

Any suggestions on how to run this simulation with rolling friction enabled would be greatly appreciated.

I’ll take a more detailed look at this later today, but for starters, just jack up your number of neighbors for tangential history and see if it runs, e.g.,

dem.tan_history.max_contacts = 100

Thanks so much for the suggestion!
I tried setting dem.tan_history.max_contacts to 100, 200, 500, and even 2000, but it still aborts with the same error.

log_max_contacts100.txt (10.8 KB)

Non-update update… I set this up for myself yesterday and ran into (presumably) the same issue as you, so I suspected it was a bug/something we haven't tested previously. I set up a small reproducer today for the developers by attaching a particle inlet to a benchmark problem, and it ran just fine. So now I'll have to tease out what's different between the original problem and the reproducer that failed to reproduce the issue.

OK, so it turns out my reproducer only successfully ran the 10 steps I originally asked it for. When I tried to run longer today, it died on step 19. I also noticed the crack case I set up could run through one step if I decreased the solids volume fraction from 3% to 0.3%. I'll open an issue with the reproducer for the developers and mention it in the meeting tomorrow (11am ET). Call-in details will be in the release notes later today if you would like to join.

Thanks so much for the update! I suspect the issue may be influenced by both the solids volume fraction and the inlet area relative to the particle diameter. Our case of interest involves a relatively high solids volume fraction (>3%) with a small inlet, so this regime is particularly important for us. It would be really helpful if this scenario could be considered while investigating the issue.

The issue has been identified and corrected. I’ve asked @wfullmer to run a validation test to confirm that the underlying model is now behaving as expected. I also made some updates to the DEM mass inflow code to better handle higher-density inflows. These improvements should be included in the next release.


Thank you so much for the update! Looking forward to the next release and trying it out.

@jl1 26.04 was just released; please give it a try when you have the opportunity. It should fix the bug you reported related to the tangential history term on inflowing particles.

Apologies up front if this reply is a little confusing, but I'm going to post what I have as it may be of some value to you. I looked up the reference, set this case up for myself, and tried to take some notes along the way about why I made certain decisions/options/etc. But then I ran into the same issue reported here, and by the time @jmusser fixed it I was on to several other things and had completely forgotten what I intended to say about this case. I did have some comments on the geometry jotted down:

  • It seems like this domain should be sized based on the depth, which is Lz = 9 mm. I'm going to discretize that with nz = 8 fluid cells: dx = 9/8 = 1.125 mm, giving dx/dp = 1.125/0.3 = 3.75.
  • The height Ly = 0.1 m isn't a nice multiple of Lz (Ly/Lz = 11.111), so I'm going to choose Ly = 12*Lz = 0.108 m with ny = 96 and center the box (crack) in y. That leaves a gap of (0.108 − 0.1)/2 = 4 mm ≈ 3.556*dx on each side, so the first (bottom-wall-adjacent) cell is only ~44.4% fluid, i.e., more than 50% covered. This may not be ideal because it looks like you're interested in this boundary, so you might want to adjust Ly up or down so that the first cell is full. But making the geometry off-center in the domain may slow down the MLMG solver.
  • I don't see the dimensions of the inlet. The height looks to be about Lz, so I'm going to use Lz. The length looks to be about 1.5*Lz, but the particles are magically appearing halfway along it, so I'm going to take some liberties with this one…
  • The length of the crack isn't a nice multiple of Lz either: Lx = 0.2 m, Lx/Lz = 22.222. So take ceil((Lx/dx)/16) = 12 → nx = 12*16 = 192, Lx = 192*dx = 0.216 m. Again we have the question of whether to center it or align the left edge of the box with a cell boundary… I'm going to center it for now and consider the other option later. The resulting domain block is sketched below.
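
For reference, here's a minimal sketch of the domain block those choices produce, assuming the origin sits at the lower corner of the domain (the attached inputs may place it elsewhere):

geometry.prob_lo = 0.0    0.0    0.0      # m
geometry.prob_hi = 0.216  0.108  0.009    # Lx = 192*dx, Ly = 12*Lz, Lz = 9 mm
amr.n_cell       = 192    96     8        # uniform dx = dy = dz = 1.125 mm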

The inputs, including an OpenSCAD geometry, are attached in this zip file:
inputs.zip (4.1 KB)

Interestingly, for my setup, I don't see the particles stacking up in the inlet region as in Jordan's video above. I believe he also tweaked your inputs a little, maybe less so than I did; if he shares those too, it will give you a couple of things to look into. Let us know if you have any other issues. For now, I'm going to mark this as solved by the latest release.

I’ve attached the case files I ran for reference. Like @wfullmer, I made a number of modifications—and I’ve lost track of all the details—but these are the two major ones that matter for this discussion:

  1. Switched the geometry to a CSG file instead of using STL.
    I was getting a GPU‑side error from the STL reader, and converting the geometry to CSG ended up being the quickest workaround.

  2. Slightly modified the domain size so that the cell counts are powers of two:
    amr.n_cell = 256 16 128
    This allows the MLMG solver to fully coarsen. Since I was running on a single GPU, I changed the setup to a single grid. If you're planning to run with MPI across CPUs, you'd probably want to reduce the max grid size for better load balancing; see the sketch below.
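
For example, here's a minimal sketch of how that single grid could be broken up for MPI, assuming the standard AMReX grid controls apply here (the values are illustrative, not what I actually ran):

amr.n_cell          = 256 16 128
amr.max_grid_size   = 64    # cap the box size so the domain splits into several boxes across ranks
amr.blocking_factor = 8     # keep every box a multiple of 8 cells so MLMG can still coarsen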

forum_06627.csg (200 Bytes)
inputs.forum.06627.txt (4.5 KB)