SQP method using particle_input.dat: particle positions are imported but cannot be successfully read

With all FPE (overflow/zero-division) checks disabled (invalid operations will result in NaN or +/-Inf as appropriate, per IEEE 754)

This is quite interesting - it took about 20 minutes to simulate 2 seconds, and there were no crashes, just some "overlap is too big!" messages.

These disc-shaped particles are spinning very fast. As entertaining as it is to watch, I’m not sure how physical it is. It might be correct. Particles tend to find their happy place and hover there, spinning madly, but eventually they drift off or are hit by another particle.

If you look closely, going frame-by-frame, you can find a few things like this:

Two particles have completely intersected in a physically impossible manner. This is perhaps to be expected, given the high spin rate - we might have just missed the collision between them. dt is 0.01 sec; it’s possible that reducing max_dt would help with these overlaps (at the cost of slowing down the simulation). But since FPE checks are disabled, this was non-fatal - the simulation continued regardless.
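A rough way to see why overlaps like this happen is the standard DEM "tunneling" rule of thumb: if a point on a particle moves more than a small fraction of the particle radius within a single step, the contact can be missed entirely. A minimal sketch (the function name, the 10% fraction, and the spin rate and size below are illustration values I made up, not numbers from this simulation):

```python
import math

def max_safe_dt(omega, radius, fraction=0.1):
    """Largest time step for which a point on the rim moves no more
    than `fraction` of the particle radius in one step.

    omega  : angular speed, rad/s
    radius : particle radius, m
    """
    surface_speed = omega * radius            # rim speed, m/s
    return fraction * radius / surface_speed  # simplifies to fraction/omega

# Illustrative numbers only: a 2.5 mm-radius disc spinning at 100 rev/s
omega = 2 * math.pi * 100         # rad/s
dt_safe = max_safe_dt(omega, 0.0025)
print(f"suggested max dt ~ {dt_safe:.1e} s")   # ~1.6e-4 s, well below 0.01 s
```

For a fast-spinning particle the safe step depends only on the angular speed, so a fixed max_dt of 0.01 s can easily be too coarse.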

This tells me that perhaps we need to reconsider the way FPEs are handled in MFiX. Disabling them altogether seems unwise - but for sections of code like the SQP code where we know large values are common (due to corners and other singularities) it might be easier to disable FPE checks than to rewrite the code to avoid doing things like (huge_number) ** 2 - checking all the inputs to all calculations would be a lot of work and could also slow down the code substantially.
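The IEEE 754 behavior being relied on here - overflow quietly producing Inf, which can then turn into NaN and propagate - is easy to see in plain Python, which (unlike a Fortran build with FPE trapping enabled) does not trap overflow on float multiplication:

```python
import math

huge = 1e308
squared = huge * huge          # overflows: no crash, result is +inf
print(math.isinf(squared))     # True

diff = squared - squared       # inf - inf is undefined: result is NaN
print(math.isnan(diff))        # True

# NaN propagates through further arithmetic instead of stopping the run
print(math.isnan(diff + 1.0))  # True
```

This is exactly the non-fatal behavior seen above: the run continues, and bad values simply flow downstream.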

Or perhaps we could add a control to the Numerics pane to enable/disable the various types of FPE check.

Next things to try:

  1. Change shape exponents
  2. Add more particles

Here’s a plot of simulation time vs. real time. Note that when particles are colliding, the slope is much lower. With fewer collisions to check, it speeds up.

Also I found that unfortunately there was a mistake in my previous runs - I had inadvertently put in the bounding diameter for m=2, n=6 rather than m=2, n=8. Oops. (As I said, having to specify this is error-prone). The numbers are very close, 0.012851 vs. 0.012788, so perhaps this was not too big of an error. Running again with the correct value gives fairly similar results:


(note that the range of the colorbar for y-velocity has been expanded, so that small velocities are no longer over-emphasized)
and there are still a few overlaps:

Next: change exponent from 8 to 6

Ok, changing n=8 to n=6 and updating the bounding diameter did not make any difference, other than changing the shape of the particles slightly (more rounded corners, flat faces a little less flat). It still dies with an FPE if overflow checks are enabled, and runs if they are disabled. And we still have particle overlaps. So I don’t think restricting the shape exponent to values <= 6 is the solution, at least not in this case.

Thank you very much for your patient reply! @cgw
I will study carefully to see if I can solve the problem of excessive particle overlap. Thank you again for your help!

Thank you for your reply, Mr. Gao. I will give it a try.

I’m not sure the occasional overlaps are something you can solve … I believe this is due to the high rotational speed.

For reference - to turn off FPE checks, currently you must edit the file $CONDA_PREFIX/share/mfix/src/model/gfortran_init.c
and comment out or remove the line

   _gfortran_set_fpe(13);

and recompile.

Note that you must NOT “Copy the file to source directory for editing” but must edit it in-place. This means that ALL mfix simulations will pick up the change. Be advised. Currently the override-source-files-in-project-dir feature only works for Fortran files, not C - I will fix this for the next release.

Taking this one step further, I tried simulating all 4500 particles using 100 cores on our cluster, with FPE checks disabled. I increased the particle spacing with the above respace.py script, and changed the grid slightly (6x150x3) so I could use a 2x50 DMP decomposition.

The simulation starts, but diverges with DT < DT_MIN

Setting DT_MIN to 0 just to see what happens … it decreases until it’s on the order of 1e-11, and after that simulation time just slows to a crawl.

Turning off the fluid solver (pure granular flow) led to a similar result, so the problem is not in the fluid phase.

So, I’m afraid the conclusion is - at this point, it’s not feasible to simulate 4500 non-spherical particles - the limit is not CPU speed but dt becoming vanishingly small.

I will follow up here if I find any solution to this.

Thanks for the report, SQP is a new feature and your feedback helps us improve it.

– Charles

Thank you for your help! @cgw
I will give it a try. I am trying to change the bounding diameter to that of a sphere with the same volume as the non-spherical particle, since this ensures that the volume of the non-spherical particle equals the volume of the equivalent sphere. However, it is obvious that the particles then overlap with each other, and I am not sure if changing this parameter will affect the calculation results.

Go ahead and try it, but I think that will give a bounding sphere that is too small, and collisions will be missed. The bounding sphere is used as a first step in collision checks - if two particles are separated by more than the bounding diameter, then they cannot possibly touch, and the more expensive contact check can be skipped. If you deliberately make the bounding sphere too small, you will cause many collisions to be missed. It’s important for the bounding diameter to be accurate, which is why I think it should be calculated by MFiX and not specified by users.
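The broad-phase logic described above can be sketched in a few lines (the function name here is hypothetical, not an MFiX routine). Note how an undersized bounding diameter makes the cheap test reject a pair that is actually touching:

```python
import math

def may_collide(p1, p2, bounding_diameter):
    """Cheap broad-phase test for two equal-size particles: if the
    centers are farther apart than the bounding diameter, the
    particles cannot possibly touch (provided the diameter is
    accurate), and the expensive contact check can be skipped."""
    return math.dist(p1, p2) <= bounding_diameter

# Two particles of true bounding diameter 1.0 whose surfaces just touch:
a, b = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
print(may_collide(a, b, 1.0))   # True  -> narrow-phase contact check runs
print(may_collide(a, b, 0.9))   # False -> this real contact is missed
```

With a too-small diameter the narrow-phase check is never even reached, which is why collisions get silently missed.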

Thank you for your advice. @cgw
If so, are the bounding spheres of non-spherical particles actually calculated during the MFiX run?

I noticed several issues with the settings:

des_epg_clip = 0.42 ! set it to 0.1; since these are non-spherical particles, packing is much denser, which might cause the divergence with DT < DT_MIN

neighbor_search_n = 25 ! reducing this might help convergence
neighbor_search_rad_ratio = 1.2 ! add this to avoid missing collision neighbors

Use a coarser grid; I would recommend a mesh size of about 2 bounding sphere diameters.


Thank you for your guidance, Mr. Gao. I will give it a try.

Thanks @gaoxi - that helps a lot!

With the changes you suggested, dt is now on the order of 1e-3 instead of 1e-11 and the simulation proceeds, albeit a bit slowly (estimated time remaining: 8 hours, to reach t=5.0).

DT:

Simulation time:

Since I am using a coarser grid as you suggested, I can only use 40 DMP cores. Perhaps a combination of SMP and DMP would help? (Is that supported? I suppose I’ll try it)

Here’s the updated project file:
N2D-LSFB.mfx (14.1 KB)

I’m going to let this run a bit longer and then upload a movie. Film at 11 🙂

The bounding sphere will be calculated if not specified. But due to limitations of the particle_input.dat format it is not possible to leave it unspecified 🙁
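For the curious, here is a rough sketch of how such a bounding radius could be computed. I'm assuming the common superquadric surface (|x/a|^m + |y/b|^m)^(n/m) + |z/c|^n = 1 (this form and the helper name are my assumptions, not MFiX's actual code). Along a ray t·u the left-hand side scales as t^n, so the surface distance in direction u is G(u)^(-1/n), where G is the left-hand side evaluated at the unit vector u, and the bounding radius is the maximum over directions:

```python
import math

def bounding_radius(a, b, c, m, n, steps=200):
    """Max center-to-surface distance of the superquadric
    (|x/a|^m + |y/b|^m)^(n/m) + |z/c|^n = 1,
    found by scanning directions on the unit sphere."""
    best = 0.0
    for i in range(steps + 1):
        theta = math.pi * i / steps            # polar angle
        for j in range(2 * steps):
            phi = math.pi * j / steps          # azimuthal angle
            ux = math.sin(theta) * math.cos(phi)
            uy = math.sin(theta) * math.sin(phi)
            uz = math.cos(theta)
            # shape function at the unit direction; surface is at t = g**(-1/n)
            g = (abs(ux / a) ** m + abs(uy / b) ** m) ** (n / m) \
                + abs(uz / c) ** n
            best = max(best, g ** (-1.0 / n))
    return best

# Sanity check: with m = n = 2 this is an ellipsoid, radius = max semi-axis
print(round(bounding_radius(1.0, 1.0, 1.0, 2, 2), 6))   # 1.0
```

The bounding diameter is twice this value; larger exponents push the radius from the max semi-axis toward the corner distance sqrt(a² + b² + c²), which is why the n=6 and n=8 diameters above differ only slightly.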

If you are using automatic particle generation you can leave it blank. This is why the control in the Solids/Materials pane is labeled “Optional”

Okay, I understand what you mean.

I was able to simulate all 4500 particles with FPE overflow checks turned off, and Xi’s suggestions re: des_epg_clip, etc. DT stayed reasonable and I was able to reach 2.4 seconds of simulation time in about 12 hours, using 40 CPU cores (1x40x1) and a mesh of 5x120x2 cells, to make the mesh size approx 2x the particle size as recommended.

There’s a little glitch in the simtime plot, not sure where that came from.


Unfortunately after 12 hours the simulation crashed with:

************************************************************************
 From: DESMPI_UNPACK_PARCROSS: 
 Error 1000: Unable to match particles crossing processor boundaries.
   Source Proc:         5 ---> Destination Proc:         4
   Global Particle ID:         4450
 ************************************************************************

but at least this is a new error.

Here’s the movie:

A few interesting things: The particles drop and stack up perfectly in vertical columns. Then you can see velocity waves propagate upwards - the color scale is Y-velocity, upward-moving particles are yellow and falling ones are blue/purple - and the ones that are stationary are a lovely shade of turquoise (specular highlighting used to achieve the 3-D effect may shift the colors slightly).

The particle stacking doesn’t seem physically realistic to me (try stacking quarters vertically, on their edges!) but they were perfectly aligned when they dropped, so perhaps the force balance is just right to keep them in this unstable equilibrium. Or perhaps the upward gas flow stabilizes them. I am not sure. But it would be interesting to add a small random variation to the initial positions, or perhaps tilt the discs so they are not perfectly vertical.

After a while the gas starts to blow a few discs out of position, the stacked columns start to collapse, a “finger” on the right persists for a long time. After about 2s it seems clear that not much more will happen (despite one high-flyer, did it get flipped up by a falling particle?), the gas pressure is not sufficient to move this whole pile of discs, and the energy seems to be getting diffused before it reaches the top so the discs on top do not get blown up either. A big contrast to the 45-particle case where they levitate! Do you have an experimental setup which you can compare this to?

This has been a very good case for testing SQP and finding its limitations. Thanks. I have to move on to other things for now but please let us know if you find anything interesting.

– Charles

Thank you for your help! @cgw
I will try my best.

Hi all,

I think I am having a similar issue to this. I am trying to import input particle data for SQP particles into my rotating drum simulation (see attachments), but I get an error of too many columns (11 instead of 8) in the input particles. However, when I remove the semi-axis information (the three extra columns) and run again, an error pops up saying the particle dimensions aren’t specified (as I have just deleted them).
I’m unsure why it does not read the data input correctly for SQP shapes.
The simulation is currently just testing with 200 particles, but I want to scale up to 2000 once this works. I have lined up the geometry for the ‘loading’/input particle data with the rolling drum dimensions - could that potentially have an effect when trying to load particles in?
Any help would be greatly appreciated ASAP; unfortunately my diss won’t write itself.
3Freecad_140_30_3.stl (16.3 KB)
data_kf_0001.txt (56 Bytes)
particle_input.dat (37.2 KB)
rotating_drum_1.mfx (75.0 KB)

This is the error message:

Message from des/read_particle_input.f:100
Reading DEM particle input file version: 3.0
Detecting number of columns from the first line in the data.
Expected number of columns = 12
Error from des/read_particle_input.f:4243
Detected number of columns = 11
Error 1150: Incorrect number of columns in particle_input.dat file.
Please verify the data matches the following column layout:
Variable:             Column(s)
Coordinates (x,y,z):  1 - 3
Phase ID:             4
Diameter:             Not needed (uniform value)
Density:              Not needed (uniform value)
Velocity (u,v,w):     5 - 7
Temperature:          Not needed (Energy equation is not solved)
Species:              Not needed (Species equation(s) are not solved)
User scalars:         8 - 8

It is expecting 12 columns because you turned on user scalar tracking (one scalar).

You need to provide initial data for each tracked scalar. Here you need to add column 8.
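As a sketch of the fix - inserting an initial value for the tracked scalar at the column position the error message reports - something like this could post-process the data rows. The function name is hypothetical, this handles only the plain whitespace-separated data rows, and the real particle_input.dat v3.0 has a header section that must be left untouched, so treat it as illustration only:

```python
def insert_scalar_column(rows, position, initial_value=0.0):
    """Insert a constant user-scalar value as column `position`
    (1-based) into each data row of a particle input table."""
    idx = position - 1
    return [row[:idx] + [initial_value] + row[idx:] for row in rows]

# One 11-column row (x y z, phase, u v w, plus shape columns) -> 12 columns
rows = [[0.1, 0.2, 0.3, 1.0, 0.0, 0.0, 0.0, 0.01, 0.01, 0.01, 0.02]]
out = insert_scalar_column(rows, 8)
print(len(out[0]))   # 12
```

The exact column position for the scalar depends on which optional fields (diameter, density, temperature, species) are present, so check it against the layout printed in the error message rather than assuming it.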