I am also running MFIX-Exa DEM cases. The case runs without problems on CPU but fails on GPU.
inputs.dat (62.2 KB)
And once the GPU case started, it was stuck at
I don't see how the provided setup could run. For starters, the "side inlet" in the geometry is open, so it should be a mass inflow, not an "eb" boundary condition, and therefore it cannot be used with DEM particles.
As I always suggest, start simple and add complexity to the setup systematically. You will never figure out what's wrong if you throw everything into the initial setup and it doesn't run.
Attached is your setup, simplified for non-reacting flow. I've tested this setup on both CPU and GPU.
dem-debug.tar (20 KB)
- I made the domain a little taller (0.43 m → 0.48 m) so that the width, depth, and height are all nicely divisible into 24-cube grids. "Nice grids" make it easier on the MLMG solver; see the sketch after this list.
- The side inlet, as defined in the original inputs with the provided CSG geometry, will not work for DEM. I papered over the inlet to get the case to run, but the geometry has to be modified if you want to bring DEM particles into the domain there.
- You used a fairly stiff spring constant at 2.5K. I reduced it to 25., which works for as much as I've tested. You'll have to live with long times to solution if you roll back to 2.5K, as the DEM time step is around 1.e-8 seconds.
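To make the "nice grids" point concrete, here is a minimal sketch of the relevant grid inputs. Only the 0.48 m height comes from the actual setup; the other extents and cell counts are placeholders, and the keys are the standard AMReX grid controls (check the MFIX-Exa docs for the exact names your build expects).

# hypothetical values for illustration only
geometry.prob_lo  =  0.0    0.0    0.0
geometry.prob_hi  =  0.12   0.12   0.48    # 0.48 m tall domain
amr.n_cell        =  24     24     96      # uniform 0.005 m cells in x, y, and z
amr.max_grid_size =  24                    # chop the box into 24-cube grids (1 x 1 x 4 here)

With these numbers every grid is exactly 24^3 cells, which is the kind of decomposition the MLMG solver handles well.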
The figure below shows a few different "grid" decompositions. Note that "more" isn't always "faster", and again, a single grid on a single GPU is about 5x faster than the best performing CPU case.
Do DEM and PIC need the side inlet defined in different ways? A closed-end EB tube for PIC? An open tube for DEM? @jmusser
And one thing to double check: the experimental setup is 0.43 m tall, and I need the correct height to get the right residence time for both the particles and the gas. In this case, is there another way to make the geometry easier on the MLMG solver?
Correct.
- PIC parcels can be added during a simulation through mass inflow (mi) and embedded boundary (eb) boundary conditions.
- DEM particles can only be added to a simulation through an embedded boundary (EB) surface.
The reason for this is that PIC parcels are placed inside the domain, whereas DEM particles are placed outside the domain and pushed through the boundary over several DEM sub-steps. While entering, DEM particles are shuffled around, ignoring collision forces, until they have fully entered. This is done to prevent excessive overlaps between existing and entering DEM particles.
For a little additional context,
- mass inflow boundary conditions are defined on domain extents, i.e., the faces of the bounding box that defines the physical simulation space
- embedded boundary inflows are areas of the system geometry's surface where fluid and/or particle inflow is defined. We sometimes refer to this as flow-thru-eb to distinguish it from a mass inflow.
If we were to place entering DEM particles outside the domain in the same manner that we do for the EB, they would simply get deleted because AMReX currently does not allow particles to exist outside of the domain.
One approach that is not necessarily intuitive is to add more "dead space" around the geometry: increasing the domain width and depth can provide additional flexibility in defining grids that work well with the MLMG solver (see the sketch below).
You can also play around with the cell size. However, MFIX-Exa currently requires the cells to be uniform (dx == dy == dz). There are plans to remove this limitation moving forward, but this activity has not yet started.
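As a hedged sketch of what that padding can look like (the numbers below are invented for illustration and are not taken from the attached setup):

# pad the domain in x and y so the cell counts divide evenly into 24-cubes,
# while keeping the cells uniform: dx = dy = dz = 0.005 m
geometry.prob_lo  = -0.06  -0.06   0.0
geometry.prob_hi  =  0.18   0.18   0.48    # extra "dead space" on all four sides
amr.n_cell        =  48     48     96      # 0.24/48 = 0.48/96 = 0.005 m everywhere
amr.max_grid_size =  24                    # 2 x 2 x 4 = 16 grids of 24^3 cells

The cells outside the geometry are simply covered by the EB, so the padding mostly costs some extra memory in exchange for more freedom in choosing grid sizes.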
@jmusser I am building the GPU case based on your input file by adding settings one step at a time. Here I have 2 cases:
- If I only have one species each for the gas (N2), sand (Sand), and biomass (Ash), it runs. --> inputs-1
- But when I add all the species into the inputs, it can't go through. --> input.
BTW, I used a different CSG file for the DEM case here (also for the CPU runs I mentioned).
Please help me to check it.
files.tar (70 KB)
What does "it can't go through" mean?
Jumping from one species to 30 is not an incremental change.
Does the species number cause this?
Should I increase the species number gradually to figure out the cause?
Any suggestions on the possible cause of this? Thanks!
This has the hallmarks of a GPU out-of-memory issue. One way to test is to add a few species to the fluid and see what happens. If your setup runs, add a few more and test again. If you've added all the fluid species and it still runs, start adding species to the particles. If it is an issue of running out of memory, you'll eventually find the tipping point where the simulation crashes.
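A hedged sketch of what that incremental test might look like in the inputs: solids.species is quoted later in this thread, but fluid.species and every species name other than N2, Sand, and Ash are assumptions here, so substitute the exact keys and names from your own inputs.dat.

# step 1: minimal species lists
fluid.species  = N2
solids.species = Sand  Ash

# step 2: if step 1 runs, grow the lists a few species at a time and rerun,
# e.g. the fluid list first, then the solids list
# fluid.species  = N2  O2  CO2  H2O
# solids.species = Sand  Ash  <next few biomass species>

The first list that makes the run fail tells you roughly how much species data the GPU can hold for this decomposition.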
Now let's take a look at your inputs file. Your file has the following entry:
amrex.the_arena_init_size=7516192768
This tells AMReX how much GPU memory to allocate per MPI task. Specifically, your setup will utilize ~7500 MB of GPU memory. If you're running on one of Joule3's NVIDIA H100 GPUs, that's less than 10% of the GPU memory available!
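As a quick sanity check on those numbers (assuming the 80 GB H100 variant): 7516192768 bytes = 7 GiB ≈ 7.5 GB, and 7.5 GB / 80 GB ≈ 9%, i.e. less than a tenth of the card.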
You can remove this setting from your inputs file and let AMReX use the entire GPU if you are running one MPI task per GPU, which is what we strongly recommend. The AMReX docs contain a lot of information on memory management.
The input amrex.abort_on_out_of_gpu_memory = 1 is also handy when debugging suspected GPU memory issues. As you may guess, AMReX will abort if you run out of GPU memory.
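Putting both suggestions together, the memory-related lines in the inputs would look something like this:

# let AMReX size its arena from the available GPU memory
# (remove or comment out the fixed per-rank allocation)
# amrex.the_arena_init_size = 7516192768

# abort with a clear error if the run exceeds GPU memory
amrex.abort_on_out_of_gpu_memory = 1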
If you've tried all that and you still run out of GPU memory, then you need to split the problem up and use several GPUs.
Thanks for the suggestions. I did find the tipping point where the simulation crashed, with
amrex.the_arena_init_size=7516192768
commented out.
I tried to run the case with 4 GPUs, with all the species included and non-reacting (I didn't include mfix_usr_reactions_rates_K.H). The simulation could run, but only with a very small number of sand particles.
So in this case, there are two solids: sand and biomass. Sand only has one species (Sand), while biomass has 30 species. In the code, do the sand particles also have the biomass species information saved?
All particles track all solids.species as defined in the inputs.
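As a rough back-of-the-envelope illustration of why that matters for memory (the assumption of one double per species per particle is mine, not a statement about MFIX-Exa's actual per-particle layout): 30 species × 8 bytes ≈ 240 bytes of species data per particle, so 10 million particles carry roughly 2.4 GB of species data alone, on top of position, velocity, and the other per-particle fields.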
Please define what constitutes a very small number of sand particles.
I'm guessing it's how you are decomposing the domain that is preventing you from making efficient use of the hardware. For example, if you divide the domain into 8 24-cube grids and run on 8 GPUs, you may not see much, if any, benefit because the majority of the particles will be located in the bottom one or two grids.
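To make that concrete, here is a hypothetical decomposition; the cell counts are invented for illustration and the keys are the standard AMReX grid controls:

# a tall column chopped into 8 stacked 24-cube grids, one per GPU / MPI rank
amr.n_cell        = 24  24  192
amr.max_grid_size = 24

If the bed sits in the bottom one or two of those grids, the GPUs that own the upper grids have almost no particle work, so adding more GPUs this way does little for the DEM side of the calculation.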