MFiX SQP Bug Report

Hi Team,

I’m trying to set up an SQP (superquadric particle model) case to fluidize particles in a thin fluidized bed. The thickness along the Z direction is very small compared to the dimensions in X and Y. I initialized 359,848 particles, and each individual particle is shaped like a football. I’m using MFiX 23.2 and, in spite of many attempts, I could not get past the first time step.

  1. Please find the HPC output file attached in the compressed folder; it clearly shows that the solver build succeeds, followed by particle initialization. After that, the job just sits on the node for hours without any progress. We have tried this from 8 CPUs up to 128 CPUs, doubling the count each time, but all in vain.

  2. The attached folder also has the job submission file and the .mfx file for your perusal. Could you please take a look and suggest what’s going wrong?

Attachment: toNETL.zip (11.6 KB)

Thank you,

First thing to try is to radically reduce the number of particles. Try 10, then 100, etc.

Hi @cgw, thank you for your reply. The simulation does progress for smaller numbers of particles such as 100, but that’s not going to match my experimental data. How can we simulate at least half of the particles I mentioned in my original request? I’ll try to come up with periodic or cyclic boundaries to reduce the count. Thank you,

The existing SQP code is not going to scale well to half a million particles. The idea of trying with fewer particles is to give you an idea of the scaling behavior. This is very new code and we’re still learning its limitations. Unfortunately I don’t have a “silver bullet” to make your SQP simulations run faster.

Another thing to try would be the same large number of spherical particles as a baseline… expect the SQP code to be at least 10x slower than that.

Okay, thank you for the information @cgw. What, in your view, might be a good maximum number of particles to try with this model? I have some room to play with the particle diameter and system dimensions to reduce the count. All simulations are plain fluidization at different gas flow rates (no reactions, no cut cells), and I have access to up to 128 CPUs.

This is really an empirical question, depending on the performance of your system and how much time you are willing to spend. You’ll have to experiment with different numbers of particles to get an idea of the scaling behavior. I do not currently have access to 128 cores for testing (our HPC cluster is undergoing maintenance). Let us know what you find!

Okay @cgw, I’ll try something this weekend. Did you find a chance to look into the job submission script? Could you let me know if there’s room for improvement to speed up the simulation?

Again, no silver bullets…

I looked at the submission script and it looks mostly reasonable. But where is that configure script coming from? MFiX uses CMake, not autoconf, so I don’t know what that is.

Also, I don’t know anything about your cluster, so I don’t know if -march=znver3 makes sense. build_mfixsolver internally sets -march=haswell because we found that to be a good compromise between portability and performance. You can always try -march=native and -O3; this might get you a few percent improvement. But I’m not at all clear how the flags from configure are getting passed to build_mfixsolver. Also note that if nodesi/nodesj/nodesk are specified in the .mfx file, you don’t have to pass them on the command line (this of course will not affect performance).
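If you want to experiment with flags, a simpler route is to pass them to build_mfixsolver directly and drop the configure step entirely. A minimal sketch, assuming your build_mfixsolver version accepts FCFLAGS on the command line (check build_mfixsolver --help; the -O3/-march=native choices are optional experiments, not recommendations):

```bash
# Pass Fortran flags straight to build_mfixsolver instead of going through configure.
# (-O3 and -march=native are things to try, not guaranteed wins.)
build_mfixsolver --dmp FCFLAGS="-O3 -march=native"

# And in the .mfx file, so the decomposition need not be repeated on the mpirun line
# (a 4x2x1 split, purely as an example):
#   nodesi = 4
#   nodesj = 2
#   nodesk = 1
```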

– Charles

Okay @cgw. Could you please share what a job submission script looks like at your end? I’ll try both and see which gives me better turnaround.

Job submission to the queue is really a local issue, and you should ask the local experts who know the details of your setup. I primarily use the MFiX GUI, which has its own built-in job submission templates; see the queue_templates directory in the MFiX distribution (these are pretty generic). I also use the prebuilt solvers that come with the distribution, so no compilation is necessary.
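For what it’s worth, a minimal batch script along the lines of those templates might look like the sketch below. This assumes a SLURM scheduler, a conda environment named mfix-23.4.1, an 8-rank run, and a placeholder case file name, all of which you should adapt to your site (your local HPC support will know the right headers):

```bash
#!/bin/bash
#SBATCH --job-name=sqp_bed       # placeholder job name
#SBATCH --nodes=1
#SBATCH --ntasks=8               # must equal NODESI*NODESJ*NODESK
#SBATCH --time=24:00:00

# Assumed conda environment name; adjust to your installation
# (or use "module load ..." / "conda activate ..." as appropriate at your site).
source activate mfix-23.4.1

# Prebuilt DMP solver (MFiX 23.4+); with 23.2, use the locally built ./mfixsolver instead.
mpirun -np 8 mfixsolver_dmp -f my_case.mfx
```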

Significant speedup will not come from tweaking the job submission script, but from changing the model: reducing tolerances, increasing the time-step size, reducing the number of particles, etc.

Okay @cgw. In order to run in parallel, is there a need to build a new solver? There are no user-defined subroutines, etc., and I’m not sure why this was suggested to us. If I want to run the setup in parallel, what changes should I make to my job submission script?

mfixsolver_dmp is the DMP solver bundled with the distribution (it lives in $CONDA_PREFIX/bin)

(Earlier versions of MFiX did not come with prebuilt solvers, which is probably why you were told you had to build your own). If you launch jobs from the MFiX GUI this will be selected for you.
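If you want to check what came with your install (assuming a conda-based installation), the bundled solvers should show up with:

```bash
# List the prebuilt solvers shipped with the MFiX environment (conda-based install assumed).
ls "$CONDA_PREFIX"/bin/mfixsolver*
```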

Of course, you can compile your own solvers if you like, if you want to experiment with compiler flags, etc.

You get a DMP-enabled solver by running build_mfixsolver --dmp, which you were already doing. That creates mfixsolver_dmp.

Oops, I now see you are running MFiX 23.2, not MFiX 23.4. The prebuilt DMP solver was added after 23.2. So if you are using that version, you will need to compile the solver yourself (build_mfixsolver --dmp, as you have in the submission script). If you upgrade to 23.4.1 you can skip this step and just use mfixsolver_dmp.

You should upgrade anyway, there have been some bugfixes for SQP since the 23.2 release.

The cluster does not have a GUI option; it’s all done via the command line. What should the command look like to run a setup in parallel? @cgw, thank you,

I’m not quite sure what you mean by “run a setup in parallel”. You are already running in parallel.

You need a DMP-enabled solver - either mfixsolver built with build_mfixsolver --dmp (as you are doing) or the prebuilt mfixsolver_dmp from MFiX 23.4 - and you need to use mpirun to run the job (as you are doing) with NODESI/NODESJ/NODESK set appropriately (as you are doing) - I’m not sure what else you are looking for!
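For concreteness, a launch line might look like the sketch below (the file name my_case.mfx and the 4x2x1 split are placeholders; the total rank count must equal NODESI*NODESJ*NODESK):

```bash
# Hypothetical 8-rank DMP run with a 4x2x1 domain decomposition.
# The keyword overrides can be dropped if nodesi/nodesj/nodesk are already set in the .mfx file.
mpirun -np 8 ./mfixsolver -f my_case.mfx NODESI=4 NODESJ=2 NODESK=1
```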

Hi @cgw, following up on your suggestions about speeding up SQP simulations, please find two images attached to this post.
[Screenshot: MFiX_A]

→ “Solids normalization” is empty. What is the most appropriate value, if there’s room to speed up the simulations?

[Screenshot: MFiX_B]
→ The default linear solvers are shown. What other linear solvers can speed up the simulations? Not shown here, but all discretization schemes are First Order Upwind.

→ Are there any other tolerances in the setup that could be adjusted?

→ How can we write out or calculate the volume of an SQ particle? I have the same particle shape throughout the domain.

→ I’ll upgrade from 23.2. Besides bug fixes, are there any speed-related enhancements in 23.4?

Best,

The solids normalization only applies to TFM; it won’t help with SQP runs. In SQP simulations, the bulk of the time is spent in the SQP loop, not the fluid loop, so adjusting the linear equation solver will have little effect on the speed.

Please try 23.4; there is an option to save particle volume in the vtk files.
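If you want to cross-check the written-out value by hand: assuming the particle is a standard superellipsoid with semi-axes $a, b, c$ and Barr roundness exponents $\varepsilon_1, \varepsilon_2$ (the exponents in the SQP shape input are related to these, so map them onto your own convention), the classical analytic volume is

$$
V = 2\,a\,b\,c\,\varepsilon_1\,\varepsilon_2\,
    B\!\left(\tfrac{\varepsilon_1}{2}+1,\;\varepsilon_1\right)
    B\!\left(\tfrac{\varepsilon_2}{2},\;\tfrac{\varepsilon_2}{2}\right),
\qquad
B(x,y)=\frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)},
$$

which reduces to $\tfrac{4}{3}\pi r^3$ for a sphere ($\varepsilon_1=\varepsilon_2=1$, $a=b=c=r$).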

Thank you, Dr. @jeff.dietiker. I will use the latest version. The shape visualization in the MFiX GUI is good; is there a way I can load these output files into ParaView, showing the same shape, and post-process them further? Best,