See Superquadric_visulization_paraview_plugin.xml in the fluidized_bed_superdem tutorial.
Thank you, @cgw and Dr. @jeff.dietiker. For some reason, I’m unable to set up cyclic boundaries in an SQP simulation. Is this not yet available for this model? I’m using the latest version, 23.4.1.
In addition to Jeff’s comment about SQP particle volume I’d like to add:
Depending on what you are doing with the data, it might be easier to save the volume using “monitors” rather than VTK files.
But if the particles are all the same size and are non-reacting, the volume is a constant and saving it this way is overkill. You can just compute it from the shape parameters - the computation can be seen in the function SQ_VOLUME in model/des/sq_properties.f.
A good reference is:
Jaklič, A., Leonardis, A., & Solina, F. (2000). Superquadrics and Their Geometric Properties, formula 2.60: https://cse.buffalo.edu/~jryde/cse673/files/superquadrics.pdf
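For reference, here is formula 2.60 transcribed into LaTeX (using the same symbols as the Python snippet below: a, b, c are the semi-axes and ε₁ = 2/n, ε₂ = 2/m are the roundness exponents; this is my transcription, so check it against the reference):

$$
V = 2\,a\,b\,c\;\varepsilon_1\varepsilon_2\;
    B\!\left(\tfrac{\varepsilon_1}{2}+1,\ \varepsilon_1\right)
    B\!\left(\tfrac{\varepsilon_2}{2},\ \tfrac{\varepsilon_2}{2}\right),
\qquad
B(x,y) = \frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)}
$$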
A simple Python version (from mfixgui/ics.py):
from math import gamma

def B(a, b):  # Beta function
    return gamma(a) * gamma(b) / gamma(a + b)

# a, b, c are the semi-axes; m, n are the roundness exponents
e1, e2 = 2/n, 2/m
V = 2 * a*b*c * e1*e2 * B(e1/2 + 1, e1) * B(e2/2, e2/2)
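As a quick sanity check (my own example, not taken from ics.py): wrapping the lines above in a function and setting m = n = 2 with equal semi-axes should reproduce the volume of a sphere, 4/3 π r³:

from math import gamma, pi

def B(a, b):  # Beta function
    return gamma(a) * gamma(b) / gamma(a + b)

def sq_volume(a, b, c, m, n):  # illustrative wrapper name, not an MFiX function
    e1, e2 = 2/n, 2/m
    return 2 * a*b*c * e1*e2 * B(e1/2 + 1, e1) * B(e2/2, e2/2)

r = 0.001                        # "sphere" of radius 1 mm: a = b = c = r, m = n = 2
print(sq_volume(r, r, r, 2, 2))  # ~4.18879e-09 m^3
print(4/3 * pi * r**3)           # same value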
The GUI actually computes the volume internally (for estimated inventory/particle count) but the number is not directly displayed - it might be a good idea to add a display of the computed volume to the Superquadric popup, like we do with the bounding diameter
(unfortunately, the bounding diameter cannot be used to determine the volume.)
Can you describe what difficulty you are having with the cyclic boundary? Using MFiX 23.4.1 I was able to add cyclic boundaries to your aSph_A_32.mfx in all 3 directions.
Running the case, I got some runtime errors about the grid partition being too fine / too coarse, but after adjusting desgrid_imax/jmax/kmax the case ran for me.
Hi @cgw, I’d give the periodic issue low priority – I see CELLSIZE or DES errors when I make one direction cyclic, whereas the same case runs without any errors when it is all walls – I’ll try this again later this week.
Over the weekend, I set up test cases with fewer particles on the HPC cluster: I set everything up and check that it is okay on my local machine, then transfer only the mfx file to an empty folder on the HPC system. After “module load openmpi5”, I issue
mpirun --use-hwthread-cpus -mca mpi_warn_on_fork 0 -mca mca_base_component_show_load_errors 0 -np 16 /insomnia001/depts/cboyce/users/js5521/MFiX2341/installLocation/envs/mfix-23.4.1/bin/mfixsolver_dmp -s -f ./VIB_780_10X_5Y.mfx
Question: I wanted to run this by you – can any of these flags be improved with respect to speed?
Question: I’m including a usr file and have to build a custom solver. If I share my HPC details, would you help me choose between build_mfixsolver --batch --smp and build_mfixsolver --batch --dmp? I reached out to HPC support but could not get useful information. The modified command I used is
mpirun --use-hwthread-cpus -mca mpi_warn_on_fork 0 -mca mca_base_component_show_load_errors 0 -np 16 ./mfixsolver -f ./VIB_780_10X_5Y.mfx
Best,
Hi @jagan1mohan
I want to point out that running the batch solver keeps you from taking advantage of certain features of MFiX: monitoring from the GUI, pausing/resuming simulations, adjusting parameters during a run. I’m not sure why people prefer the batch solver …
Also note that the Conda packages include OpenMPI 5.0, so if you use the Conda MFiX installation you shouldn’t need module load openmpi5.
You should use SMP if you have one machine with multiple cores and you want to use them all. You should use DMP for running across multiple machines on a network. DMP can also be used on a single multi-core node but the communications overhead is higher than with SMP, which is able to exploit shared memory. On the network there is no such thing as shared memory so you need DMP/mpirun. In theory DMP and SMP can both be enabled (you have to compile yourself, we do not ship a DMP+SMP enabled solver) but this is a slightly unusual configuration and is not well-tested.
Again, there are no magic bullets with compiler flags, etc. The solver we ship is built with a high level of optimization and targets modern processors. You could play around with -O and -march= flags, but you are not likely to get more than a few percent improvement. The low-hanging fruit has been plucked & eaten already.
– Charles
Hi @cgw and Dr. @jeff.dietiker, thanks for the responses above. I could run a test case with an “okay” particle count for 24 hours using SQP and it progressed 0.25 sec. This is all walls, mere fluidization. Particles are initialized in the entire domain and, in the first few time steps, they descend under gravity before reaching the bottom boundary.
To this setup, I added usr1.f with just one line: GRAVITY_Y = -9.81 + (4.5/1000)*4*Pi*Pi*5*5*sin(2*Pi*5*time), and rebuilt the solver using build_mfixsolver --batch --dmp after checking call_usr. This is the only change.
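(For reference, that expression is gravity plus a frame-vibration acceleration A(2πf)²·sin(2πf t) with A = 4.5 mm and f = 5 Hz. A quick Python check of its peak value – my own sanity check, not MFiX code:)

from math import pi

A = 4.5 / 1000                 # vibration amplitude in m (4.5 mm)
f = 5.0                        # vibration frequency in Hz
peak = A * (2 * pi * f) ** 2   # peak of the oscillating acceleration term
print(peak, peak / 9.81)       # ~4.44 m/s^2, i.e. roughly 0.45 g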
The same case now progressed only 0.02 sec in 24 hours.
Could you please help me understand what’s wrong here?
All simulations are going to have this oscillating gravity.
Is there a way to include this without user-defined files – that is, to specify it as an input in the GUI?
Is it possible to edit a copy of the source code so that gravity always has this oscillation, say 5 Hz frequency and 4.5 mm amplitude, as the default? I can manage keeping multiple copies of the solver, one for each frequency and amplitude, in different folders.
@cgw: I’m okay with trying the GUI method of running cases – the HPC team enabled this. How do I pull up the SLURM scheduler options in the GUI after building the solver (DMP or SMP), such as the time request?
Best,
Hi @jagan1mohan
- The same case now progressed only 0.02 sec in 24 hours. Could you please help me understand what’s wrong here?
It seems rather odd that adding this oscillating gravity term would cause the simulation to slow down by a full order of magnitude. Can you upload a bug report ZIP with all files so I can take a look?
- Is there a way to include this without user-defined-files
No. All GUI inputs are constants. (MFiX 24.2 (to be released) will have some support for linear ramps on mass inlets (ramping up flow rates, etc) but this is a very limited facility and will not apply to gravity, nor will it support sinusoids.)
- Is it possible to edit a copy of the source code so that gravity always has this oscillation, say 5 Hz and 4.5 mm amplitude, as the default?
Yes, this should be possible.
- How to pull-up slurm scheduler options in GUI after building solver
See share/mfix/templates/queue_templates/slurm
What do you mean by an “okay” particle count? SQP is much slower than DEM, so what seems like a reasonable particle count for DEM may be much more costly for SQP. If you start with SQP particles far from each other, there won’t be any collisions initially and the code will seem to run fairly fast. As the bed settles and collisions start to occur, the code will slow down.
Hi @cgw, Dr. @jeff.dietiker, following up on the periodic boundary question: I tried to set up periodic/cyclic boundaries, but the GUI generated the attached bug report, for your perusal.
per_2024-03-07T032503.124034.zip (47.0 KB)
(A) Could you please take a look at this?
(B) In the setup, if dt_max = 1e-1, dt_min = 1e-6, and the VTK write-out time interval is 0.001, will the time step be capped at 0.001 to honor this write-out request, even though the solver is stable at higher time steps?
(C) How can we detect particle loss from the domain due to setup inaccuracies, say, too high a time step?
(D) Currently, the “Search grid partitions” are 10, 50, 10 and the field says “optional”. What would happen if I do not enter values for all three? From the tool-tip, I think these are automatically adjusted to three times the particle diameter? At the moment, if I give a cell width less than 5 times the particle diameter, the GUI shows an error.
(E) The default Tcoll/DT_Solid is 50. My simulations are highly dense, with very low velocities and slow-moving material. Can reducing this to, say, 25 increase the overall time step of the simulation?
Thank you,
Hi @jagan1mohan
These are all good questions and we’ll try to answer them. However, asking five questions in one posting is a lot, especially when they are added to a long thread that started off on a different topic. In the future, please try to make new threads for new questions. I’m going to answer these separately here.
A) The GUI traceback:
Error: local variable 'row' referenced before assignment
  File ".../model_setup.py", line 223, in set_solver
    self.setup_solids_dem_tab()
  File ".../solids_dem.py", line 912, in setup_solids_dem_tab
    row += 1
This is a small bug in the GUI setup code that is fixed in the upcoming 24.1 release; however, you can fix it now if it’s causing you trouble:
Go to the directory $CONDA_PREFIX/lib/python3.10/site-packages/mfixgui
Edit the file solids_dem.py (after backing it up) and make the following change around line 854:
diff --git a/mfixgui/solids_dem.py b/mfixgui/solids_dem.py
index 2d9d51ffb..28e8b703d 100644
--- a/mfixgui/solids_dem.py
+++ b/mfixgui/solids_dem.py
@@ -854,6 +854,7 @@ class SolidsDEM(object):
ui.combobox_des_conv_corr.setCurrentIndex(DES_CONV_CORR_VALUES.index(val))
solids_names = list(self.solids.keys())
+ row = 0
if self.solids_dem_saved_solids_names != solids_names:
for (key, gb) in (('e_young', ui.groupbox_young),
@@ -878,7 +879,7 @@ class SolidsDEM(object):
# ...and make new ones
for (p, name) in enumerate(self.solids.keys(), 1):
- row = p
+ row = p-1
label = QLabel(name)
#label.setObjectName('label_%s_args_%s' % (key,p))
#setattr(ui, label.objectName(), label)
(you can use the ‘patch’ program to apply the above diff but it’s probably easier to do it by hand, inserting one line and changing another one)
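(For anyone curious why the one-line initialization fixes the crash: it is the standard Python pattern of a variable that is only assigned inside a conditional branch. A minimal stand-alone reproduction of the pattern – not the actual GUI code:)

def setup_tab(names, saved_names):
    # row = 0   # <-- the fix: give 'row' a value before the conditional
    if names != saved_names:              # skipped when the solids list is unchanged
        for row, name in enumerate(names):
            pass                          # (widgets get built here in the real code)
    row += 1                              # fails if the branch above was skipped
    return row

try:
    setup_tab(["solid1"], ["solid1"])
except UnboundLocalError as e:
    print(e)   # e.g. "local variable 'row' referenced before assignment" on Python 3.10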
B) Time-step vs VTK_DT: Will the time step be capped at 0.001, to honor this write-out request, even though the solver becomes stable at higher time steps?
It is easily seen that this is not the case – the time step is not constrained by the VTK write-out interval.
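(A toy sketch – not the MFiX solver logic – of how an adaptive time step can coexist with a fixed write-out interval: output is simply written whenever the simulation time passes the next multiple of VTK_DT, and dt is never clamped to that interval.)

dt, dt_max = 1e-4, 1e-1        # adaptive time step, starting small
vtk_dt = 0.001                 # write-out interval (VTK_DT)
t, t_end, next_write = 0.0, 0.05, 0.0

while t < t_end:
    t += dt
    if t >= next_write:                        # passed the next write-out time
        print(f"write VTK at t = {t:.5f}  (dt = {dt:.1e})")
        next_write += vtk_dt
    dt = min(dt * 1.2, dt_max)                 # keeps growing while the run is "stable"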
Hi @cgw, I will try to fix this bug and get back to you with any questions. I’ll keep this ticket for the periodic setup issue and open new ones for my other queries. Best,
Hi @cgw, this ticket is regarding periodic boundary conditions in the SQP model.
I could not find mfixgui at the location you mentioned – please see the attached image above. I found solids_dem.py in another location: /miniForge/installLocation/pkgs/mfix-gui-23.4.1-py_0/site-packages/mfixgui
It is confusing to implement your code edit. I have made a copy of the original file – could you send me the corrected solids_dem.py to replace the attached file? I’d swap it in, in the interest of time.
solids_dem.py (59.9 KB)
Also, what is the date for 24.1 release?
Best,
Hi Team, @cgw, good morning. Did you find a chance to look into this request? Eagerly waiting for your reply. Thank you,
You do not have to address support requests to me personally. I read every posting on the forum. And we encourage users to help each other and answer each other’s questions.
Also, please wait more than 24 hours before following up. We have a lot of users with questions and only limited resources for support.
MFiX 24.1 should be released before the end of the month, I expect it to be March 28.
In the interest of my time, I cannot edit source code for you. I showed you the changes that need to be made. If you cannot handle this, wait for the next release.
Note that editing the files in miniForge/installLocation/pkgs will have no effect. You have to edit the file in envs/mfix-XX.Y as I specified.
Please do this:
$ conda activate mfix-23.4.1
(mfix-23.4.1)$ cd $CONDA_PREFIX
(mfix-23.4.1)$ find . -name solids_dem.py
That should print the file you want to edit. You just have to add one line (row = 0) at line 857, and change one other line (row = p-1) at line 881.
Hi, I edited lines 857 and 881 – the GUI did not give errors while setting periodic in X. I’ll now test particles moving across the periodic boundaries. Best,