TFM wall velocity BC

Hello All, I started with the 3D TFM tutorial. I've imposed no-slip wall velocities on the left, right, front, and back walls such that a circular motion should develop inside the domain. There is no change in either the solid or the gas velocity profiles, and the vector plots do not show the imposed velocities either. I want to attach some images; how can I do this? Is this a bug? How can I get this working in the TFM framework?

I’m hereby attaching velocity plots for solid and gas phases for four different boundary conditions I tried.

(a) NoSlip with 1 m/s.
(b) NoSlip with 5 m/s.
(c) PartialSlip with 5 m/s, transfer coefficient 0.
(d) PartialSlip with 5 m/s, transfer coefficient 1.

A.png and B.png show an example of the imposed boundary condition.

In all of these, I do not see any circular motion developing inside the region.

Could you please help me here?

Hello Team and Dr. @jeff.dietiker, could you please take a look at the attached mfx file when you get a chance? More details are posted in this thread. Thank you.
tfm3d.mfx (11.5 KB)

That looks like a bug, or at least not a user-friendly way to set a wall velocity. Please try the following for the four side walls:

  1. Set the Boundary type to “Mixed wall”.
  2. Set the wall type to “No-slip”.
  3. Set a non-zero velocity component tangent to the wall.
  4. Save the .mfx file.
  5. Look at the .mfx file (from the Editor) and locate the settings for these BCs. They should be set as bc_type()='PSW' and there should not be any bc_hw_g() defined.

and let us know if this helps.
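
For reference, after following these steps the entries for one of the side walls should look roughly like the sketch below in the .mfx file (the BC index, velocity component and value are only illustrative; use whatever your case produces):

    bc_type(1) = 'PSW'
    # tangential wall velocity for the gas phase and solids phase 1
    bc_uw_g(1) = 1.0
    bc_uw_s(1,1) = 1.0
    # note: no bc_hw_g(1) / bc_hw_s(1,1) entries, per step 5 above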

Hello Dr. @jeff.dietiker, thank you for your reply. I've set up a new case, doubled the cell width, and here are my observations.

A. As suggested, when I use Mixed wall → No-slip → tangential velocity, the BC works and the direction is also correct. When I view the case setup in the editor, bc_type(1,2,3,4)='PSW' and there is no mention of bc_hw_g. Please see the images on the attached slide for your reference.

B. Would this BC setup work on an STL boundary (say, a stepped cylinder)? If yes, how can I ramp this velocity up linearly with time?

C. I've set 5 m/s and a stop time of 20 s, aiming for a steady state. All time steps up to 4.8726 s took ~15 inner iterations and reached residuals of about 1E-04. After that point, each time step hit the maximum of 500 iterations and the residuals never went below 1E-03. How can I avoid this?

Less important:
D. Whenever I include more than one VTK output, only the last VTK file is written out. How can I avoid this?

Thank you.

Hello Team, Dr. @jeff.dietiker, greetings.

1. After my last post, I've set up a TFM case with all STL boundaries. These STLs form a stepped cylinder. I'm applying Mixed wall → No-slip → velocity [U, W] components such that the velocity vector is tangential. The boundary condition type shown in the editor is 'CG_PSW'.
→ Could you please confirm whether this is correct?
→ The gas/solid velocity vectors more or less show the expected circulating flow near the boundary, but not for all boundary cells, as shown in the attached image. Is this a bug?

2. The particle size is 150 microns and the gas velocity is around Umf. I've tried cell sizes of 10 and 20 particle diameters but could not get the run past 0.2 s. The case diverges/stalls due to negative void fraction/density, sometimes right in the first few steps.
→ What is the best ratio between particle diameter and cell width for MFiX TFM simulations?
→ MFiX generates cut cells from the STL surface; is there a criterion for the number of triangles?
→ How can I make this setup stable so it runs to steady state?

Thank you.

  1. The wall velocities are global to the entire wall. Did you split your STL into many elements along the circumference? The velocity vectors are tricky to visualize because they are staggered (each velocity component is at a different location in a cell, the face centers). Some cut cells don't have all 3 components and will look weird.
  2. The rule of thumb is to aim for about 10 particle diameters for the cell size. The only criterion for the number of cells (standard and cut cells) is that it has to be affordable, i.e., run in a reasonable time. What you want, though, is to avoid bad cut cells. This may be the cause of your instabilities. I usually play with the cut cell tolerances and look at the mesh statistics to avoid too many small or high-aspect-ratio cells.

Hello Dr., yes, I've split the cylinder into 12 equal surfaces as shown in the figure. Each surface has its own tangential velocity.

1. If you think 12 is too low, should I increase it to 24 or 36?
2. There is no time limitation and I have up to 24 CPUs. Can I go as low as 1 or 3 particle diameters for the cell width? I'm looking to capture a toroidal flow around this cylinder.
3. The “Mesher” settings are as shown in the figure. Which parameters can be altered to improve the cut cells for this geometry?
4. I observe that when I set up more than one VTK file write-out, only the last entry is written out as the run progresses. Could this be a bug?
5. I've used STL files, and when the mfx file is opened in the GUI on the cluster, the STL information is masked out as shown below. It reads the facets but does not produce any cut cells. How can I address this?


Thank you.

1. This is fine for now.
2. There must be some amount of time you are willing to invest in this project. My guess is that if you have more than 500,000 cells, it will take a long time to run on 24 cores.
3. I usually increase the snap tolerance, the small cell and small area tolerances, and the normal distance tolerance (roughly the keywords sketched after this list).
4. A vtk region must be a volume (in 3D), otherwise there won't be any cells in it and the file won't be written.
5. Not sure. Is the STL file oriented correctly?
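
In case it helps, the keywords behind those mesher tolerances look roughly like this in the .mfx file (the names are from memory and the values purely illustrative, so please confirm them against the Mesher panel / keyword browser):

    # snap tolerance (per direction)
    tol_snap(1) = 0.1
    # small cell and small face area tolerances
    tol_small_cell = 0.05
    tol_small_area = 0.05
    # normal distance tolerance
    tol_delh = 0.0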

Hello Dr., greetings. Thank you for your reply. Could you please let me know your thoughts on these questions?

1. I could run a few more time steps than last time by making the following change: the normalization for fluid and solids was blank, and I changed it to 1.0 [after a forum post]. Does this have any effect on the solution?
Also,
A. Does checking/unchecking the stall detection and max wall time options have any effect on the solution?

2. The msh (and also VTU) files are written out in binary format. Can they be written in a plain-text (Notepad-readable) format?

3. Currently I am using the default drag (Syamlal-O'Brien), Model A, turbulence off, the algebraic granular energy formulation, and Schaeffer friction. To capture the toroidal flow around this cylinder, could you advise me on more appropriate choices? Also, I think the results would improve with a higher-order discretization.

Thank you.

  1. This changes the way residuals are normalized. I usually set this to zero, as it helps reach the convergence criteria faster. It should have minimal effect on the solution as long as the convergence tolerance is low (say tol_resid=1E-4); see the keyword sketch after this list.
    A. Checking “Detect stall” doesn't affect the solution, but it may help in case of non-convergence by decreasing the time step before going through the maximum number of iterations per time step. Checking “Enable max wall time” doesn't affect the solution either; it provides a mechanism to do a clean exit after the provided wall time. This is meant for people running MFiX on a queue system, where we don't want the simulation terminated abruptly when the allocated queue time runs out.
  2. No, these can only be written in binary format. Paraview may be able to export them in ASCII format.
  3. These are good settings for now. It is typical for CFD practitioners to explore various settings and see how they affect the solution, so you might do this if you have time and interest.
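
For reference, the keywords behind item 1 would look roughly like this in the .mfx file (a sketch only, with the values discussed above):

    # residual normalization for the gas and solids phases
    norm_g = 0.0
    norm_s = 0.0
    # convergence tolerance on the residuals
    tol_resid = 1.0E-4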

Hello Dr. @jeff.dietiker, greetings. Thank you for your feedback. I could run a case with a cell width of 20d up to 20 s.

1. I've tried 5, 10, and 20 particle diameters, and my observation is that the bigger the cell size, the fewer the bad cells. I changed the mesh settings as suggested in the previous post, but it did not make any difference [I'm not sure why]. Still, the mesh must be refined for better results. In this setup a circle is being cut out of a rectangular grid; could someone from your team take a look at the case?

2. I submitted this 20d case for 12 hours on a computing node. The case reached 20 s in only 5 hours, but the job did not release the node for the full 12 hours. How can I avoid this?

3. I've tried again to write out VTK data on the inlet patch, but to no avail. I tried (a) using the already-defined inlet patch and (b) creating a new 3D slice at the inlet patch in the VTK pane. I want to confirm the applied boundary condition and measure the pressure and gas velocity there. Please find the attached image for your reference.

4. The solid volume fraction for initialization is 0.55, whereas the packed-bed void fraction under TFM is 0.42. A tapped bed, as in the experiment, is to be fluidized in the simulation. Could you please let me know appropriate values for these?

5. Under Advanced, the attached variables are being created automatically. Please see the attached image for your reference. Does this affect the solution?


Thank you.

  1. Feel free to share your setup. We will take a look when we get a chance.
  2. Not sure, check with your HPC support. If MFiX exits cleanly, you should get out of the queue.
  3. As I mentioned earlier, a 2D vtk region in a 3D domain won't produce a file. You can set vtk_nys(VTK_region) = 3 and this will create 3 layers (slices): one at y=ymin, one at y=ymax, and one in the middle. These are 1-cell-thick layers (see the keyword sketch after this list).
  4. This value should be provided by the experimentalist who conducted the experiment.
  5. No, this is probably a side effect of using partial slip but it won’t make any difference.
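
To make item 3 concrete, a thin volumetric VTK region spanning the inlet could be written out with keywords roughly like the following (a sketch only: the region index, extents, file base name and variable flags are illustrative, and the GUI normally writes these for you when you attach a VTK output to a region):

    # region 2: a thin slab covering the inlet
    vtk_x_w(2) = 0.0
    vtk_x_e(2) = 0.1
    vtk_y_s(2) = 0.0
    vtk_y_n(2) = 0.01
    vtk_z_b(2) = 0.0
    vtk_z_t(2) = 0.1
    # three y-slices within the region: ymin, middle, ymax
    vtk_nys(2) = 3
    vtk_filebase(2) = 'INLET'
    vtk_dt(2) = 0.01
    # variables to write
    vtk_p_g(2) = .TRUE.
    vtk_vel_g(2) = .TRUE.
    vtk_ep_g(2) = .TRUE.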

Hello Dr. @jeff.dietiker, greetings. Just a reminder: I'm using TFM to capture the toroidal flow in an annular gap when a bed of 150-micron glass particles is fluidized. The rotating inner cylinder is made up of 12 STL surfaces.

A few observations over the weekend:
A. I tried 20D, 10D, and 5D cases with [Gidaspow, Algebraic, Schaeffer, FOU, norm(s)=0.0]. Let us call this the base case.
1. The 5D case couldn't even progress to 0.05 s in 12 hours on 24 CPUs (2 x 6 x 2).
2. The 10D and 20D cases progressed okay, but a few time steps report “Run diverged/stalled” or “negative void fraction/density” and the simulation does not progress in time. How can I avoid these?

B. To this 20D base case [changing one thing at a time]:
a. Changed Algebraic to Lun; progress was very slow. I think the likely reason is the additional granular temperature equation that has to be solved.
b. Changed FOU to SuperBee; progress was very slow, or the case diverged.
c. Changed FOU to SMART; progress was very slow, or the case diverged.

For the time being, I'm going back to the 20D base case and targeting the toroidal flow.
I. The run log writes “The preconditioner for the linear solver MIGHT NOT be very efficient with DMP partitions in the y-axis”. How can I improve this for better parallel performance, if it is affecting speed?
II. The run log writes
maximum 21866 1
minimum 21866 1
average 21866 -N/A-
MAXIMUM speedup (Amdahls Law) = +Inf

Also
maximum 4335 1
minimum 3840 14
average 4083 -N/A-
MAXIMUM speedup (Amdahls Law) = 8.3333333333333329E-002

How should I interpret and make use of this information, if there is room for improving the parallel performance?

III. 24 CPUs as (2 x 6 x 2) are used for 41,250 Cartesian cells (330,000 for 10D). Is it a good idea to perform a CPU scaling study to understand computation versus communication time?

IV. What are BLOCKED CELLS? About 51.20% of my cells are flagged as such.

V. Each STL has a velocity, for example bc_uw_g(1) = (2*3.1416*25/60000)*cos(3.1416*0/180)*(400), and so on for bc_ww_g(1), bc_uw_s(1,1), bc_ww_s(1,1). I'm trying to ramp the velocity as bc_uw_g(1) = (2*3.1416*25/60000)*cos(3.1416*0/180)*(400+4.0*time), but building the custom solver gives the error: Error: Function 'bc_uw_g' at (1) has no IMPLICIT type

VI. I'm using mpirun --oversubscribe -mca mpi_warn_on_fork 0 -np 24 /LOCATION/mfixsolver -s -f /LOCATION/V_38_25.mfx. How can I resume the simulation from the last saved point from the terminal?

Apologies for the wall of text, but please help me here. Thank you.

A1. This is to be expected. A finer mesh means more cells and usually a smaller time step, so it will run slower.
A2. This is fine; you will encounter some instabilities. You should see the time step going down until the solution recovers, and then the time step will gradually go back up.
B.a. Correct.
B.b and c. This is to be expected. Higher order schemes take longer to run.
I and II. You can ignore this, the messages are not useful.
III. Yes.
IV. The overall MFiX mesh is structured and covers a rectangular domain. Since you are using a non-rectangular geometry with cut cells, there are fluid cells (where the governing equations are solved) and cells that do not take part in the computation (cells outside the fluid region). The latter are called blocked cells.
V. If you want to access a variable in a UDF, you need to use the corresponding module. Here it is the “bc” module. Near the top of the subroutine, add

use bc

Take a look at the source code files to see how the modules are used.
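
As an illustration, a minimal usr1.f sketch for ramping the wall velocity could look like the following. This is not your actual file: the radius, base speed, ramp rate and angle are illustrative numbers; it assumes bc_uw_g/bc_ww_g/bc_uw_s/bc_ww_s come from the bc module and time from the run module, and that call_usr = .TRUE. is set in the .mfx so the routine is called every time step:

!  Minimal sketch: ramp the tangential wall velocity of BC #1
!  linearly in time. All numerical values are illustrative.
      SUBROUTINE USR1

      USE bc,  ONLY: bc_uw_g, bc_ww_g, bc_uw_s, bc_ww_s
      USE run, ONLY: time
      IMPLICIT NONE

      DOUBLE PRECISION :: rpm, omega, vt
      DOUBLE PRECISION, PARAMETER :: pi = 3.1416d0
!  illustrative radius of the rotating surface, in metres
      DOUBLE PRECISION, PARAMETER :: radius = 0.025d0

      rpm   = 400.0d0 + 4.0d0*time     ! linear ramp in rpm
      omega = 2.0d0*pi*rpm/60.0d0      ! angular speed, rad/s
      vt    = omega*radius             ! tangential speed, m/s

!  Project onto x and z for the 0-degree face; repeat with the
!  appropriate angle for the other 11 STL surfaces / BC indices.
      bc_uw_g(1)   =  vt*COS(0.0d0)
      bc_ww_g(1)   = -vt*SIN(0.0d0)
      bc_uw_s(1,1) = bc_uw_g(1)
      bc_ww_s(1,1) = bc_ww_g(1)

      RETURN
      END SUBROUTINE USR1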

VI. You need to set the run_type to 'restart_1':

run_type = 'restart_1'
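
For example (assuming the .RES and .SPx files from the earlier run are still in the run directory), change the run type in V_38_25.mfx and then relaunch with the same command as before:

    run_type = 'restart_1'

    mpirun --oversubscribe -mca mpi_warn_on_fork 0 -np 24 /LOCATION/mfixsolver -s -f /LOCATION/V_38_25.mfx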

Hello Dr. @jeff.dietiker, greetings.

1. I'm attaching the basic setup [cell width = 20D, Gidaspow, Algebraic, Schaeffer, FOU]. It includes a usr1.f that is intended to change the velocities on the 12 STL surfaces with time. Could you please take a look?
A. There is no error while building the solver, but how can I verify that the UDF is actually working? [I think it is not.]

2. Could you try it at 10D, for a few time steps?
A. For 10D, I observed that the case progressed to 0.1086 s in 127 CPU seconds and then wrote out core.434542 (1.57 GB). After this, the job remained in the queue for the next 12 hours and did not progress a single time step, nor was it killed with an error message. How can I avoid this? The same case runs fine on my local computer.

3. Could you also please take a look at the overall setup? Model setup, solids, IC, BC, discretization, linear solver, preconditioner. I'm trying to capture the monodisperse toroidal flow described in “Taylor vortex analogy in granular flows”.

V_38_25.zip (1.5 MB)

Thank you.

Please double-check your wall regions for the inner walls. The region names don't seem to match the STL file names.

Hello Dr., the naming is mismatched but the orientation is correct, I believe. This is because the STLs were made in SolidWorks, then re-meshed in Paraview, and then read into MFiX. Somewhere in this process there is a change in coordinate system (only in the theta direction); that is, in MFiX all the STLs are off by 240 degrees. So the boundary named 000Deg is linked to the STL that is normal to Z, and so on. I have double-checked. Did you get a chance to look into the other aspects? Thank you.

Hello Team, Dr. @jeff.dietiker,

1. I'm unable to run the basic setup attached to the reply above. The case diverges within a few time steps. Could you please take a look? I'm looking to compile the attached usr1.f once the case becomes stable.

usr1.f (6.2 KB)

2. Most of the time, the simulation proceeds a few time steps (say, to 0.1086 s in 127 seconds) and then saves “core.*” files of ~2 GB. The job remains in the run state for the next 12 hours (the requested node time) without giving any error. The attached image is for your reference; the run log is not updated after core.* is written out. When I run the same job on my local computer, everything is okay. How can I avoid this?

3. For a moment, let us assume the case is diverging due to bad cut cells. I created the parts of the circle as staircases such that the steps exactly match the mesh lines in MFiX. When the mesh is generated, I still get cut cells with high aspect ratios. Why is this happening?

Thank you.