Hello,
I have run DEM simulations of a 3D hopper, but the values in the output .vtp files are in single precision (float32). How can I get results in double precision (float64)?
Internal calculations in MFiX are all done in 64-bit mode to reduce accumulated roundoff errors, but in the output files it's unlikely that more than 32 bits are really significant. What is your use case for 64-bit VTP files?
I want to test the capability of a machine learning algorithm (https://doi.org/10.1016/j.ces.2021.116832) to train on 64-bit data.
Ok. It is unclear to me why 64-bit data would help for training. The digits that far to the right of the decimal point are essentially "noise"; you should not assume that the number of significant digits is the same as the number of displayed digits. Calculations are carried out internally in 64-bit mode to reduce accumulated roundoff/underflow, but that doesn't mean all 64 bits are meaningful in the output.
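As a rough illustration (just a quick numpy snippet, nothing MFiX-specific), numpy can report approximately how many decimal digits each floating-point type reliably carries:
>>> import numpy as np
>>> np.finfo(np.float32).precision   # approximate reliable decimal digits, single precision
6
>>> np.finfo(np.float64).precision   # approximate reliable decimal digits, double precision
15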
Anyhow, if you really need 64-bit VTP files, you will have to change all of the code in MFiX that writes them. For starters, search for the string Float32:
- you will have to change this to Float64
- you will also have to modify all of the code that writes the binary data
I'm afraid we will only be able to provide limited help with this, since 64-bit output is not really a priority. This reference may be of help: https://kitware.github.io/vtk-examples/site/VTKFileFormats/
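If it helps, here is a rough post-processing sketch (assuming the vtk Python package is installed; the file name is only a placeholder) that lists each point-data array in a .vtp file together with its storage type, so you can confirm whether a given file contains Float32 or Float64 data:
import vtk

reader = vtk.vtkXMLPolyDataReader()
reader.SetFileName("PARTICLES_0001.vtp")   # placeholder file name
reader.Update()

pd = reader.GetOutput().GetPointData()
for i in range(pd.GetNumberOfArrays()):
    # GetDataTypeAsString() reports e.g. "float" (32-bit) or "double" (64-bit)
    print(pd.GetArrayName(i), pd.GetArray(i).GetDataTypeAsString())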
As Charles said, I highly doubt double precision will add value. Further, most ML models are single precision anyway.
A lot of ML models even use 16-bit low-precision floats! (See the Wikipedia article on the half-precision floating-point format.)
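For example (a quick numpy illustration of how coarse half precision is):
>>> import numpy as np
>>> float(np.float16(np.pi))   # half precision keeps only about 3-4 significant decimal digits
3.140625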
How does the conversion of 64-bit data to 32-bit happen? Is it just by rounding to 7 digits after the decimal point (e.g. 0.1234567891011121 becomes 0.1234568)?
The algorithm trains well with MFiX data. I found another solver, LIGGGHTS, that produced 64-bit data which I used for training; however, the training outcome was not good. Then I tried rounding the 64-bit data and retraining the model, but the results were still unreliable.
It’s similar to what you describe, but it happens in binary rather than in base 10. Low-order bits are dropped, which turns into fewer digits when the value is represented in base 10. Here’s an example, using NumPy (numpy) to illustrate:
mfix:) python
Python 3.10.5 | packaged by conda-forge | (main, Jun 14 2022, 07:04:59) [GCC 10.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.pi
3.141592653589793
>>> np.float64(np.pi)
3.141592653589793
>>> np.float32(np.pi)
3.1415927
>>> np.float64(np.pi*10e10)
314159265358.9793
>>> np.float32(np.pi*10e10)
314159270000.0
>>>
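If you want to see it at the bit level, here is a small illustrative snippet (standard-library struct plus numpy, nothing MFiX-specific) that prints the raw IEEE 754 bit patterns; converting to 32 bits simply drops (with rounding) the trailing mantissa bits:
import struct
import numpy as np

x = np.pi
bits64 = struct.unpack('>Q', struct.pack('>d', x))[0]   # 64-bit pattern: 1 sign + 11 exponent + 52 mantissa bits
bits32 = struct.unpack('>I', struct.pack('>f', x))[0]   # 32-bit pattern: 1 sign +  8 exponent + 23 mantissa bits
print(f'{bits64:064b}')
print(f'{bits32:032b}')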
I'm a bit confused about this example:
>>> import numpy as np
>>> value = 3.2913442126190265e-15
>>> np.float64(value)
3.2913442126190265e-15
>>> np.float32(value)
3.2913443e-15
>>> print(f'{3.2913443e-15:.22f}')
0.0000000000000032913443
This value is practically zero. If value = -1.96398497e-01, then np.float32(value) = -0.1963985. The contrast between the values is huge; I believe the ML model is not training well because of that.
For the first value:
>>> x = 3.2913442126190265e-15
>>> np.float64(x) - np.float32(x)
-8.613074137332671e-23
>>> _ / x
-2.616886469762145e-08
The absolute difference between the 64-bit and 32-bit values is on the order of 10^-23, and the relative error is on the order of 10^-8.
Now for the second value:
>>> x = -1.96398497e-01
>>> print(x)
-0.196398497
>>> np.float64(x)
-0.196398497
>>> np.float32(x)
-0.1963985
>>> np.float64(x) - np.float32(x)
-3.7219238802066457e-10
>>> _ / x
1.8950877613929224e-09
The absolute difference is on the order of 10^-10, so I don't understand why you say the contrast is huge. The relative error is on the order of 10^-9. There is no real physical significance to differences on that scale; this is not LIGO!
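For context, both of these relative errors sit below the float32 machine epsilon (the spacing between 1.0 and the next representable value), which bounds the relative rounding error of converting any single value; numpy reports it directly:
>>> np.finfo(np.float32).eps
1.1920929e-07
>>> np.finfo(np.float64).eps
2.220446049250313e-16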