I have installed the most recent MFiX version on a Fedora VM, and for the first time I am having problems building the solver in the GUI. It’s been a while since I last did this. Previously I loaded the system MPI module with “module load mpi” prior to launching MFiX. Now, however, the solver seems to want to use only packages within the miniconda environment; I believe that is the source of the error, because the build works when I don’t load the system MPI. It also feels like the MPI solver builds much more slowly than it did before, and the same goes for running the solver. Is this really so? Could it be related to using Miniforge rather than the previous Anaconda installation? I cannot recall any other big changes, other than a higher Fedora version number.
I’d be thankful for any advice on this.
Hi Kjetil
I’m not 100% clear on what you are doing, but if you want to use a different MPI than the one bundled with the Conda packages, I think the best thing to do is to remove the Conda MPI packages from the MFiX installation:
$ conda activate mfix-24.4.1 # your version here
$ conda remove --force mpi openmpi
Make sure you use --force; otherwise the whole MFiX installation will be removed (when removing a package, conda also removes everything that depends on it, unless --force is specified).
You can always put the packages back with conda install.
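After removing the packages, it can be reassuring to confirm that nothing MPI-related is left in the environment. A minimal sketch, wrapping the check in a small filter so it can be scripted; the package names matched here (mpi, openmpi, mpich) are assumptions, so check conda list for the exact names in your environment:

```shell
# Reads `conda list`-style output on stdin and prints any lines for
# MPI-related packages (assumed names: mpi, openmpi, mpich).
mpi_left() {
  grep -iE '^(mpi|openmpi|mpich)[[:space:]]' || true
}

# Typical use inside the activated environment:
#   conda list | mpi_left
# No output means the conda MPI packages are gone.
```

The `|| true` keeps the function's exit status zero when nothing matches, so it is safe to use in scripts run with `set -e`.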
I don’t have enough information to speak to the compilation and runtime speed. It’s also not clear why you are doing this in a VM, that may be part of the performance issue.
Please let us know if this is helpful or if you have further questions,
– Charles
Thanks! Well, historically this was to have the combination of 1) MPI/parallel runs, and 2) running in a Windows environment (at work). Is it still the case that DMP is not available under Windows?
Correct, DMP is not available under Windows. I have not tried DMP in a Linux VM on Windows but honestly I would not expect great results with that setup. It would be better if you could run on a Linux host.
Can you describe your setup? Is this VM part of a cluster or are you using DMP just on a single host?
You are doing module load mpi in this VM environment? Did you set up environment modules on your VM? This is typically done on HPC clusters, not single hosts. The Conda packages include all you need, so this should not be necessary. Some clusters have a specially optimized MPI which is tuned to the cluster fabric and require use of that MPI rather than the OpenMPI included with MFiX, but I don’t think this is relevant in your case. Can’t you just use the bundled MPI and the pre-built DMP solver, which is linked against it?
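Which MPI actually gets used usually comes down to which mpirun the PATH resolves first after conda activate and/or module load mpi. A minimal diagnostic sketch, assuming conventional install locations (the paths below are illustrative, not from the MFiX docs):

```shell
# Classify which MPI a build would pick up, based on where the mpirun
# found on PATH lives relative to the active conda environment.
classify_mpi() {
  prefix="$1"        # the active conda env, e.g. "$CONDA_PREFIX"
  mpirun_path="$2"   # e.g. "$(command -v mpirun || true)"
  case "$mpirun_path" in
    "")          echo "none"   ;;  # no mpirun on PATH at all
    "$prefix"/*) echo "conda"  ;;  # bundled MPI from the conda env
    *)           echo "system" ;;  # e.g. a module-loaded Fedora OpenMPI
  esac
}

# Typical use after `conda activate mfix-24.4.1` (and optionally `module load mpi`):
#   classify_mpi "$CONDA_PREFIX" "$(command -v mpirun || true)"
```

If this reports "conda" while you expected the system MPI (or vice versa), the PATH ordering set up by conda activate and module load mpi is the first thing to look at.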
Oh, I have had very good experiences with such a setup in the past, for several years. I have found it very snappy and agile (judging by the iterations at least), more so than many “common” CFD codes, actually.
The setup is 64-bit Fedora with Xfce as a guest OS/VM inside VirtualBox. I’ve also done this on more powerful workstations, and I once used it inside a large cluster, but that was back in 2021, and there I also had to build the entire solver, which I recall was a fun exercise.
So I don’t consider the laptop setup very complicated, actually; I just now get the feeling that things have changed when using a different conda.
I probably could use the bundled MPI and pre-built solver. But on first try it all felt slower than it used to be, and at the same time I was a little puzzled as to why I couldn’t build the solver as I could before, and whether that was linked to the overall performance.
I also just found an older VM using Anaconda (23.3.1), and there it accepts the mix of system MPI and other conda packages. The system MPI is used when building the solver, without any issues. And it was loaded prior to launching MFiX, by the command module load mpi. So the big question is, how can I achieve the same using Miniforge?
Have you tried conda remove --force mpi openmpi, as suggested above?
Yes, I tried it now, and it gives the same compilation error as before. So the system MPI is found when module load mpi is performed; it just will not combine with the rest of the solver components. And as already mentioned, it works with Anaconda, just not Miniforge. Does that give any direction as to what might be wrong?
Please enable the “verbose build” option, click “clean”, then “build solver”, and send me the build.log file.