Unable to Build Solver in Linux version

Hello, admins. I’m a novice in Linux. I can’t build the SMP and DMP (parallel) solvers on a newly installed CentOS 8 system. I used conda to install and open MFiX 23.2, and some strange things happened while installing MFiX according to the official guideline.

  1. I can’t run “module load mpi” as described in section 2.3.4 of the installation guideline. The error “bash: module: command not found…” appears.
  2. When I build the solver in the GUI, errors like the following occur:

Running python -m mfixgui.build_mfixsolver -DCMAKE_Fortran_COMPILER=gfortran
Building custom solver for fluid_bed_tfm_2d.mfx
Running cmake command:
cmake -DCMAKE_Fortran_COMPILER=gfortran -DENABLE_PYMFIX=ON -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX=/root/sdb1/fluid_bed_tfm_2d -DUDF_DIR=/root/sdb1/fluid_bed_tfm_2d -DVERSION=23.2 /root/anaconda3/envs/mfix-23.2/share/mfix/src

-- Setting build type to 'RelWithDebInfo' as none was specified.
-- MFIX build settings summary:
-- Build type = RelWithDebInfo
-- CMake version = 3.26.4
-- Fortran compiler = gfortran
-- Fortran flags =
-- ENABLE_MPI = OFF
-- ENABLE_OpenMP = OFF
-- ENABLE_CTEST = OFF
-- ENABLE_COVERAGE = OFF
CMake Error: CMake was unable to find a build program corresponding to "Unix Makefiles". CMAKE_MAKE_PROGRAM is not set. You probably need to select a different build tool.
CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
-- Configuring incomplete, errors occurred!

                 BUILD FAILED

==========================================================================

  3. When I run MFiX through conda, it opens successfully, but the terminal shows “Qt: Session management error: None of the authentication protocols specified are supported”. Is that normal?
  4. I guess I may not have installed all the required components, including the Fortran compiler / GNU make / CMake / OpenMPI. Please tell me how to install them.

Thanks a lot!

Hi @frankgao, welcome to the MFiX forum!

A few answers for you:

  1. module load mpi is specific to the Joule supercomputer cluster at NETL. This does not apply to you.

  2. You will need to install GNU make. There are two approaches here:

    a) Install it in your OS (using apt, yum, or similar - I think CentOS uses dnf) - this requires administrator privileges

    b) Use conda to install make in the MFiX environment:

 $ conda activate mfix-23.2
 (mfix-23.2)$ conda install -c conda-forge make

(replace conda command with mamba if using mamba)

  3. “Qt: Session management error” usually happens when running the software remotely; you may need to use VirtualGL (vglrun). If that is not the case, do a web search for “Qt: Session management error: None of the authentication protocols specified are supported” - this is not an MFiX-specific problem (it’s an issue with the Qt library, which MFiX uses). Some postings suggest
     unset SESSION_MANAGER
     as a fix for this.

  4. As in item 2, there is additional software you need to install. You can install it in the OS or in the Conda environment. For the next release, I think we will include CMake and GFortran in the Conda package so they will be guaranteed to be present (also, conda-forge has a nice, recent GFortran 12, more up to date than most distro packages). So you can either use dnf to get the CentOS packages, or use conda install -c conda-forge to get the packages from conda-forge. Note that the CentOS packages install into /usr/bin, while the Conda packages install into the MFiX Conda environment. (A quick sanity check for either route is shown below.)

With dnf:

$ sudo dnf install cmake make gfortran openmpi

With conda/mamba:

$ conda activate mfix-23.2
(mfix-23.2)$ conda install -c conda-forge cmake make gfortran openmpi

I recommend the conda method because it will get you more up-to-date packages (CentOS 8 is a bit behind the curve), but it’s really up to you.
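
Once those are installed, a quick sanity check (a minimal sketch; the version numbers you see will differ) is to confirm the tools resolve inside the environment:

 $ conda activate mfix-23.2
 (mfix-23.2)$ which make cmake gfortran mpirun
 (mfix-23.2)$ cmake --version
 (mfix-23.2)$ gfortran --version
 (mfix-23.2)$ mpirun --version

If which reports paths inside the mfix-23.2 environment (or /usr/bin if you used dnf), the solver build in the GUI should be able to find them.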

Good luck!

Really grateful for your advice. I can now run MFiX successfully.
But new questions have come up.

  1. I can’t run as root under mpirun, although I have set the two environment variables (OMPI_ALLOW_RUN_AS_ROOT=1; OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1) in ~/.bashrc with vim. When I replace --use-hwthread-cpus with --allow-run-as-root, it does run as root. I would like to know how I can add both options (--use-hwthread-cpus and --allow-run-as-root) to the mpirun flags.
  2. My workstation has 96 CPUs (48 cores). For example, I can’t run with 50 nodes in DMP mode. So what do the nodes (X, Y, Z) mean in DMP - CPUs or cores?

My first question to you is why you need to run as root; this is usually not recommended. If you can avoid doing this, it would be better.

The .bashrc for root won’t be read for jobs started through MPI; it’s only used for login sessions. So setting these environment variables there won’t work.

However, I see that the entry for mpirun flags on the run popup doesn’t allow you to specify more than one flag, and that is wrong. Thanks for pointing out this bug. We will fix it in the next release; in the meantime you can patch your version as follows:

  1. Exit any running MFiX applications.
  2. With the mfix-23.2 environment active:

 (mfix-23.2)$ cd $CONDA_PREFIX/lib/python3.10/site-packages/mfixgui/solver/

  3. Edit the file process.py and change line 84 from:

                run_cmd.append(mpirun_flags)

     to

                run_cmd += mpirun_flags.split()

  4. Restart MFiX.

This will allow you to specify multiple mpirun flags (separated by a space) in the run popup.
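
In case it helps to see why that one-line change matters: the run command is built as a Python list of arguments, and append() adds the whole flags string as a single argument, which mpirun can’t parse, while split() breaks it into separate arguments first. A standalone sketch (illustrative only, not the actual MFiX source; the surrounding values are made up):

 # Why the patch uses split(): compare the resulting argument lists.
 mpirun_flags = "--use-hwthread-cpus --allow-run-as-root"

 run_cmd = ["mpirun", "-np", "4"]
 run_cmd.append(mpirun_flags)
 print(run_cmd)   # ['mpirun', '-np', '4', '--use-hwthread-cpus --allow-run-as-root']  <- one malformed argument

 run_cmd = ["mpirun", "-np", "4"]
 run_cmd += mpirun_flags.split()
 print(run_cmd)   # ['mpirun', '-np', '4', '--use-hwthread-cpus', '--allow-run-as-root']  <- two separate flags

With the patch applied, entering both flags separated by a space in the mpirun flags field should pass them through correctly.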

For your second question - the terminology for cores, CPUs, and threads is pretty confusing. For clarity, let’s say you have 48 physical cores and 96 logical cores (or threads).

The DMP job will use NODESI*NODESJ*NODESK logical cores, or threads. So you should be able to run up to 96 nodes. However, by default OpenMPI checks the number of PHYSICAL cores available and will refuse to start a job that uses more than that number of nodes. It seems that OpenMPI doesn’t trust hyperthreading; I’m not sure why. But if you specify --use-hwthread-cpus then you can use all the available hyperthreads (logical cores) on your system. This is why we enable this flag by default.
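
To make that concrete (illustrative decompositions only; choose whatever fits your geometry):

 NODESI=12, NODESJ=8, NODESK=1  ->  12*8*1 = 96 processes, which fits your 96 logical cores when --use-hwthread-cpus is passed
 NODESI=10, NODESJ=5, NODESK=1  ->  10*5*1 = 50 processes, which exceeds the 48 physical cores OpenMPI counts by default, so the job is refused unless --use-hwthread-cpus is passed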


I’m sorry for replying late, and thanks for your kind answer. All my questions have been addressed.

No worries, I’m glad it’s all cleared up.