Encountering an Error When Submitting Tasks to a Slurm Cluster

Hello everyone,

When submitting tasks to our Slurm cluster, I encountered the following error message, and I'm unsure how to resolve it. Could anyone provide guidance on what might be causing this and how to fix it?

(screenshot of the error message)

I would greatly appreciate any insights or suggestions on troubleshooting steps. Please let me know if you need additional information to help diagnose the problem.

Thank you very much for your help!

Best regards,

Is there a local sysadmin you can ask for help? This is not really an MFiX issue. I do not know anything about your HPC cluster, like how many CPUs are available, etc. You can try the suggested overload-allowed directive but it will result in poor performance.

Hi, Charles,

Thank you for your reply. :grinning:

We have confirmed that the number of cores we are using locally is sufficient, so it is quite peculiar that we are experiencing this issue. Once again, thank you for your response; I will try alternative methods to see if the problem can be resolved.

Best regards,

Are you using the MFiX GUI to submit the job or are you submitting it by some other method? What does the job submission script look like? For MFiX it is a file called .qsubmit_script (not sure why it’s a hidden file!)

I submit the MFiX job via an sbatch script, as shown below.

(screenshot of the sbatch script)

I don’t know what R750XA-2 is, but -np 10 is not consistent with the 108 processes listed in your error message. And -l INFO should not be needed (INFO is the default log level).
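As a sanity check, the total MPI rank count must equal the product of the domain-decomposition keywords in the project file. A minimal sketch of that check (the file name sample.mfx, its contents, and the 3×6×6 split are illustrative assumptions; adjust the pattern to your own project file):

```shell
#!/bin/sh
# Sketch: verify that the MPI rank count (-np) matches the domain
# decomposition in an MFiX project file. The sample values below are
# assumptions chosen to match the 108 processes in the error message.

cat > sample.mfx <<'EOF'
nodesi = 3
nodesj = 6
nodesk = 6
EOF

# Extract each keyword's value (last whitespace-separated field).
ni=$(awk '/^nodesi/ {print $NF}' sample.mfx)
nj=$(awk '/^nodesj/ {print $NF}' sample.mfx)
nk=$(awk '/^nodesk/ {print $NF}' sample.mfx)

# The product is the number of MPI ranks the solver expects.
echo $((ni * nj * nk))   # prints 108 for this sample
```

If this product disagrees with the -np value in your submission script, the solver will complain about the process count.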

Hello,
I recommend contacting your local cluster support, as MPI setups are highly subject to local configuration.
A couple of hints, though:

  • the 108 processes are suspicious; maybe your .mfix file specifies a different domain decomposition (look for nodesi, nodesj, nodesk)
  • SLURM often requires ‘srun’ instead of ‘mpirun’ (and you should probably recompile MFiX against your local MPI instead of the conda one); check with your local cluster support
  • newer MPI implementations want to bind to cores, but SLURM often reserves “CPUs” (which are often hyperthreads, not cores); you can work around that with --cpus-per-task=2; check with your local cluster support
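The hints above might combine into a submission script along these lines. This is only a sketch: the job name, task count, time limit, module name, and solver invocation are all placeholders to adapt with your cluster support.

```shell
#!/bin/bash
#SBATCH --job-name=mfix_run     # placeholder job name
#SBATCH --ntasks=108            # must equal nodesi*nodesj*nodesk
#SBATCH --cpus-per-task=2       # work around hyperthread "CPUs" being reserved
#SBATCH --time=24:00:00         # placeholder time limit

# Use the cluster's MPI rather than the conda one (module name is a
# placeholder; ask local support what to load).
# module load openmpi

# On SLURM-integrated MPI installs, srun replaces mpirun.
srun ./mfixsolver -f my_project.mfx
```

Whether srun or mpirun is the right launcher here depends on how MPI was built on your cluster, which is exactly why local support should confirm.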

srun is for submitting the job to the batch scheduler, mpirun is for running parallel jobs. When running on a cluster, both are used - srun to submit a job, which in turn uses mpirun to run the MFiX solver in parallel.

> srun is for submitting the job to the batch scheduler, mpirun is for running parallel jobs

As I said, MPI and SLURM configurations are highly local to each individual cluster; that’s why the OP should ask their cluster support.
Our cluster (of which I am the admin dealing with MPI) and several supercomputers (e.g. LUMI) no longer have an mpirun at all: MPI is compiled and linked against SLURM, and srun replaces the mpirun command. You would use the srun command inside an sbatch script. We use MFiX with the system MPI, not the one from the conda install (on our cluster, whenever users bring their own MPI, there are problems).


I agree, this is an issue best taken up with local support. I was not aware of MPI being compiled against SLURM so that srun replaces mpirun, thanks for explaining!