[lug] Processor assignment

Davide Del Vento davide.del.vento at gmail.com
Thu Mar 31 10:09:28 MDT 2016


Hi Rob,

> Thanks for the help.
You're very welcome.

>> If it's a single one, why
>> are you using MPI in the first place?
> These are particle accelerator codes written by many different people/labs
Ah, I see. I think you mentioned this project you were working on a while ago.

>> It's overkill. If it's multiple
>> servers, why aren't you running a resource manager such as Slurm?
>
> Slurm, Torque, etc. are the next phase, and I'll be glad to have a
> conversation about that in the future.
At my place, the sysadmins take care of that, so I know only the
user-side aspect of it. But my understanding is that these days they
are pretty easy to install and configure. If you plan to use a
resource manager, I'd do it sooner rather than later, since it may
solve your issue (see below).

> After your comment about MPI being overkill, I extended the example to use
> fork instead of MPI. With ordinary forked processes, cores are assigned
> properly. I need to do more testing with this case to be sure there's a real
> distinction between MPI and plain fork.

As you know, fork creates new processes by "splitting" the one you
fork. I am not sure what Open MPI does on a single node, but it might
do what it does across multiple nodes, which is to connect "remotely"
(e.g. by ssh, or other means) and start "fresh" processes forked from
the sshd daemon (instead of from their "normal" parent). In that case,
without a resource manager, process placement is left to whatever the
OS does for sshd (and root is involved, whereas with a plain fork
everything can stay in user space). Hence my suggestion to jump
straight to the resource manager, which certainly has dials for tuning
this, e.g. task geometry and the like.

Hope this helps,
Davide

