It turned out I had to add MPICH as a GlueHostApplicationSoftwareRunTimeEnvironment in the information system. It's also essential to have GlueCEInfoLRMSType set to pbs; it doesn't work if you publish torque (this must be the only thing on the grid that actually cares about the difference!).
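For the record, the CE should end up publishing something like this in the information system (an illustrative fragment only, not the whole LDIF entry):
GlueHostApplicationSoftwareRunTimeEnvironment: MPICH
GlueCEInfoLRMSType: pbs
That MPICH tag is what a typical MPI JDL matches against, e.g. Requirements = Member("MPICH", other.GlueHostApplicationSoftwareRunTimeEnvironment); alongside JobType = "MPICH"; and NodeNumber = 4;.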
The job wrapper then adds some interesting arguments to the executable:
-p4pg NODELIST -p4wd PATH
Where NODELIST looks like this:
node067.beowulf.cluster 0 /tmp/.mpi/https_3a_2f_2fsvr023.gla.scotgrid.ac.uk_3a9000_2fPSn7TiiAJeV6R6w-0vQjtA/./dummy.sh
node070 1 /tmp/.mpi/https_3a_2f_2fsvr023.gla.scotgrid.ac.uk_3a9000_2fPSn7TiiAJeV6R6w-0vQjtA/./dummy.sh
node102 1 /tmp/.mpi/https_3a_2f_2fsvr023.gla.scotgrid.ac.uk_3a9000_2fPSn7TiiAJeV6R6w-0vQjtA/./dummy.sh
node139 1 /tmp/.mpi/https_3a_2f_2fsvr023.gla.scotgrid.ac.uk_3a9000_2fPSn7TiiAJeV6R6w-0vQjtA/./dummy.sh
and PATH is just the working directory for the job. Note that the magic number "0" seems to mark the node where the job executable itself runs, while "1" marks the nodes where further slots have been reserved for this job.
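In other words, NODELIST looks like nothing more than an MPICH ch_p4 procgroup file. As a rough sketch (assuming the binary, here called my_mpi_binary, is already staged into the job's working directory on a shared filesystem), the equivalent manual invocations would be something like:
# Run the executable directly, which is what the job wrapper appears to do:
./my_mpi_binary -p4pg NODELIST -p4wd $PWD
# Or hand the same procgroup file to mpirun:
mpirun -p4pg NODELIST ./my_mpi_binary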
So clearly the NODELIST file then needs to be picked up by mpirun and used to start all of the MPI subprocesses. According to the EGEE MPI Wiki, the standard method seems to be to use the i2g mpi-start command, so the arguments must be put into a form appropriate for it. Open questions remain, though:
- How to get i2g mpi-start to work. When I give it an MPI binary it seems determined to compile it; however, this falls over, even though MPICH 1.2.7 is in the path.
- How do I bypass mpi-start and run a pre-prepared MPI binary, which is what a Glasgow user will want to do? (See the sketch after this list.)
- How on earth will torque account for all of this properly?
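For what it's worth, my understanding from the wiki so far is that mpi-start is driven by I2G_* environment variables rather than command-line arguments, so a wrapper for a pre-built binary might look something like the sketch below (the binary name and paths are placeholders, and I haven't actually got this working on our CE yet):
#!/bin/sh
# Sketch of an mpi-start wrapper for a pre-compiled MPI binary.
# my_mpi_binary is a placeholder for an executable already staged with the job.
export I2G_MPI_APPLICATION=$PWD/my_mpi_binary
export I2G_MPI_APPLICATION_ARGS=""
export I2G_MPI_TYPE=mpich
# No pre-run hook is defined, so mpi-start should have nothing to compile.
# I2G_MPI_START is set by the site configuration to point at the mpi-start script.
$I2G_MPI_START
If that holds up, the pre-prepared binary case is just a matter of not defining a compilation hook and pointing I2G_MPI_APPLICATION at the executable.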
Further reading: EGEE-II-MPI-WG-TEC.doc.