ROMIO and Intel-MPI
ROMIO, in various forms, provides the MPI-IO implementation for just about every MPI implementation out there. These implementations pick up ROMIO's hints along with our source code, but they also add tuning parameters of their own via environment variables.
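One convenient way to experiment with ROMIO's hints without modifying the application is ROMIO's own ROMIO_HINTS environment variable, which names a file of "key value" pairs. A minimal sketch (the file name and the particular hint values here are just examples):

# write a hints file: one ROMIO hint per line, "name value"
cat > my_romio_hints <<EOF
romio_cb_write enable
cb_buffer_size 16777216
EOF
# tell ROMIO where to find it; the hints get applied when files are opened
export ROMIO_HINTS=$PWD/my_romio_hints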
The Intel MPI library uses ROMIO, but configures the file-system-specific drivers a bit differently. In MPICH, we select which file system drivers to support at compile time with the --with-file-system configure flag (a sketch follows the list below); the selected drivers are compiled directly into the MPICH library. Intel MPI instead builds its file-system drivers as loadable modules, and relies on two environment variables to enable and select them:
- I_MPI_EXTRA_FILESYSTEM
- I_MPI_EXTRA_FILESYSTEM_LIST
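For comparison, here is roughly what the MPICH compile-time selection mentioned above looks like; the particular driver list (ufs, nfs, lustre) and the install prefix are only illustrations:

# build MPICH with a fixed set of ROMIO file system drivers compiled in
./configure --with-file-system=ufs+nfs+lustre --prefix=/path/to/install
make && make install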
Let's say you had a Lustre file system, like this fellow on the HDF5 mailing list [edit: archives have moved]. Then you would invoke mpiexec like this:
mpiexec -env I_MPI_EXTRA_FILESYSTEM on \
        -env I_MPI_EXTRA_FILESYSTEM_LIST lustre -n 2 ./test
I found this information in the Intel MPI Library Reference Manual, which contains a ton of other tuning parameters.
(Update 12 May 2015): Intel MPI 5.0.2 and newer have GPFS support. One would enable it the same way, with the I_MPI_EXTRA_FILESYSTEM_LIST variable:
mpiexec -env I_MPI_EXTRA_FILESYSTEM on \
        -env I_MPI_EXTRA_FILESYSTEM_LIST gpfs
(Update 7 April 2023): Intel MPI has added a few more file systems: Panasas (panfs) and DAOS (daos) are supported too.
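Assuming those newer releases still honor the same pair of environment variables, selecting DAOS would follow the same pattern (an illustrative sketch modeled on the Lustre example above):

mpiexec -env I_MPI_EXTRA_FILESYSTEM on \
        -env I_MPI_EXTRA_FILESYSTEM_LIST daos -n 2 ./test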