Run the test program with all available configurations on your cluster.
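The steps below assume a test program ./myprog that prints one line per rank. If you do not already have one, a minimal stand-in could look like the following (the name myprog and the build command are placeholders; use the compiler driver supplied with your MPI installation):

```c
/* Minimal per-rank hello program, a stand-in for ./myprog.
   Build, e.g.:  mpiicc -o myprog myprog.c   (Intel MPI)
             or  mpicc  -o myprog myprog.c   (other MPI implementations) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    MPI_Get_processor_name(name, &name_len);

    /* One line of output per rank, as expected in the tests below. */
    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

Running it with -n 2 should produce exactly two lines, one from rank 0 and one from rank 1.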
1) Test the TCP/IP-capable network fabric using:

$ mpirun -n 2 -genv I_MPI_DEBUG 2 -genv I_MPI_FABRICS tcp ./myprog

You should see one line of output for each rank, as well as debug output indicating that the TCP/IP-capable network fabric is being used.
2) Test the shared-memory and DAPL-capable network fabrics using:

$ mpirun -n 2 -genv I_MPI_DEBUG 2 -genv I_MPI_FABRICS shm:dapl ./myprog

You should see one line of output for each rank, as well as debug output indicating that the shared-memory and DAPL-capable network fabrics are being used.
3) The Intel MPI Library selects the most appropriate fabric combination automatically. To select a device explicitly, set I_MPI_DEVICE=<device>:<provider>; for example, I_MPI_DEVICE=rdma:OpenIB-cma0 selects InfiniBand. Test any other fabric using:

$ mpirun -n 2 -genv I_MPI_DEBUG 2 -genv I_MPI_FABRICS <fabric> ./myprog

where <fabric> is any supported fabric. For more information, see Selecting a Network Fabric.