MPI Program Debugging and Other Special Options


How do you verify which device/provider is used for communication?

Set the I_MPI_DEBUG environment variable to 2 and the Intel MPI Library will report which device/provider is in use. For example:

#mpiexec -np numproc -genv I_MPI_DEBUG 2 $executable

NOTE : The -genv option assigns a value to an environment variable for all MPI processes (here I_MPI_DEBUG; the same mechanism is used for I_MPI_FABRICS or I_MPI_DEVICE).
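If you do not already have a test executable, a minimal MPI program such as the one below can serve as $executable in the commands on this page (a sketch; the file name myprog.c and the program name myprog are placeholders, not part of any distribution):

/* myprog.c - minimal MPI test program (hypothetical name) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank = 0, size = 0, name_len = 0;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                     /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);       /* rank of this process       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);       /* total number of processes  */
    MPI_Get_processor_name(name, &name_len);    /* host this rank runs on     */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();                             /* shut down the MPI runtime  */
    return 0;
}

Compile and run it with debug output enabled, for example:

$ mpicc -Wall -o myprog myprog.c
$ mpirun -n 2 -genv I_MPI_DEBUG 2 ./myprog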
Select a Particular Network for Communication Between MPI Processes

I_MPI_DEVICE=<device>:<provider>

Ex 1: I_MPI_DEVICE=rdma:OpenIB-cma0
Ex 2: $ mpirun -n <No of processes> -env I_MPI_FABRICS shm:dapl <executable>

Note : Use the -genv option to assign a value to the I_MPI_FABRICS variable globally (for all MPI processes); -env applies it to the current argument set only.
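Equivalently, the fabric can be selected by exporting the variable before launching; a small sketch (the process count and the ./myprog name are placeholders):

$ export I_MPI_FABRICS=shm:dapl
$ mpirun -n 4 ./myprog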


Run the test program with all available fabric configurations on your cluster:

1) Test the TCP/IP-capable network fabric using:
$ mpirun -n 2 -genv I_MPI_DEBUG 2 -genv I_MPI_FABRICS tcp ./myprog
You should see one line of output for each rank, as well as debug output indicating the TCP/IP-capable network fabric is being used.

2) Test the shared-memory and DAPL-capable network fabrics using:
$ mpirun -n 2 -genv I_MPI_DEBUG 2 -genv I_MPI_FABRICS shm:dapl ./myprog
You should see one line of output for each rank, as well as debug output indicating the shared-memory and DAPL-capable network fabrics are being used. 

3) By default, the Intel MPI Library selects the most appropriate fabric combination automatically. To select a fabric explicitly, set I_MPI_DEVICE=<device>:<provider>.
For example, I_MPI_DEVICE=rdma:OpenIB-cma0 selects InfiniBand explicitly.


4) Test any other fabric using:
$ mpirun -n 2 -genv I_MPI_DEBUG 2 -genv I_MPI_FABRICS <fabric> ./myprog
where <fabric> is any supported fabric. For more information, see Selecting a Network Fabric.
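Which fabric names are valid depends on the Intel MPI Library version; typical values include shm, dapl, tcp, tmi, and ofa (or ofi in newer releases), used alone or as a shm:<fabric> pair. For example, to test shared memory combined with TCP:

$ mpirun -n 2 -genv I_MPI_DEBUG 2 -genv I_MPI_FABRICS shm:tcp ./myprog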


OTHER USEFUL MPI COMMANDS
#mpicc -Wall           (pass -Wall through to the underlying compiler to enable all warnings)
#mpicc -showme:link    (print the link flags the compiler wrapper adds; an Open MPI wrapper option, Intel MPI's mpicc uses -show)
#mpi
NOTE : #ulimit (set the soft and hard limits to unlimited to avoid runtime issues).
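As a sketch of the ulimit note above (which limits matter depends on the fabric; locked memory is the usual problem for DAPL/InfiniBand, and hard limits are normally raised via /etc/security/limits.conf):

$ ulimit -a                # show the current soft limits
$ ulimit -l unlimited      # max locked memory, needed for RDMA buffer registration
$ ulimit -s unlimited      # stack size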
