NVIDIA/nccl-tests

2-node NCCL test doesn't work

SdEnd opened this issue · 7 comments

I have two servers, one Dell and one FusionServer; nccl-tests doesn't work across them. But if all servers are the same model, nccl-tests works.

My environment:

os: ubuntu 22.04
cuda: 12.4
NV driver: 550

When I run this command across the two different server models, there is no response even after waiting an hour:

mpirun  --allow-run-as-root -n 16 -N 8 --hostfile host  -x NCCL_DEBUG=INFO   /root/nccl-tests/build/all_reduce_perf -b 128M -e 1g -f 2 -g 1

Then the terminal shows:
[screenshot of the hung terminal output]

But when I run this, it works:

mpirun  --allow-run-as-root -n 16 -N 8 --hostfile host  -x NCCL_DEBUG=INFO   /root/nccl-tests/build/all_reduce_perf -b 8 -e 128 -f 2 -g 1

[screenshot of the successful run output]

Why is this happening?
Could it be because I'm using two different server models?

Are you saying that it works for small message sizes (8B-128B) but hangs for larger ones (128B-1GB)? That could very well be; NCCL may choose different algorithm/protocol combinations depending on the message size, and some of them might be working on your systems while others fail.

We'll need a lot more info to diagnose this. In particular, complete outputs of runs with NCCL_DEBUG=INFO NCCL_DEBUG_SUBSYS=INIT,ENV,TUNING (TUNING in particular should show us the algorithm/protocol that NCCL is using, so we should see what works and what does not). The output of nvidia-smi topo -m from both server node types would also be helpful. Finally, how's the interconnect between these servers? Are the NICs uniform across the different server types? Are all the NICs wired and can the servers communicate with each other using each NIC pair?
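For example, adapting the command from above (a sketch; everything but the NCCL_DEBUG_SUBSYS variable stays the same as your original run):

mpirun --allow-run-as-root -n 16 -N 8 --hostfile host -x NCCL_DEBUG=INFO -x NCCL_DEBUG_SUBSYS=INIT,ENV,TUNING /root/nccl-tests/build/all_reduce_perf -b 128M -e 1g -f 2 -g 1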

@kiskra-nvidia
thanks, here are the logs:
8-1G.txt
8-128.txt

Huh... It appears to hang during the warmup iterations for buffer size 1GB (if I understand correctly, that happens for any buffer size above 128B?).

Did you verify that the IB network is fully operational between the nodes (using IB-specific benchmarks, not NCCL)?
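For instance, a minimal point-to-point check with the perftest tools might look like this (a sketch; mlx5_0 and the hostname are placeholders for your actual HCA and server address):

# on the first server:
ib_write_bw -d mlx5_0
# on the second server, pointing at the first:
ib_write_bw -d mlx5_0 <first-server-hostname>

Repeating this for each NIC pair would confirm that every link can actually move data.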

@kiskra-nvidia
yes, I verified the IB network; all tests pass

I'm out of ideas then. @sjeaugey, @AddyLaddy, any idea why a run with a 128B buffer limit would succeed but larger (1GB) runs hang (during warmup)? NCCL appears to choose tree/LL up to 128B, tree/SIMPLE for 1GB.
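One way to narrow it down (a sketch; the NCCL_ALGO and NCCL_PROTO variables pin NCCL to a single combination so each one can be tested at the failing size):

mpirun --allow-run-as-root -n 16 -N 8 --hostfile host -x NCCL_DEBUG=INFO -x NCCL_ALGO=Tree -x NCCL_PROTO=Simple /root/nccl-tests/build/all_reduce_perf -b 1G -e 1G -g 1

Swapping in NCCL_PROTO=LL or NCCL_ALGO=Ring should reveal which combination is the one that hangs.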

@SdEnd How did you configure the IB network?

@jeffreyyjp
my cluster has 128 nodes, 8 GPUs per node, and uses a spine-leaf IB network.