Minor issues in the code
Aintky2000 commented
In train.py, lines 973 and 982, maybe you can add an `if args.distributed` check for the case where the code is run on a single GPU; otherwise there is a bug.
deepcs233 commented
Hi!
Sorry, I have never run the code without distributed mode. If we only use one GPU, maybe the value of torch.distributed.get_world_size() is 1?
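A note on the guard being discussed: `torch.distributed.get_world_size()` does not simply return 1 when no process group has been initialized; it raises an error. A common pattern is to wrap the call in a small helper that falls back to 1 in the non-distributed case. A minimal sketch (the helper name `effective_world_size` is hypothetical, and `distributed` stands in for `args.distributed`):

```python
def effective_world_size(distributed: bool) -> int:
    """Return the number of training processes, falling back to 1
    when not running in distributed mode (e.g. a single-GPU run)."""
    if distributed:
        # Import lazily so single-GPU runs never touch torch.distributed.
        import torch.distributed as dist
        # Guard: get_world_size() raises if no process group exists.
        if dist.is_available() and dist.is_initialized():
            return dist.get_world_size()
    return 1
```

Calling the world-size-dependent code through a helper like this lets the same train.py run unchanged with or without `torch.distributed.launch`/`torchrun`.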