opendilab/InterFuser

Minor issues in the code

Closed this issue · 1 comment

In train.py, lines 973 and 982, maybe you could add an `if args.distributed` check for the case where the code is run on a single GPU; otherwise there will be a bug.
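
A minimal sketch of the suggested guard, assuming the lines in question reduce metrics with torch.distributed collectives (the helper names `reduce_mean` and `reduce_metric` are hypothetical, not taken from train.py):

```python
import torch
import torch.distributed as dist

# Hypothetical helpers illustrating the suggested fix; the actual statements at
# train.py lines 973 and 982 may differ.
def reduce_mean(tensor: torch.Tensor, world_size: int) -> torch.Tensor:
    """Average a tensor across all processes (valid only in distributed mode)."""
    rt = tensor.clone()
    dist.all_reduce(rt, op=dist.ReduceOp.SUM)
    return rt / world_size

def reduce_metric(loss: torch.Tensor, args) -> torch.Tensor:
    if args.distributed:
        # Safe here: init_process_group() has been called, so collectives work.
        return reduce_mean(loss.detach(), dist.get_world_size())
    # Single-GPU run: a collective call would raise
    # "Default process group has not been initialized", so keep the local value.
    return loss.detach()
```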

Hi!
Sorry, I have never run the code without the distributed mode. If we only use one GPU, would the value of torch.distributed.get_world_size() just be 1?
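
For reference, a small sketch of a safe world-size query, under the assumption that single-GPU runs never call torch.distributed.init_process_group (in which case get_world_size() cannot be called directly):

```python
import torch.distributed as dist

def world_size() -> int:
    """Return the number of processes, or 1 when not running distributed.

    get_world_size() requires an initialized default process group, so a plain
    single-GPU run needs this fallback (hypothetical helper, not from train.py).
    """
    if dist.is_available() and dist.is_initialized():
        return dist.get_world_size()
    return 1
```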