sukjunhwang/VITA

How to use multiple GPUs for inference?

xjtuwh opened this issue · 6 comments

I tried to test a sequence of 399 frames with demo.py, but it fails with the following error:
RuntimeError: CUDA out of memory. Tried to allocate 23.72 GiB (GPU 0; 23.70 GiB total capacity; 595.83 MiB already allocated; 20.43 GiB free; 1.63 GiB reserved in total by PyTorch)
How can I run inference on multiple GPUs? Thanks.

Hi @xjtuwh,
Could you please tell me the terminal command you used?

python demo_vita/demo.py --config-file configs/coco/vita_ir_R50_bs16_50ep.yaml --input /media/wuhan/disk1/dataset/DSAT/data1/*.bmp --output DSAT_output/data1 --save-frames True --opts MODEL.WEIGHTS output/model_final.pth

There are 399 images in the input directory. model_final.pth was pretrained on our custom dataset; we trained only on individual images, COCO-style. I found that the input images are resized to 800×800, and I think this is what causes the out-of-memory error on a single GPU.

I tried to use multiple GPUs by enabling the commented-out line demo = VisualizationDemo(cfg, parallel=True, conf_thres=args.confidence_threshold), but it did not help.

Could you try the following two changes?

  1. Set MODEL.VITA.ENC_WINDOW_SIZE to 6.
  2. Since the resolution is high, decrease MODEL.VITA.TEST_RUN_CHUNK_SIZE, e.g. to 6 (see the example command below).
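
Assuming demo.py forwards --opts to the config as space-separated KEY VALUE pairs (the MODEL.WEIGHTS override in your command suggests it does), both settings can simply be appended to the existing command:

python demo_vita/demo.py --config-file configs/coco/vita_ir_R50_bs16_50ep.yaml --input /media/wuhan/disk1/dataset/DSAT/data1/*.bmp --output DSAT_output/data1 --save-frames True --opts MODEL.WEIGHTS output/model_final.pth MODEL.VITA.ENC_WINDOW_SIZE 6 MODEL.VITA.TEST_RUN_CHUNK_SIZE 6

For intuition, the idea behind TEST_RUN_CHUNK_SIZE appears to be pushing frames through the heavy per-frame stage a few at a time, so peak GPU memory scales with the chunk size rather than with all 399 frames. A minimal sketch of that pattern (not VITA's actual code path; model and frames are hypothetical stand-ins):

    import torch

    def forward_in_chunks(model, frames, chunk_size=6):
        # Process the clip chunk_size frames at a time; peak memory is
        # bounded by one chunk's activations instead of the whole clip.
        outputs = []
        with torch.no_grad():
            for start in range(0, len(frames), chunk_size):
                chunk = frames[start:start + chunk_size]
                # Move results off the GPU right away so chunk outputs
                # don't accumulate in GPU memory.
                outputs.append(model(chunk).cpu())
        return outputs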

OK, thank you very much.