lxtGH/SFSegNets

GPU not being used in eval Cityscapes?

cuizy3 opened this issue · 1 comment

Hi, thanks for your repo and models. I am trying to use the resnet50 (Cityscapes) model to evaluate on the Cityscapes test images, and I noticed it takes ~3 s/frame, which seems wrong. I checked my CPU/GPU usage, and the GPU does not seem to be utilized much (see attached image).
[attached image: gpu_usage_sfnet]
Could the high CPU usage be because mmcv-cpu (the CPU-only build) was installed? (A quick sanity check is sketched below.) What should I do so that the model evaluates in real time?
I use the scripts in ./scripts/submit_test_cityscapes to submit the job. I also ran it on 8 GPUs just in case.
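A minimal sanity check for this, assuming PyTorch plus mmcv; `mmcv.ops.nms` below is just one arbitrary compiled op used as a probe, not something this repo specifically requires:

```python
import torch

# Check that PyTorch itself can see the GPU(s).
print(torch.cuda.is_available())   # should print True
print(torch.cuda.device_count())   # number of visible GPUs
print(torch.version.cuda)          # CUDA version PyTorch was built against

# mmcv's compiled CUDA ops exist only in the full build; the
# CPU-only/lite build fails to import them.
try:
    from mmcv.ops import nms  # any compiled op would do here
    print("mmcv compiled ops are available")
except ImportError:
    print("mmcv looks like the CPU-only/lite build")
```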

Update: following your suggestion in another issue to run speed_test, it reports ~22 FPS on a single NVIDIA Tesla V100 (which should be faster than the FPS you report for a 1080?). Why is there such a discrepancy between the speed-test FPS and the FPS when I actually evaluate the images? How can I check whether the images are loaded onto the GPU and the model is running on the GPU? Thanks!
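For the last question, a quick check in plain PyTorch would be something like this (`model` and `batch` are placeholders for whatever the eval script builds, not names from this repo):

```python
import torch

def report_devices(model, batch):
    # All parameters normally share one device, so the first is enough.
    print("model device:", next(model.parameters()).device)
    # The input batch must live on the same device as the model.
    print("batch device:", batch.device, "is_cuda:", batch.is_cuda)

# If either prints "cpu", move it explicitly before the forward pass:
#   model = model.cuda()
#   batch = batch.cuda()
```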

lxtGH commented

Hi! For the real-time FPS testing, we ignore the data-loading time and compute the FPS over the forward pass only.
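In other words, the measurement looks roughly like this (a minimal sketch of that protocol with a dummy Cityscapes-sized input, not the repo's actual speed_test script):

```python
import time
import torch

@torch.no_grad()
def forward_fps(model, input_size=(1, 3, 1024, 2048), iters=100, warmup=10):
    model.eval().cuda()
    x = torch.randn(*input_size).cuda()   # dummy input: data loading excluded
    for _ in range(warmup):               # warm-up to stabilize cuDNN etc.
        model(x)
    torch.cuda.synchronize()              # CUDA is async: sync before timing
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()              # and sync again after the loop
    return iters / (time.perf_counter() - start)
```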
Moreover, you can merge the BN layers into the convolutions to speed it up, but we do not do this.
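For reference, PyTorch ships a helper for folding an eval-mode BatchNorm into the preceding convolution; fusing a whole network needs a module traversal, which is omitted in this sketch:

```python
import torch
from torch.nn.utils.fusion import fuse_conv_bn_eval

# Fold an eval-mode BatchNorm into the preceding convolution.
conv = torch.nn.Conv2d(3, 16, 3, padding=1, bias=False).eval()
bn = torch.nn.BatchNorm2d(16).eval()
fused = fuse_conv_bn_eval(conv, bn)  # one conv, BN folded into weight/bias

x = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    # The fused conv matches conv -> bn up to floating-point error.
    print(torch.allclose(bn(conv(x)), fused(x), atol=1e-5))
```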
For TensorRT deployment, we report the official timings from the TensorRT library.

For testing on Cityscapes, we only support single-GPU inference.