TuSimple/mx-maskrcnn

The speed of train_maskrcnn on coco

zl1994 opened this issue · 3 comments

I use 4 GTX 1080 GPUs (one image per GPU) to alternately train Mask R-CNN on COCO. When training the RPN, the speed reaches 8 samples/sec. But when training Mask R-CNN, the speed varies and is slow: sometimes 2 samples/sec, sometimes 0.1 samples/sec, and the Volatile GPU-Util reported by nvidia-smi is 0 most of the time. In summary, I have three questions:

  1. Why is training the RPN so much faster than training Mask R-CNN?
  2. Why does the training speed of Mask R-CNN vary so much?
  3. Why is the Volatile GPU-Util 0, and is that what makes Mask R-CNN training slow? (A rough timing check is sketched below.)
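For what it's worth, one quick way to confirm where the time goes is to time the data iterator separately from the forward/backward pass. This is only a rough sketch, not code from this repo: `train_iter` and `module` stand in for whatever iterator and `mx.mod.Module` train_maskrcnn actually builds. If `data` time dominates, the loader is the bottleneck and GPU-Util sitting at 0 is the symptom, not the cause.

```python
import time
import mxnet as mx

def time_batches(train_iter, module, max_batches=50):
    """Split wall-clock time into data-loading vs. GPU compute per batch."""
    data_time, compute_time = 0.0, 0.0
    train_iter.reset()
    tic = time.time()
    for i, batch in enumerate(train_iter):   # time spent here is batch preparation
        data_time += time.time() - tic
        tic = time.time()
        module.forward_backward(batch)       # time spent here is the network itself
        module.update()
        mx.nd.waitall()                      # force a sync so the timing is honest
        compute_time += time.time() - tic
        if i + 1 >= max_batches:
            break
        tic = time.time()
    print('data: %.1fs  compute: %.1fs over %d batches'
          % (data_time, compute_time, max_batches))
```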

I also have similar questions and don't know how to solve them. My speed is 0.7 samples per second, which is also very slow.

I found that most of the time is spent getting the batch data. I tried to prefetch the batch data in multiple processes, but it didn't help (still slow). Any other solutions?
@xuw080 @zl1994
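One thing that may be worth trying before rolling a custom multi-process loader is MXNet's built-in `mx.io.PrefetchingIter`, which prepares the next batch in a background thread while the GPU works on the current one. A minimal sketch, assuming a hypothetical `build_train_iter()` in place of however this repo actually constructs its Mask R-CNN loader:

```python
import mxnet as mx

train_iter = build_train_iter()              # hypothetical: the repo's own DataIter
train_iter = mx.io.PrefetchingIter(train_iter)  # prefetch next batch in a background thread

for batch in train_iter:                     # same DataIter interface as before
    ...                                      # forward/backward as usual
```

Since `PrefetchingIter` is thread-based, the GIL limits how much it can overlap if the per-batch Python work (mask targets, anchor assignment, etc.) is heavy; in that case the batch preparation itself probably needs to be made cheaper.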

rxqy commented

Hi, I also encountered a similar problem here.
Any suggestions?