Memory leak issue and question about num_classes
Madelynnn opened this issue · 2 comments
Dear authors,
Thank you very much for this fine piece of science; I am using your work as inspiration for my own research. While using your model, I came across a memory leak in mean_ap. On line 310 a multiprocessing pool is opened, but it is never closed, which leads to a memory leak and OOM issues on my system (I have 32 GB of video RAM, so I was surprised). The pool should be closed after line 365. This issue is also known to the mmdetection authors and has been fixed in a more recent version of their code.
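For reference, here is a minimal sketch of the fix I applied locally. The function and argument names (`eval_map_sketch`, `tpfp_placeholder`) are placeholders and do not match the actual code in mean_ap.py; the point is only that the pool created around line 310 should be closed and joined once the per-class results are collected (after line 365 in my copy):

```python
from multiprocessing import Pool


def tpfp_placeholder(args):
    # Stand-in for the per-class TP/FP computation done in mean_ap.py.
    return args


def eval_map_sketch(per_class_args, nproc=4):
    # A pool is created once per evaluation call (line 310 in my copy)
    # and used for the per-class computations.
    pool = Pool(nproc)
    results = pool.map(tpfp_placeholder, per_class_args)
    # Fix: close and join the pool before returning (after line 365 in my
    # copy); without this, every evaluation call leaks worker processes.
    pool.close()
    pool.join()
    return results


if __name__ == "__main__":
    print(eval_map_sketch(list(range(8))))
```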
Also, I have a question about the num_classes parameter in the model config file for training on Cityscapes. For the semantic_head it is set to 19, which I can understand: the total number of stuff + thing classes for Cityscapes is 19, as defined by the trainId field in the cityscapesscripts helpers/labels.py file. However, for the bbox_head and the mask_head it is set to 9. Does this value come from the thing classes (8) + 1 (background) = 9? Does that mean the bbox_head and mask_head can only predict 8 classes (the thing classes), while there are 19 classes in total? I am confused by these values, because in my opinion you would also want the bbox_head and mask_head to classify stuff classes. Can you elaborate on how you arrived at these values? I have sketched how I currently read the counts below.
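To make my reading concrete, here is a hypothetical mmdetection-style snippet illustrating how I understand the class counts. This is not copied from the repo's actual config file, and the key names (roi_head, bbox_head, mask_head, semantic_head) are only meant to mirror the structure I see in it:

```python
# Hypothetical illustration of the Cityscapes class counts, not the real config.

NUM_THING_CLASSES = 8    # person, rider, car, truck, bus, train, motorcycle, bicycle
NUM_STUFF_CLASSES = 11   # road, sidewalk, building, wall, fence, pole,
                         # traffic light, traffic sign, vegetation, terrain, sky
NUM_SEMANTIC_CLASSES = NUM_THING_CLASSES + NUM_STUFF_CLASSES  # 19

model = dict(
    roi_head=dict(
        # My reading: the instance branches classify only thing classes,
        # plus one background class, since stuff regions have no box/mask labels.
        bbox_head=dict(num_classes=NUM_THING_CLASSES + 1),   # 9
        mask_head=dict(num_classes=NUM_THING_CLASSES + 1),   # 9
    ),
    # The semantic branch covers all stuff + thing classes.
    semantic_head=dict(num_classes=NUM_SEMANTIC_CLASSES),    # 19
)
```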
Thanks in advance for answering my question.
Never mind, I figured it out.
Hi, can you tell me how to set num_classes for bbox_head and mask_head?