yuantn/MI-AOD

Train with the whole dataset

Closed this issue · 5 comments

Hello!

I am trying to run experiments that gradually use the whole dataset.
But when the proportion of labeled data used reached 1100/1659, I ran into a StopIteration error.
I was wondering how to set the config so that the whole labeled dataset is used by the time all active learning cycles finish.

Thanks in advance.
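(For context, a quick budget check makes the mismatch easy to see: the initial labeled size plus all per-cycle additions must sum exactly to the dataset size, 1659 here, for the last cycle to use every image. The sketch below is illustrative arithmetic only; the numbers and variable names are hypothetical and are not the repository's actual config keys.)

import math

# Illustrative arithmetic only -- names and values are hypothetical,
# NOT the actual MI-AOD config keys.
num_images = 1659          # total images in the dataset
initial_labeled = 331      # hypothetical initial labeled budget
per_cycle_added = 166      # hypothetical budget added per cycle
num_cycles = 8             # hypothetical number of active learning cycles

final_labeled = initial_labeled + num_cycles * per_cycle_added
print(final_labeled)       # 1659 -- only if the budgets are chosen to match
assert final_labeled <= num_images, "budget overshoots the unlabeled pool"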

Hello!

I have modified the order in which samples are selected from X_U in mmdet/utils/active_datasets.py; you can update this file and try again.
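(For readers hitting the same issue, the sketch below illustrates the kind of selection step being discussed: moving the most uncertain samples from the unlabeled pool X_U into the labeled set X_L, while capping the request at the pool size so the labeled set can grow to the full dataset without running past it. The function name and signature are illustrative; this is not the actual code in mmdet/utils/active_datasets.py.)

# Minimal sketch of an active-learning selection step, assuming one
# uncertainty score per unlabeled image. NOT the actual implementation
# in mmdet/utils/active_datasets.py.
import numpy as np

def select_from_X_U(X_L, X_U, uncertainty, budget):
    """Move the `budget` most uncertain samples from X_U into X_L."""
    budget = min(budget, len(X_U))        # never request more than the pool holds
    order = np.argsort(-uncertainty)      # most uncertain first
    picked = X_U[order[:budget]]
    X_L_new = np.sort(np.concatenate([X_L, picked]))
    X_U_new = np.sort(np.setdiff1d(X_U, picked))
    return X_L_new, X_U_new

# Toy example: 1659 images, 1100 already labeled, 600 more requested.
all_idx = np.arange(1659)
X_L, X_U = all_idx[:1100], all_idx[1100:]
X_L, X_U = select_from_X_U(X_L, X_U, np.random.rand(len(X_U)), budget=600)
print(len(X_L), len(X_U))  # 1659 0 -- the final cycle covers the whole dataset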

Thank you for the update. However, the problem occurred again.
Here is the running log.

2021-10-13 05:19:28,508 - mmdet - INFO - Epoch [1][600/1110] lr: 1.000e-03, eta: 0:02:12, time: 0.137, data_time: 0.005, memory: 2144, l_det_cls: 0.2635, l_det_loc: 0.1598, l_wave_dis: 0.0000, l_imgcls: 0.1265, L_wave_min: 0.5498
2021-10-13 05:19:28,653 - mmdet - INFO - Epoch [1][600/1110] lr: 1.000e-03, eta: 0:02:18, time: 0.138, data_time: 0.006, memory: 2144, l_det_cls: 0.2635, l_det_loc: 0.1598, l_wave_dis: 0.0000, l_imgcls: 0.1265, L_wave_min: 0.5498
2021-10-13 05:19:42,155 - mmdet - INFO - Epoch [1][650/1110] lr: 1.000e-03, eta: 0:02:00, time: 0.134, data_time: 0.005, memory: 2144, l_det_cls: 0.2483, l_det_loc: 0.1562, l_wave_dis: 0.0000, l_imgcls: 0.1240, L_wave_min: 0.5286
2021-10-13 05:19:42,303 - mmdet - INFO - Epoch [1][650/1110] lr: 1.000e-03, eta: 0:02:05, time: 0.135, data_time: 0.005, memory: 2144, l_det_cls: 0.2483, l_det_loc: 0.1562, l_wave_dis: 0.0000, l_imgcls: 0.1240, L_wave_min: 0.5286
2021-10-13 05:19:55,801 - mmdet - INFO - Epoch [1][700/1110] lr: 1.000e-03, eta: 0:01:47, time: 0.137, data_time: 0.006, memory: 2144, l_det_cls: 0.2607, l_det_loc: 0.1568, l_wave_dis: 0.0000, l_imgcls: 0.1211, L_wave_min: 0.5387
2021-10-13 05:19:55,941 - mmdet - INFO - Epoch [1][700/1110] lr: 1.000e-03, eta: 0:01:51, time: 0.137, data_time: 0.006, memory: 2144, l_det_cls: 0.2607, l_det_loc: 0.1568, l_wave_dis: 0.0000, l_imgcls: 0.1211, L_wave_min: 0.5387

[Screenshot of the StopIteration traceback]

Besides, I noticed that l_wave_dis became zero. Would that be OK?

It shouldn't be zero. Which dataset and how many GPUs did you use?

I trained it with a private dataset. This dataset may be rather hard for the detection task; in my previous work with another model, the model sometimes could not detect any positives during training. I was wondering whether l_wave_dis becomes zero when all detections are negative (below the threshold).
And I trained MI-AOD on a single GPU.

It is possible because l_wave_dis represents the prediction discrepancy between the two classifiers. If both of them give negative classification results for all anchors, l_wave_dis will be 0.
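(A toy sketch of how such a discrepancy term can collapse to zero: if the discrepancy between the two classification heads is only accumulated on anchors that at least one head predicts as positive, then an all-negative prediction leaves nothing to accumulate. The masking choice below is one plausible reading for illustration, not the repository's exact loss implementation.)

# Toy sketch only -- NOT the actual MI-AOD loss code. It shows why a
# classifier-discrepancy term can be exactly zero when neither head
# predicts any positive anchor.
import torch

def wave_dis(scores_1, scores_2, pos_thr=0.5):
    """scores_1, scores_2: (num_anchors, num_classes) sigmoid scores
    from the two classification heads. The discrepancy is accumulated
    only on anchors that at least one head considers positive
    (a hypothetical masking choice used here for illustration)."""
    pos_mask = (scores_1.max(dim=1).values > pos_thr) | \
               (scores_2.max(dim=1).values > pos_thr)
    if not pos_mask.any():
        # No anchor is predicted positive by either head -> zero loss.
        return scores_1.new_zeros(())
    return (scores_1[pos_mask] - scores_2[pos_mask]).abs().mean()

# All-negative predictions: both heads output low scores everywhere.
s1 = torch.full((100, 20), 0.01)
s2 = torch.full((100, 20), 0.02)
print(wave_dis(s1, s2))  # tensor(0.) -- matches the logged l_wave_dis: 0.0000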

But I have never seen such extreme results before.