The training results are not consistent with the results in the paper
Closed this issue · 3 comments
ABCAI93 commented
First, thank you for the awesome package! I encountered some problems in my experiments.
- My experimental data is VOC 2007 + VOC 2012. I initially trained for 60,000 iterations on VOC 2007 and got a mAP of 0.5476, which is higher than the paper's result with 10% labeled samples. Could you tell me what mAP you get when no additional samples are labeled?
- The results obtained by adding random samples are similar to those obtained by your paper's method. The test results of the models obtained by your method and by random sampling after 60,000, 80,000, 100,000, 120,000, and 140,000 iterations are as follows:

| Iterations | Your method (mAP) | Random sampling (mAP) |
| --- | --- | --- |
| 60,000 | 0.5476 | 0.5067 |
| 80,000 | 0.5570 | 0.5502 |
| 100,000 | 0.5798 | 0.5670 |
| 120,000 | 0.5845 | 0.5720 |
| 140,000 | 0.5826 | 0.5770 |

The results of the two methods seem to be almost the same. My pretrained model is ResNet-101.
yanxp commented
@YONGHUICAI, hello.
1. The higher mAP you got with the 10% labeled samples is because we used Fast R-CNN with an AlexNet model; you can see the settings in our paper.
2. There may be variance when sampling randomly.
ABCAI93 commented
Hello,
- I found in the code (tools/trainval_net.py, around line 234) that when choosing between automatic pseudo-labeling and manual annotation, u = 1 selects manual annotation, while u != 1 (unless the target box is the background) applies automatic pseudo-labeling directly. The value of v computed there is never used, is it? (See the sketch after this list.)
- Additionally, the code does not seem to pick high-confidence samples before they enter the automatic pseudo-labeling part. Isn't that a bug in your code?
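For reference, here is a minimal Python sketch of the branching as I read it; the function and argument names are illustrative, not the repo's actual identifiers:

```python
# Hypothetical paraphrase of the branching in tools/trainval_net.py
# (around line 234) as described above -- names are illustrative.
def assign_annotation(u, v, is_background):
    """Route one sample to manual or automatic annotation.

    u, v          : the two per-sample flags computed in the code
    is_background : whether the target box is the background class
    """
    if u == 1:
        return "manual annotation"   # u == 1 -> human labels the sample
    if not is_background:
        return "pseudo annotation"   # u != 1 -> pseudo-label directly
    return "skip"                    # background boxes are not labeled
    # v is passed in but never consulted here -- this is the unused
    # value the question refers to.
```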
yanxp commented
hello,
In the code, u and v are just flags: when u = 1, the sample goes to manual annotation; when u != 1, it goes to pseudo annotation. Actually, we use the cross-entropy loss to choose the samples, e.g. np.log(s) > np.log(1-s); when s is a high confidence, the sample enters automatic pseudo-labeling.
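For concreteness, here is a minimal sketch of that criterion; the function name and the eps guard are illustrative, not from the repo:

```python
import numpy as np

def enters_pseudo_labeling(s, eps=1e-12):
    """Cross-entropy-style test: np.log(s) > np.log(1 - s).

    Since log is monotonic, this is equivalent to s > 1 - s,
    i.e. s > 0.5, so only high-confidence detections pass.
    """
    s = np.clip(s, eps, 1.0 - eps)  # guard against log(0)
    return np.log(s) > np.log(1.0 - s)

# Example: only the confident detection qualifies.
print(enters_pseudo_labeling(0.92))  # True
print(enters_pseudo_labeling(0.35))  # False
```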