yuantn/MI-AOD

comparative experiment

Closed this issue · 2 comments

qhemu commented

Dear author, thank you very much for your work.
I have some questions about the comparative experiment.
How can we ensure that the random, entropy, core-set, and CDAL algorithms all have the same mAP score in the initial round, as shown in Figure 5 of the paper?

When I ran my own experiments, I found that even with the same initial labeled set, the initial-round results still differed. I suspect this is related to the model's parameter initialization, the random sampling of batches, and so on, which can lead the model to learn different weights and biases during training. Could you please advise me on how to address this issue?

yuantn commented

Thanks for your attention to our work.

The remaining differences, even with the same random seed and the same initial labeled set, are likely caused by the model selecting different convolution algorithms between runs. This can be fixed by appending `--deterministic` to the training command.

When `--deterministic` is set, the network no longer benchmarks to find the fastest convolution algorithm (`cudnn.benchmark = False`) and instead uses the default deterministic one (`cudnn.deterministic = True`). This improves the reproducibility of the results, but may slow down training.
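For reference, here is a minimal sketch of what such a deterministic setup typically looks like in PyTorch. It is not copied from MI-AOD's training script; the function name `set_deterministic` and the seed value are illustrative, but the `torch`, `numpy`, and `random` calls are standard:

```python
# Sketch: seed all RNGs and force cuDNN into deterministic mode,
# mirroring the effect of a --deterministic training flag.
import random

import numpy as np
import torch


def set_deterministic(seed: int = 42) -> None:
    """Seed every RNG involved in training and disable cuDNN autotuning."""
    random.seed(seed)                 # Python's built-in RNG (e.g. sample selection)
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # PyTorch RNGs on all GPUs

    # Use the default deterministic convolution algorithm instead of
    # benchmarking for the fastest one; more reproducible, possibly slower.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_deterministic(42)
```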

qhemu commented

Thank you very much for your reply. It was very helpful, and I will follow your suggestion to redo the comparative experiments.