yuantn/MI-AOD

Comparison with other baseline methods (random, entropy)

Closed this issue · 4 comments


Dear Author,

I want to generate some results for baseline methods, such as random and entropy. I saw your following response in issue #43:
Hi,

The code for the two baselines (random and entropy) is easy to implement. You only need to modify the function calculate_uncertainty in mmdet/apis/test.py.

For 2 other approaches (Core-set and CDAL), please refer to here (Core-set) and here (CDAL).

It should be noted that all other methods do not use the two adversarial classifiers and the MIL classifier.

Hope this is useful for you :)

My query regarding the issue:

I have a query regarding the modification of the function calculate_uncertainty (https://github.com/yuantn/MI-AOD/blob/master/mmdet/apis/test.py#L15) in mmdet/apis/test.py.

Currently, MI-AOD estimates uncertainty as:

    # Squared discrepancy between the two adversarial classifier heads, per anchor
    loss_l2_p = (y_head_f_1 - y_head_f_2).pow(2)
    # Average over classes to get one score per anchor
    uncertainty_all_N = loss_l2_p.mean(dim=1)
    # Keep the k anchors with the largest discrepancy and average them
    arg = uncertainty_all_N.argsort()
    uncertainty_single = uncertainty_all_N[arg[-cfg.k:]].mean()
    uncertainty[i] = uncertainty_single

Can you suggest a way to modify this function? Will using uncertainty_all_N without uncertainty_all_N.argsort() work for the random baseline method?

If so, won't all the minimization and maximization of uncertainty be the same as in MI-AOD?

It would be great if you could suggest a proper modification for the entropy-based method as well.

Thank you for your time and consideration.

For random sampling, you can use torch.rand() to create a random uncertainty tensor.
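For example, a minimal sketch of a random-sampling variant (the signature mirrors calculate_uncertainty in mmdet/apis/test.py, but treat the exact arguments as an assumption and adapt them to the actual function):

    import torch

    def calculate_uncertainty(cfg, model, data_loader, return_box=False):
        # Random sampling ignores the model entirely: every unlabeled image
        # gets an i.i.d. uniform score, so the subsequent top-k image
        # selection reduces to a uniformly random draw.
        return torch.rand(len(data_loader.dataset))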

For entropy sampling, you can calculate the information entropy based on the confidence score output by the model, and use it as the uncertainty.
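As one possible sketch (the helper name entropy_uncertainty is illustrative, and the aggregation over anchors, a plain mean here, is a design choice rather than a fixed part of the repository):

    import torch

    def entropy_uncertainty(scores, eps=1e-10):
        # scores: per-image confidence scores, e.g. of shape
        # (num_anchors, num_classes), assumed non-negative.
        # Normalize each row into a distribution, take its Shannon entropy,
        # and aggregate over anchors (max or a top-k mean would also work).
        probs = scores / (scores.sum(dim=-1, keepdim=True) + eps)
        entropy = -(probs * (probs + eps).log()).sum(dim=-1)
        return entropy.mean()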

Dear Author,

For the entropy sampling based method, it would be great if you could give some insight into the confidence score predicted by the model. The output of the model in test.py is:

    y_head_f_1, y_head_f_2, y_head_cls = model(return_loss=False, rescale=True, return_box=return_box, **data)

  1. Can we use loss_l2_p, estimated as below, in place of the confidence score?
    loss_l2_p = (y_head_f_1 - y_head_f_2).pow(2)
    Then, estimate the entropy-based uncertainty from the confidence score as:
    uncertainty_all_N = Categorical(probs=loss_l2_p).entropy()

  2. Or should y_head_cls be used as the confidence score of the model, estimating the entropy from this prediction instead?
    uncertainty_all_N = Categorical(probs=y_head_cls).entropy()

Should we follow step 1 or step 2 for entropy-based sampling?

Thank you for your time and consideration.

yuantn commented

The confidence score is y_head_cls, which has been described sufficiently and clearly in the paper and code.
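Putting the two replies together, a hedged sketch of step 2 inside calculate_uncertainty could look like this (the top-k aggregation is copied from the existing MI-AOD code above; the small eps term is an assumption to guard against zero scores):

    from torch.distributions import Categorical

    # Per-anchor entropy of the confidence scores y_head_cls; Categorical
    # normalizes probs along the last dimension before computing entropy.
    uncertainty_all_N = Categorical(probs=y_head_cls + 1e-10).entropy()
    # Same top-k aggregation as the original MI-AOD uncertainty.
    arg = uncertainty_all_N.argsort()
    uncertainty_single = uncertainty_all_N[arg[-cfg.k:]].mean()
    uncertainty[i] = uncertainty_single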