JosephKJ/OWOD

Do iOD experiments use a validation set?


Hi, this is a very interesting paper! I tried to run the 19_p_1 experiment and was a little confused about the training scheme.

I followed the training sequence you mentioned (base_19 -> next_1_train_with_ud -> ft_with_unk), but the result is 63.1% mAP, which is much lower than the result reported in the paper. I checked the hyperparameters in the YAML files and found that ENABLE_CLUSTERING and COMPUTE_ENERGY are set to False in all three YAML files. Do I need to change them to run the ORE method?
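For concreteness, this is the kind of toggle I mean. This is only a hedged sketch: the exact nesting of these keys (e.g. whether they sit under a top-level OWOD block) should follow the repo's own config files, not this excerpt.

```yaml
# Hypothetical excerpt of one of the task YAML files.
# Key nesting is an assumption; only the flag names come from the configs.
OWOD:
  ENABLE_CLUSTERING: True   # per the flag name, enables the clustering objective
  COMPUTE_ENERGY: True      # per the flag name, enables energy computation
```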

I also read the YAML files for the COCO experiments. It seems that, after training, you use a validation set to continue training the model in order to estimate the energy distribution. For that validation set, all annotations are provided to generate the known and unknown labels. This doesn't satisfy the incremental learning setting, since annotations for old and future classes should not be available. Could you please clarify how you handle this in your experiments?

Also, in the iOD experiments I cannot find a YAML file for validation-set training. Do you do validation-set training in the iOD experiments, or is it only for the COCO experiments?

Thanks!

Hi @QYingLi, we don't need validation data in the iOD setting. Are you training on an 8-GPU machine? If not, did you change the LR accordingly? Please read this discussion if it helps: #37 (comment)
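For anyone else hitting this, a minimal sketch of the linear-scaling adjustment being suggested, assuming Detectron2-style SOLVER keys and that the reference configs were tuned for an 8-GPU batch size; the concrete numbers below are illustrative, not the repo's values.

```yaml
# Hypothetical solver override for a smaller machine.
# Linear scaling rule: if you halve IMS_PER_BATCH, halve BASE_LR
# (and consider scaling MAX_ITER / STEPS up proportionally).
SOLVER:
  IMS_PER_BATCH: 4    # e.g. half of an assumed 8-image reference batch
  BASE_LR: 0.005      # half of an assumed 0.01 reference learning rate
```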