Why not evaluate the model with the entire validation set during training?
Closed this issue · 3 comments
Hi Okan,
Thanks for your great work!
I'm currently trying to reproduce the results reported in your paper. I noticed that your code has a training option `--n_val_samples`, which defines the number of samples per category used during validation. This is a bit confusing, because usually we would evaluate the model on the whole validation set. Could you let me know the reasoning behind this option? It would also be very helpful if your code could handle the case where all validation samples are covered during training.
Hi Yushian, yes, you are right. It would be better to validate on the whole validation set instead of using only one clip from each video. However, validating on the complete set would take too long (even longer than one training epoch), and most of the time validating with clip accuracy gives the same ranking as validating with video accuracy.
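For anyone else landing here, the clip-vs-video distinction above can be sketched as follows. This is a minimal, hypothetical illustration (not code from this repo): clip accuracy scores every sampled clip independently, while video accuracy first aggregates each video's clip predictions (here by majority vote; averaging softmax scores is another common choice) and then scores one prediction per video.

```python
from collections import defaultdict

def clip_accuracy(clip_preds, clip_labels):
    """Fraction of individual clips classified correctly."""
    correct = sum(p == y for p, y in zip(clip_preds, clip_labels))
    return correct / len(clip_labels)

def video_accuracy(clip_preds, clip_labels, video_ids):
    """Majority-vote each video's clip predictions, then score per video."""
    votes = defaultdict(list)   # video id -> list of clip predictions
    labels = {}                 # video id -> ground-truth label
    for p, y, v in zip(clip_preds, clip_labels, video_ids):
        votes[v].append(p)
        labels[v] = y
    correct = sum(
        max(set(ps), key=ps.count) == labels[v]  # majority vote
        for v, ps in votes.items()
    )
    return correct / len(votes)

# Toy example: two videos with three clips each.
video_ids   = ["a", "a", "a", "b", "b", "b"]
clip_labels = [1, 1, 1, 0, 0, 0]
clip_preds  = [1, 1, 0, 0, 0, 1]   # two clips misclassified

print(clip_accuracy(clip_preds, clip_labels))              # 4/6 ≈ 0.667
print(video_accuracy(clip_preds, clip_labels, video_ids))  # 1.0
```

The toy example shows why validating on one clip per video is usually a good proxy: occasional clip-level mistakes often get outvoted at the video level, so the two metrics tend to agree on which checkpoint is best.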
@okankop
Thanks for your reply. It makes more sense now.
I believe the issue is resolved.