Hi, where can I find the supplementary materials PDF?
Thanks.
Another question: the default setting of `eval_part` for `run_all.py` is `test`. Does this mean that you use the test set to evaluate the checkpoint during model training? That seems quite weird.
Best wishes
I want to know how to reproduce the results in your paper. Should I run `run_all.py` with `eval_part=test`, and then run `run_eval.py` on the val dataset with the `best_model.cpt` checkpoint?
Thanks.
Hi! Sorry for the late response.
The model does not use the val/test data during training, of course; it only uses them for evaluation. Setting the flag `--eval_part` to `test` can be useful for monitoring training progress in terms of the main metric. But if you want to select the best checkpoint, it should of course be set to `val`, and then the best checkpoint should be evaluated with the `run_eval.py` script. This matters more in the FN task, where the F-metric is noisy and intermediate epochs may produce slightly better checkpoints; in the VM task the last checkpoint is usually the best, if I remember correctly. We set `--eval_part` to `test` by default in the scripts simply because that is the main metric.
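To make the two-step workflow concrete, here is a minimal sketch. `run_all.py`, `run_eval.py`, `--eval_part`, and `best_model.cpt` come from the discussion above, but the `--checkpoint` argument for `run_eval.py` is a hypothetical placeholder, so check the script's actual arguments:

```bash
# Step 1: train while monitoring the validation split, so the best
# checkpoint is selected on val rather than test.
python run_all.py --eval_part val

# Step 2: evaluate the selected checkpoint with run_eval.py.
# NOTE: the --checkpoint flag is assumed for illustration only;
# see run_eval.py for the real argument name.
python run_eval.py --checkpoint best_model.cpt
```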
As for the supplementary material, you can find it in the arXiv version of the paper: https://arxiv.org/abs/2010.07987.
Again, sorry for the slow response.