microsoft/LoRA

Question about the test set of the GLUE benchmark

James6Chou opened this issue · 1 comment

While reading the /examples/NLU/examples/text-classification/run_glue.py file, I noticed that the script only evaluates on the GLUE validation set and never measures accuracy on the test set. Would it be better to take the checkpoint that performs best on the validation set and report its accuracy on the test set?
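For context, here is a minimal sketch (not the repository's exact code) of how run_glue.py-style scripts in Hugging Face Transformers typically select the split used for evaluation; the task name and print statement are illustrative:

```python
# Sketch of evaluation-split selection in a run_glue.py-style script.
# "validation_matched" is MNLI's name for its validation split; all other
# GLUE tasks use "validation".
from datasets import load_dataset

task_name = "mnli"  # hypothetical example task
raw_datasets = load_dataset("glue", task_name)

eval_split = "validation_matched" if task_name == "mnli" else "validation"
eval_dataset = raw_datasets[eval_split]
print(f"Evaluating on the '{eval_split}' split ({len(eval_dataset)} examples)")
```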

It's using exactly the evaluation set.
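In other words, for GLUE the validation split is what serves as the evaluation set: the test-set labels are withheld and can only be scored through the official leaderboard, so results are commonly reported on the validation split. A quick self-contained check, assuming the Hugging Face datasets library:

```python
# Demonstrates that GLUE test labels are hidden: the test split's labels
# are all -1 in the datasets release, so it cannot be scored locally.
from datasets import load_dataset

raw_datasets = load_dataset("glue", "sst2")
print(raw_datasets["validation"][0]["label"])  # a real label, e.g. 0 or 1
print(raw_datasets["test"][0]["label"])        # -1: label withheld
```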