nshepperd/gpt-2

About Perplexity

curly0613 opened this issue · 0 comments

Hi, I'm D. Y. Kim, an NLP developer in Korea.
First of all, thank you so much for your project.
It helped me a lot in building a Korean GPT-2 model.

I have one question about metrics, specifically perplexity.
In OpenAI's paper, they use perplexity to evaluate their model,
but I can't find a perplexity calculation in your code.
Your code computes two quantities, v_loss and avg_loss.
I'm guessing that avg_loss, or v_val_loss (in validation), serves as an alternative metric.
Is that right?

If not, is there any method to calculate perplexity?
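For context, perplexity is just the exponential of the mean per-token cross-entropy, so if avg_loss (or v_val_loss) is an average cross-entropy measured in nats — which is my assumption here, not something I've verified in this repo — it can be converted directly. A minimal sketch:

```python
import math

def perplexity(mean_cross_entropy_nats):
    """Convert a mean per-token cross-entropy (in nats) to perplexity.

    Perplexity = exp(H), where H is the average negative log-likelihood
    per token. A loss of 0 gives perplexity 1 (perfect prediction).
    """
    return math.exp(mean_cross_entropy_nats)

# Example: a validation loss of ~3.0 nats corresponds to
# a perplexity of about e^3 ≈ 20.
print(perplexity(3.0))
```

Note that if the loss were reported in bits (log base 2) instead of nats, the conversion would be `2 ** loss` rather than `exp(loss)`.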