Perplexity metric does not apply batching correctly to tokenization
ChengSashankh opened this issue · 1 comment
ChengSashankh commented
When I try to evaluate my model's text generations with the perplexity metric, the batch_size parameter in perplexity._compute(...) is not enough to keep memory under control, because the metric tokenizes the entire set of predictions and moves it to the GPU in one go. A simple change that moves tokenization inside the per-batch loop fixes the issue for me (a sketch of the idea is below).
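To illustrate, here is a minimal sketch of per-batch tokenization, not the actual patch or the metric's real internals: the function name `batched_perplexity`, its parameters, and the loss computation are assumptions chosen for clarity.

```python
import torch


def batched_perplexity(predictions, model, tokenizer, batch_size=16, device="cuda"):
    """Mean perplexity over `predictions`, tokenizing one batch at a time."""
    model = model.to(device)
    model.eval()
    loss_fct = torch.nn.CrossEntropyLoss(reduction="none")
    ppls = []

    for start in range(0, len(predictions), batch_size):
        batch = predictions[start:start + batch_size]
        # Tokenize only this batch and move only it to the device, so GPU
        # memory scales with batch_size rather than with the whole dataset.
        enc = tokenizer(batch, padding=True, truncation=True, return_tensors="pt").to(device)
        input_ids = enc["input_ids"]
        attn_mask = enc["attention_mask"]

        with torch.no_grad():
            logits = model(input_ids, attention_mask=attn_mask).logits

        # Each position predicts the next token, so shift logits/labels by one.
        shift_logits = logits[..., :-1, :].contiguous()
        shift_labels = input_ids[..., 1:].contiguous()
        shift_mask = attn_mask[..., 1:].contiguous()

        loss = loss_fct(shift_logits.transpose(1, 2), shift_labels) * shift_mask
        per_example_nll = loss.sum(dim=1) / shift_mask.sum(dim=1)
        ppls += torch.exp(per_example_nll).tolist()

    return {"perplexities": ppls, "mean_perplexity": sum(ppls) / len(ppls)}
```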
It should also be possible to pass my own model and tokenizer objects to the metric, since my model cannot be published on huggingface. I have made these changes to enable my experiments; a usage sketch follows.
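For example, with the sketch above, a locally stored checkpoint could be passed in directly (the path below is a placeholder, and setting `pad_token` is only needed for GPT-style tokenizers that have no pad token by default):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: the model never has to be pushed to the Hub.
model = AutoModelForCausalLM.from_pretrained("/path/to/local/checkpoint")
tokenizer = AutoTokenizer.from_pretrained("/path/to/local/checkpoint")
tokenizer.pad_token = tokenizer.eos_token

results = batched_perplexity(
    predictions=["The quick brown fox jumps over the lazy dog."],
    model=model,
    tokenizer=tokenizer,
    batch_size=8,
)
print(results["mean_perplexity"])
```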
I can open a PR to contribute these changes if that sounds good to you. I believe they will benefit the developer community.
abhibambhaniya commented
I am also facing the same issue.