RUCAIBox/HaluEval

I added HaluEval to lm-eval-harness; could you please double-check it?


Here is my pull request: EleutherAI/lm-evaluation-harness#1076

Thanks!

Sorry for the late reply. We have double-checked the PR and see no problems with the implementation.

@Xiaoxue-xx thank you! 🙂 You can find the latest version of the tasks here: https://huggingface.co/spaces/hallucinations-leaderboard/leaderboard/tree/main/src/backend/tasks/halueval
We used it for our Hallucinations Leaderboard: https://huggingface.co/blog/leaderboards-on-the-hub-hallucinations

@Xiaoxue-xx, would you also be happy if we switched from an open-ended generation task to a multiple-choice task, where the model is only allowed to answer "yes" or "no"?
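For context, the multiple-choice variant could be scored by comparing the model's log-likelihood of each allowed answer instead of parsing free-form generations. A minimal sketch of that idea (the `loglikelihood` callable here is a hypothetical stand-in for a real model-scoring call, not the actual harness API):

```python
# Hedged sketch: treat HaluEval as binary multiple choice by scoring the
# two allowed continuations ("yes"/"no") and picking the higher-scoring one,
# rather than parsing an open-ended generation.

def pick_answer(loglikelihood, prompt, choices=("yes", "no")):
    """Return the choice whose continuation the model scores highest.

    `loglikelihood(prompt, continuation)` is assumed to return a float
    log-probability; higher means the model prefers that continuation.
    """
    scores = {c: loglikelihood(prompt, " " + c) for c in choices}
    return max(scores, key=scores.get)


# Toy stand-in model for demonstration only: prefers "no" when the
# prompt mentions a hallucination, "yes" otherwise.
def toy_loglikelihood(prompt, continuation):
    preferred = "no" if "hallucinated" in prompt else "yes"
    return -0.1 if continuation.strip() == preferred else -2.0


print(pick_answer(toy_loglikelihood, "Is the answer supported? It is hallucinated."))
print(pick_answer(toy_loglikelihood, "Is the answer supported? It matches the source."))
```

Because both candidate answers are scored directly, this avoids the answer-extraction failures that open-ended generation can have (e.g. "Yes, because…" vs. a bare "yes").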

Sure. Thank you for your attention to HaluEval!