EleutherAI/gpt-neox

Support for lm_eval 0.4.0

ZhiYuanZeng opened this issue · 4 comments

I hope gpt-neox will support lm-eval==0.4.0. The current gpt-neox evaluates models on benchmarks with lm-eval==0.3.0, which does not support many important datasets, for example MMLU.
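For reference, here is a minimal sketch of what 0.4.0 makes possible on its own through its Python API (the checkpoint name is just an example):

```python
# Sketch: run MMLU via lm-eval 0.4.0's Python API.
# Assumes `pip install lm_eval==0.4.0`; the checkpoint is only an example.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",  # Hugging Face backend
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["mmlu"],  # task group available in 0.4.0, absent from 0.3.0
    num_fewshot=5,
    batch_size=8,
)
print(results["results"])
```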

@haileyschoelkopf I thought you had fixed this?

I thought so too. Either way, I'll make a PR to bump the dependency now that we've put 0.4.0 on PyPI.
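In the meantime, a quick check in your own setup can catch the mismatch early; this is just a sketch, assuming the package is installed under the distribution name `lm_eval` and that `packaging` is available:

```python
# Sketch: fail fast if the installed lm_eval predates 0.4.0,
# since 0.3.x lacks tasks such as mmlu.
# Requires the `packaging` package for version comparison.
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("lm_eval"))
assert installed >= Version("0.4.0"), (
    f"found lm_eval {installed}; mmlu needs lm_eval>=0.4.0"
)
```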

Perhaps it's because I'm still using an old version of gpt-neox. I'll check the differences with the latest version. Thanks for your help! By the way, could you tell me the commit ID where this issue was fixed?