Support for lm_eval 0.4.0
ZhiYuanZeng opened this issue · 4 comments
ZhiYuanZeng commented
I hope gpt-neox will support lm-eval==0.4.0. The current gpt-neox evaluates models on benchmarks with lm-eval==0.3.0, which does not support many important datasets, for example MMLU.
StellaAthena commented
@haileyschoelkopf I thought you had fixed this?
haileyschoelkopf commented
I thought so too. In any case, I'll make a PR to bump the dependency now that we've put 0.4.0 on PyPI.
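For anyone wanting to try this before the PR lands, the bump amounts to changing the pinned version in gpt-neox's requirements file and reinstalling; the exact file path below is an assumption, not confirmed by this thread:

```
# requirements file in the gpt-neox repo (path assumed, e.g. requirements/requirements.txt)
# old pin:
# lm-eval==0.3.0
# new pin, enabling tasks added in 0.4.0 such as mmlu:
lm-eval==0.4.0
```

After editing the pin, `pip install -r` the file again so the new harness version is picked up.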
ZhiYuanZeng commented
Perhaps it's because I'm still using an old version of gpt-neox. I'll check the differences against the latest version. Thanks for your help! By the way, could you tell me the commit ID where this issue was fixed?
StellaAthena commented
@ZhiYuanZeng It's here