failures when evaluating the model
YuanzeSun opened this issue · 2 comments
Hi, I ran into a failure when evaluating the model.
In run_lm_eval.py:

```python
# LM Eval Harness
hflm = HFLM(pretrained=model_adapter.model, tokenizer=tokenizer, batch_size=args.batch_size)
```
The parameter `pretrained` here is a model object, but `HFLM.__init__` asserts that it must be a `str`. The failure is shown below:
```
  File "/data2/slicegpt/experiments/run_lm_eval.py", line 180, in <module>
    eval_main(eval_args)
  File "/data2/slicegpt/experiments/run_lm_eval.py", line 145, in eval_main
    hflm = HFLM(pretrained=model_adapter.model, tokenizer=tokenizer, batch_size=args.batch_size)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data2/lm-evaluation-harness-0.4.0/lm_eval/models/huggingface.py", line 103, in __init__
    assert isinstance(pretrained, str)
AssertionError
```
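The failing check can be reproduced in isolation. Below is a minimal sketch; `check_pretrained` and `DummyModel` are hypothetical stand-ins for the assert in `HFLM.__init__` and for `model_adapter.model`, not actual harness code:

```python
def check_pretrained(pretrained):
    # Stand-in for the type check in lm_eval 0.4.0's HFLM.__init__,
    # which requires `pretrained` to be a model-name string.
    assert isinstance(pretrained, str), "pretrained must be a str in this harness version"

class DummyModel:
    """Hypothetical stand-in for model_adapter.model."""
    pass

check_pretrained("facebook/opt-125m")  # a string model name passes the check

try:
    check_pretrained(DummyModel())  # a model object trips the assert
except AssertionError as exc:
    print(f"AssertionError: {exc}")
```

This shows why passing `model_adapter.model` directly fails on a harness version whose `HFLM` only accepts a string.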
This would be an issue with the lm-eval-harness package - could you please check that you're using the version pinned in our .toml file?
Assuming this was the issue, feel free to re-open if not.
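One way to confirm which harness version is actually installed is to query the package metadata and compare it against the version pinned in the .toml file. A minimal sketch, assuming the distribution is named `lm_eval` in your environment (the `installed_version` helper is hypothetical):

```python
import importlib.metadata
from typing import Optional

def installed_version(package: str) -> Optional[str]:
    """Return the installed version of `package`, or None if it is not installed."""
    try:
        return importlib.metadata.version(package)
    except importlib.metadata.PackageNotFoundError:
        return None

# Compare this output against the version pinned in the repo's .toml file.
print(installed_version("lm_eval") or "lm_eval not installed")
```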