zharry29/wikihow-goal-step

GPT-2 model error

WangRuoke opened this issue · 1 comments

When I tried to evaluate your pretrained models, all GPT-2 models (goal_benchmark_gpt, step_benchmark_gpt, order_benchmark_gpt) raised the same error:
Traceback (most recent call last):
  File "./transformers/examples/multiple-choice/run_multiple_choice.py", line 254, in <module>
    main()
  File "./transformers/examples/multiple-choice/run_multiple_choice.py", line 200, in main
    if training_args.do_eval
  File "/users4/qkshi/rkwang/eri/WikiHow/transformers/examples/multiple-choice/utils_multiple_choice.py", line 132, in __init__
    pad_token_segment_id=tokenizer.pad_token_type_id,
  File "/users4/qkshi/rkwang/eri/WikiHow/transformers/examples/multiple-choice/utils_multiple_choice.py", line 543, in convert_examples_to_features
    return_overflowing_tokens=True,
  File "/users4/qkshi/anaconda2/envs/wikihow/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 1571, in encode_plus
    "Unable to set proper padding strategy as the tokenizer does not have a padding token. "
ValueError: Unable to set proper padding strategy as the tokenizer does not have a padding token. In this case please set the pad_token (tokenizer.pad_token = tokenizer.eos_token e.g.) or add a new pad token via the function add_special_tokens if you want to use a padding strategy

Hi,
Please make sure that you are using the correct version of Hugging Face transformers and our modified run_multiple_choice.py (which adds support for GPT-2). Our Colab notebook here is a working demo of how to use the models, and it also specifies details such as the library version. Please let us know if you have more questions!