thunlp/ERNIE

Using finetuning scripts with BERT

dungtn opened this issue · 1 comment

Hi 👋,

Thank you for the great work!

I'm trying to replicate the BERT baseline for the downstream tasks. Is it possible to load a BERT pre-trained model instead of the ERNIE pre-trained model in the fine-tuning code? If not, could you provide some pointers to the code you used for the baseline?

I pointed --ernie-model to the BERT pre-trained model, but I got this error:

Traceback (most recent call last):
  File "code/run_typing.py", line 573, in <module>
    main()
  File "code/run_typing.py", line 511, in main
    train_examples, label_list, args.max_seq_length, tokenizer_label, tokenizer, args.threshold)
  File "code/run_typing.py", line 168, in convert_examples_to_features
    tokens_a, entities_a = tokenizer_label.tokenize(ex_text_a, [h])
AttributeError: 'NoneType' object has no attribute 'tokenize'

Please let me know if I'm missing something here.

Thank you!
June

zzy14 commented

Yes, you can change the config file to load the BERT pre-trained model. Specifically, you need to set all layer types to "sim".
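
For reference, a minimal sketch of that config change, assuming the checkpoint directory contains a JSON config with a "layer_types" list (the file name bert_base/ernie_config.json and the field name "layer_types" are assumptions here; adjust them to match your checkpoint):

import json

# Assumed path; point this at the config shipped with your checkpoint.
config_path = "bert_base/ernie_config.json"

with open(config_path) as f:
    config = json.load(f)

# Per the comment above, set every layer type to "sim" so the model
# behaves like plain BERT and no knowledge-fusion layers are expected.
config["layer_types"] = ["sim"] * len(config["layer_types"])

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

After editing the config this way, the fine-tuning script should run as a BERT baseline without looking for the entity-side inputs.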