naver/sqlova

TypeError: __init__() missing 2 required positional arguments: 'is_training' and 'input_ids'

Closed this issue · 5 comments

While running train.py, in the get_bert function, at the line

model_bert = BertModel(bert_config)

I am facing this issue:

TypeError: __init__() missing 2 required positional arguments: 'is_training' and 'input_ids'

I solved it by using a different BERT file.


Would you please share more details about this? How do I use a different BERT file?


I do not remember now which file I used, so I will try to help from memory.
In the last section of the README, they describe that "bert/convert_tf_checkpoint_to_pytorch.py is from the previous version of huggingface-pytorch-pretrained-BERT, and current version of pytorch-pretrained-BERT is not compatible with the bert model used in this repo due to the difference in variable names (in LayerNorm). See this for the detail."

So maybe I did not use the same BERT file they are using. I think they used "uncased_L-12_H-768_A-12"; I must have used some other model.
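For what it's worth, this exact error usually means the wrong BertModel class was imported. The TensorFlow-style BertModel (from google-research/bert's modeling.py) builds the graph in its constructor and therefore requires is_training and input_ids up front, while the PyTorch BertModel bundled with this repo takes only the config and receives inputs in forward(). A minimal sketch (hypothetical stub classes, not the real implementations) that reproduces the mismatch:

```python
# Hypothetical stubs illustrating the two constructor signatures.

class TFStyleBertModel:
    # TF-style: graph is built at construction time, so inputs are
    # required positional arguments of __init__.
    def __init__(self, config, is_training, input_ids):
        self.config = config


class PyTorchStyleBertModel:
    # PyTorch-style (as in the old pytorch-pretrained-BERT used here):
    # only the config is needed to construct the module.
    def __init__(self, config):
        self.config = config

    def forward(self, input_ids, token_type_ids=None, attention_mask=None):
        pass  # inputs arrive here, not in __init__


bert_config = {"hidden_size": 768}

# Calling the TF-style class the way train.py calls BertModel raises
# the exact TypeError from this issue:
try:
    TFStyleBertModel(bert_config)
except TypeError as e:
    print(e)  # __init__() missing 2 required positional arguments: ...

# The PyTorch-style class works with the config alone:
model_bert = PyTorchStyleBertModel(bert_config)
```

So if you hit this, check that the bert package on your path is the one shipped with sqlova, not a TensorFlow BERT checkout.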


Thank you so much for your prompt help! I will see if this solves my problem.


Hi, did you resolve this issue? I'm working on it and facing the same issue.