Text classification examples - Tokenizer is defined twice
obesp opened this issue · 1 comment
obesp commented
The tokenizer is defined in both the model and the dataset in the BERT text classification examples.
multi_class.py, line 50:
self.tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
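For context, an equivalent tokenizer is already constructed on the dataset side of the same example. A minimal sketch of the duplication, assuming the dataset's class name and fields (not the repo's exact code):

```python
import transformers


class BERTDataset:
    def __init__(self, texts, targets):
        self.texts = texts
        self.targets = targets
        # The same tokenizer is already built here, which makes the
        # copy inside the model redundant.
        self.tokenizer = transformers.BertTokenizer.from_pretrained(
            "bert-base-uncased", do_lower_case=True
        )
```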
abhishekkrthakur commented
Indeed it is. It's not needed in the model; seems like a copy-paste error. ;) I will fix it.
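A minimal sketch of what the fix could look like, assuming the example's overall structure (class names, fields, and hyperparameters here are illustrative, not the repo's exact code): the dataset keeps the single tokenizer, and the model only consumes already-tokenized tensors.

```python
import torch
import torch.nn as nn
import transformers


class BERTDataset:
    """Owns the tokenizer: the dataset is the component that turns raw
    text into model inputs, so the single definition belongs here."""

    def __init__(self, texts, targets, max_len=128):
        self.texts = texts
        self.targets = targets
        self.max_len = max_len
        self.tokenizer = transformers.BertTokenizer.from_pretrained(
            "bert-base-uncased", do_lower_case=True
        )

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        enc = self.tokenizer.encode_plus(
            self.texts[idx],
            max_length=self.max_len,
            padding="max_length",
            truncation=True,
            return_tensors="pt",
        )
        return {
            "input_ids": enc["input_ids"].squeeze(0),
            "attention_mask": enc["attention_mask"].squeeze(0),
            "target": torch.tensor(self.targets[idx], dtype=torch.long),
        }


class BERTBaseUncased(nn.Module):
    """No tokenizer here anymore: the model only sees the tensors the
    dataset already produced."""

    def __init__(self, num_classes):
        super().__init__()
        self.bert = transformers.BertModel.from_pretrained("bert-base-uncased")
        self.dropout = nn.Dropout(0.3)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.classifier(self.dropout(out.pooler_output))
```

Keeping the tokenizer only in the dataset also avoids loading the vocabulary twice and keeps tokenization settings in one place.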