Benchmark performance with BERT?
yuchenlin opened this issue · 4 comments
yuchenlin commented
Hi Allan,
Thanks for the great repo. I was wondering if you have results for "This Implementation + BERT", similar to the numbers reported in https://github.com/allanj/pytorch_lstmcrf/blob/master/docs/benchmark.md ?
yuchenlin commented
And is there a plan to incorporate Flair embeddings, just like ELMo? https://github.com/flairNLP/flair
allanj commented
Yes, both are on the roadmap. I tried BERT (without fine-tuning) before, but the performance was a bit lower than with ELMo. (I was using huggingface's code, version 0.6.0.)
Seems like I have to push myself harder to incorporate the `transformers` package into the repo. :P
I will probably do Flair first.
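For the "BERT without fine-tuning" setup mentioned above, the frozen BERT features have to be pooled back from subword tokens to one vector per word before they can feed a word-level LSTM-CRF. The repo's actual implementation is not shown here; this is a hypothetical sketch of that pooling step using toy vectors in place of BERT's hidden states, with a made-up helper name `first_subword_pool`:

```python
def first_subword_pool(subword_vecs, word_ids):
    """Keep one vector per original word: the first subword's vector.

    subword_vecs: list of vectors, one per subword token
    word_ids: for each subword, the index of the word it came from
              (None for special tokens like [CLS] / [SEP])
    """
    seen = set()
    pooled = []
    for vec, wid in zip(subword_vecs, word_ids):
        # Skip special tokens and any subword after the first of a word.
        if wid is not None and wid not in seen:
            seen.add(wid)
            pooled.append(vec)
    return pooled

# Toy example: "New York" tokenized as ["[CLS]", "New", "Yo", "##rk", "[SEP]"],
# with 1-dim stand-ins for BERT's 768-dim hidden vectors.
vecs = [[0.0], [1.0], [2.0], [3.0], [4.0]]
ids  = [None,  0,     1,     1,     None]
print(first_subword_pool(vecs, ids))  # [[1.0], [2.0]]
```

Other pooling choices (averaging a word's subwords, or taking the last subword) are common alternatives; since the embeddings are frozen, the choice is a fixed preprocessing decision rather than something learned.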
yuchenlin commented
you are the best :)