allanj/pytorch_neural_crf

About GloVe result F1 91.36

Xzeffort opened this issue · 9 comments

Was the model trained using both the train and development sets?

[dev set Total] Prec.: 93.78, Rec.: 93.07, F1: 93.42
[test set Total] Prec.: 91.11, Rec.: 90.56, F1: 90.84
These are the numbers I got when reproducing the experiment.
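As a side note, the F1 in logs like the one above is the harmonic mean of entity-level precision and recall. A minimal, framework-free sketch of that computation (the TP/FP/FN counts below are made up for illustration, not taken from the repo):

```python
def prf1(tp, fp, fn):
    """Entity-level precision, recall, and F1 from true-positive,
    false-positive, and false-negative entity counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Illustrative counts: 90 correctly predicted entities,
# 10 spurious predictions, 10 missed gold entities.
p, r, f = prf1(tp=90, fp=10, fn=10)
print(f"Prec.: {p:.2%}, Rec.: {r:.2%}, F1: {f:.2%}")
```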

What do you mean?
I guess this is the performance on the CoNLL-2003 dataset.

We use only the training set to train, and evaluate on both the validation set and the test set.

Yes, this is the performance on the CoNLL-2003 dataset.

I mean it may be hard to reach F1 91.36 this way. In the paper "End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF", the reported F1 is 91.21,

so I am really curious about this result. Thank you for the reply.

For GloVe, I guess we can get around 90.9, largely following Lample's paper "Neural Architectures for Named Entity Recognition".

Yes, the final result is 90.94. Thank you. Do you have plans to implement LSTM-CNNs-CRF?

It doesn't seem very difficult to implement; I can probably work on it after the ACL deadline in a few days.
But let me know if you are in a hurry; we can look into it together so you can modify the code.
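For anyone curious what the shared CRF piece of these models involves, here is a minimal, framework-free sketch of Viterbi decoding, the inference step used by both BiLSTM-CRF and LSTM-CNNs-CRF. This is an illustrative sketch with toy scores, not the repo's actual implementation (which operates on PyTorch tensors):

```python
def viterbi_decode(emissions, transitions):
    """Find the highest-scoring tag sequence.

    emissions:   [T][K] list, per-timestep score for each of K tags.
    transitions: [K][K] list, score of moving from tag i to tag j.
    Returns the best tag index sequence of length T.
    """
    T, K = len(emissions), len(emissions[0])
    score = list(emissions[0])  # best score of any path ending in each tag
    backptrs = []
    for t in range(1, T):
        new_score, ptrs = [], []
        for j in range(K):
            # best previous tag to transition into tag j
            best_i = max(range(K), key=lambda i: score[i] + transitions[i][j])
            ptrs.append(best_i)
            new_score.append(score[best_i] + transitions[best_i][j] + emissions[t][j])
        score = new_score
        backptrs.append(ptrs)
    # follow back-pointers from the best final tag
    best = max(range(K), key=lambda j: score[j])
    path = [best]
    for ptrs in reversed(backptrs):
        best = ptrs[best]
        path.append(best)
    return path[::-1]

# Toy example with 2 tags: transitions reward staying in the same tag,
# so the middle step's emission preference for tag 1 is overruled.
print(viterbi_decode([[1, 0], [0, 1], [1, 0]], [[2, 0], [0, 2]]))
```

The transition matrix is what lets the CRF layer enforce tag-sequence constraints (e.g. I-PER cannot follow B-ORG in BIO tagging) that a plain softmax over per-token scores cannot.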

I thought it is quite common to use BERT-based models now.

No, it's not urgent, and indeed most models now are BERT-based. Thank you for the reply.