Details about the token-level labeling task
Thank you for your excellent work on the GEC task. I saw you mentioned the token-level labeling task, but I didn't find it in this code. Could you give me more details about how to combine the token-level labeling task with the present work? Thank you again for your amazing work.
Yes, the code for the token-level task hasn't been merged yet.
It needs alignment information between the source and target sentences, and we use "fast_align" for that. For more details, please refer to Figure 1 and Section 4.1 of the paper.
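For anyone who wants to try this before the code is merged, here is a minimal sketch of how fast_align's Pharaoh-format output (`i-j` pairs of source/target token indices, produced from input lines formatted as `source ||| target`) could be turned into per-token labels. The binary KEEP/CHANGE scheme and the function name are illustrative assumptions on my part, not necessarily the paper's exact label set:

```python
# Minimal sketch (not the authors' released code): derive per-token labels
# for the source sentence from a fast_align alignment line.

def token_labels(source, target, alignment):
    """Label each source token KEEP if it aligns to an identical target
    token, otherwise CHANGE (covers substitutions and deletions)."""
    src_toks = source.split()
    tgt_toks = target.split()
    # Map each source index to the set of target indices it aligns to.
    aligned = {i: set() for i in range(len(src_toks))}
    for pair in alignment.split():
        i, j = map(int, pair.split("-"))
        aligned[i].add(j)
    labels = []
    for i, tok in enumerate(src_toks):
        if any(tgt_toks[j] == tok for j in aligned[i]):
            labels.append("KEEP")
        else:
            labels.append("CHANGE")  # unaligned, or aligned to a different token
    return labels

print(token_labels("He go to school .",
                   "He goes to school .",
                   "0-0 1-1 2-2 3-3 4-4"))
# -> ['KEEP', 'CHANGE', 'KEEP', 'KEEP', 'KEEP']
```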
Thank you for your reply. Do you have any plans to release the detailed experimental code for this part? Is the token-level task trained together with the seq2seq task? If so, how do you choose the weights between the two tasks? Or did you first train a token-level model and then a seq2seq model?
Maybe later.
It's trained together.
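To make the joint setup concrete, here is a minimal sketch of how the two losses might be combined, assuming a standard multi-task formulation in PyTorch where a per-token classification head shares the encoder with the seq2seq decoder. The weighting hyperparameter `lambda_label` and the tensor layout are illustrative assumptions; the thread doesn't state the actual weight used:

```python
import torch.nn.functional as F

# Hypothetical task weight for the auxiliary labeling loss (not from the paper).
lambda_label = 0.5

def joint_loss(seq2seq_logits, target_ids, label_logits, token_labels, pad_id=1):
    # Standard cross-entropy over the decoder's output vocabulary.
    s2s = F.cross_entropy(
        seq2seq_logits.view(-1, seq2seq_logits.size(-1)),
        target_ids.view(-1),
        ignore_index=pad_id,
    )
    # Cross-entropy over the token-level labels (e.g. KEEP/CHANGE)
    # predicted from the encoder states for each source token.
    lab = F.cross_entropy(
        label_logits.view(-1, label_logits.size(-1)),
        token_labels.view(-1),
        ignore_index=-100,  # ignore padding positions in the label sequence
    )
    return s2s + lambda_label * lab
```

In this formulation the weight simply scales the auxiliary gradient, so a value that keeps the two loss magnitudes comparable is a common starting point when tuning.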