lancopku/SRB

Dealing with Start and Stop Tokens


Hi,
It is difficult to understand how you have dealt with the start and stop tokens. I see that you append the stop token (2) to decoder_output at the end. This decoder_output is only used to compute the loss.
https://github.com/lancopku/SRB/blob/master/DataLoader.py#L51

You do not seem to append any stop token to decoder_input, which is what is fed to the decoder during training. You only use a start token at training time.
https://github.com/lancopku/SRB/blob/master/SeqUnit.py#L156
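For reference, here is a minimal sketch (not the repo's actual code; `GO`, `EOS`, and `make_decoder_pair` are hypothetical names) of the standard teacher-forcing arrangement I would have expected, where decoder_input and decoder_output are shifted versions of the same sequence:

```python
# Assumed token ids for illustration; the issue above mentions stop token id 2.
GO, EOS = 1, 2

def make_decoder_pair(target_ids):
    """Build (decoder_input, decoder_output) from a raw target sequence.

    decoder_input is the target shifted right with GO prepended (no EOS),
    decoder_output is the target with EOS appended (used only for the loss).
    """
    decoder_input = [GO] + target_ids
    decoder_output = target_ids + [EOS]
    return decoder_input, decoder_output

inp, out = make_decoder_pair([10, 11, 12])
# inp == [1, 10, 11, 12]; out == [10, 11, 12, 2]
```

Under this arrangement the stop token never appears in decoder_input, only in the loss targets, so the two lists stay the same length.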

However, at generation time, you check whether the predicted token equals the stop token.
https://github.com/lancopku/SRB/blob/master/SeqUnit.py#L199
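To make the question concrete, the generation-time check I am referring to works roughly like this hedged sketch (a toy greedy loop, not the repo's implementation; `greedy_decode` and `step_fn` are names I made up):

```python
EOS = 2        # assumed stop token id, matching the issue above
MAX_LEN = 50   # arbitrary safety cap

def greedy_decode(step_fn, start_token=1, max_len=MAX_LEN):
    """step_fn maps the previous token id to the next predicted token id.

    Decoding stops as soon as the model predicts the stop token, so the
    model must actually learn to emit EOS for this loop to terminate early.
    """
    tokens, prev = [], start_token
    for _ in range(max_len):
        nxt = step_fn(prev)
        if nxt == EOS:       # the check in question: predicted token == stop token
            break
        tokens.append(nxt)
        prev = nxt
    return tokens

# Toy "model" that replays a fixed prediction sequence ending in EOS.
preds = iter([10, 11, 12, EOS])
result = greedy_decode(lambda _prev: next(preds))
# result == [10, 11, 12]
```

My confusion is that this loop only terminates early if the model has been trained to predict EOS somewhere.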

How do you expect the model to predict a stop token when you do not feed one at training time? Is this a bug, or am I missing something obvious? I would appreciate your response regarding this.