lizekang/ITDD

Output from both the first decoder and the second decoder is the same

Closed this issue · 7 comments

While translating, I saw that the outputs from both decoders are exactly the same. Is it because I'm using a different dataset?

Which dataset are you using? If the dataset is too easy, the output from the first decoder and the second decoder may be the same.

Also, during translation I saw that after the first decoder returns its outputs, the second decoder never makes predictions, because

```python
if all((b.done() for b in beam)):
    break
```

this condition evaluates to True and execution breaks out of the for loop.
Can you explain this part?
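For reference, here is a minimal, runnable sketch of this early-exit pattern, using a toy `Beam` class in place of the real OpenNMT-style one (the class, the token ids, and the fake decoder steps below are all illustrative, not the actual ITDD code):

```python
# Toy stand-in for the OpenNMT-style Beam used in ITDD.
class Beam:
    def __init__(self, eos_id):
        self.eos_id = eos_id
        self._done = False

    def advance(self, token_id):
        # A beam counts as done once its best hypothesis emits EOS.
        if token_id == self.eos_id:
            self._done = True

    def done(self):
        return self._done

# Fake per-step predictions: one token per sentence in the batch (EOS = 0).
steps = [[5, 0], [0, 7], [3, 4]]

beam = [Beam(eos_id=0), Beam(eos_id=0)]
for tokens in steps:
    for b, tok in zip(beam, tokens):
        b.advance(tok)
    if all(b.done() for b in beam):
        # Every sentence in the batch has a finished hypothesis,
        # so decoding stops here; the third step is never executed.
        break
```

So the check itself is just the normal beam-search termination; the question is why it fires before the second decoder runs.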

Also, doesn't getting the same output mean it is not able to use the given knowledge base properly?

> Also, during translation I saw that after the first decoder returns its outputs, the second decoder never makes predictions, because
>
> ```python
> if all((b.done() for b in beam)):
>     break
> ```
>
> this condition evaluates to True and execution breaks out of the for loop. Can you explain this part?

I didn't encounter this issue in my experiments. I will check it after a while.

> Also, doesn't getting the same output mean it is not able to use the given knowledge base properly?

Getting the same output doesn't necessarily mean it's unable to use the given knowledge in the second decoding process. You can see the comparison between the first-decoder results and the second-decoder results, which is presented in the paper.
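For intuition, here is a conceptual sketch of the two-pass deliberation decoding the paper describes, with hypothetical `first_pass`/`second_pass` callables standing in for the real decoder modules (the actual ITDD decoders work on encoder states and attention, not raw strings):

```python
# Conceptual sketch only, not the ITDD implementation.
def deliberation_decode(context, knowledge, first_pass, second_pass):
    # Pass 1: draft a reply from the dialogue context alone.
    draft = first_pass(context)
    # Pass 2: refine the draft while attending to the knowledge.
    # If attending to the knowledge changes nothing, the refinement
    # comes out identical to the draft, which is the behavior
    # reported in this issue.
    final = second_pass(draft, knowledge)
    return draft, final

# Toy usage with stand-in "decoders":
draft, final = deliberation_decode(
    context="i really like sci-fi movies",
    knowledge="The Matrix (1999) is a science fiction film.",
    first_pass=lambda ctx: "me too",
    second_pass=lambda d, k: d + " , have you seen the matrix ?",
)
```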


We encountered the same "first decoder equals second decoder" problem when using the CMUDoG dataset. We tested several different data scales, but the problem remains; it seems that the second decoder only serves to lower perplexity.
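One quick way to quantify this is to dump both decoders' predictions and measure how often they agree. A small sketch, assuming one detokenized prediction per line in two hypothetical output files:

```python
def agreement_rate(first_path, second_path):
    """Fraction of examples where the two decoders produce identical output."""
    with open(first_path) as f1, open(second_path) as f2:
        pairs = [(a.strip(), b.strip()) for a, b in zip(f1, f2)]
    return sum(a == b for a, b in pairs) / len(pairs)

# e.g. print(agreement_rate("pred.first.txt", "pred.second.txt"))
```

An agreement rate near 1.0 would support the observation that the second pass rarely rewrites the draft on this dataset.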