KG-augmented decoding is not applied
Hello Ye, thanks for making the code public.
Not sure if I understand correctly, but from here it seems that the KG-augmented decoding layers are not applied during decoding. Could you let me know if there is anything I missed?
Also, could you check if there is any issue in the DecoderGATLayer? For example, from here, ex_entity may not be converted to embeddings before being fed into the exent_proj function.
Thanks again for the help.
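To make the concern concrete, here is a minimal sketch (not the repo's actual code; all names and shapes are hypothetical) of why the entity IDs would need an embedding lookup before a linear projection like exent_proj: a projection expects float vectors, not integer indices.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, emb_dim, proj_dim = 10, 4, 6
entity_embedding = rng.normal(size=(vocab_size, emb_dim))  # hypothetical lookup table
W_exent_proj = rng.normal(size=(emb_dim, proj_dim))        # stand-in for exent_proj

ex_entity = np.array([[1, 3, 7]])                 # (batch, num_expanded) integer IDs
ex_entity_emb = entity_embedding[ex_entity]       # ID -> embedding lookup first
projected = ex_entity_emb @ W_exent_proj          # only then project
print(projected.shape)  # (1, 3, 6)
```

If the lookup step is skipped, the projection would be applied to raw integer IDs, which would run but produce meaningless values (or fail on shape).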
I have the same question. I found class DecoderGATLayer is not utilized in the KG-BART decoder.
Have you come to some conclusion about this?
Thank you in advance.
In the paper, the author did not use that layer in the pretraining process. I'm not sure whether you mean the pretraining stage.
Same question here, and it looks like the decoder in the code does not work like the decoder in the paper. In the paper, a concept is concatenated with each of its expanded entities, but in the code the concatenation happens only once.
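A small sketch of the difference described above (shapes and names are hypothetical, not taken from the repo): per-entity concatenation yields one (concept, entity) pair per expanded entity, rather than a single concatenated vector.

```python
import numpy as np

d = 4
concept = np.ones(d)                                     # one concept embedding
expanded = np.stack([np.full(d, i) for i in range(3)])   # 3 expanded entities

# per-entity concatenation, as the paper describes:
per_entity = np.stack([np.concatenate([concept, e]) for e in expanded])
print(per_entity.shape)  # (3, 8): one (concept, entity) pair per entity
```

A single concatenation would instead produce one vector of length d * (1 + num_entities), which changes the shape the downstream attention layers would have to consume.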
@yeliu918 Could you take a look and explain the code a little? I am interested in your work but have had trouble reproducing it from the code you provided. Thx a lot!!