Getting the embedding for the sample code
UsmanGohar opened this issue · 3 comments
Hi,
First of all, thank you so much for not only sharing this but promptly responding to queries. It has helped me a lot.
I just have a quick question. Could you possibly point me to where I could retrieve the actual embeddings that generate the final natural-language output? I am combing through the codebase as we speak, but if you already know, it would save a ton of time.
Thank you!
What do you mean by "retrieve the actual embeddings that generate the final Natural Language output"? Are you referring to decoding?
Yes (sorry for the confusion). So I dug a little deeper. If I understand correctly, this is the decoder output:
NeuralCodeSum/c2nl/translator/translator.py, line 201 (commit 0e19751)
NeuralCodeSum/c2nl/translator/translator.py, lines 250 to 252 (commit 0e19751)
(I apologize for the slightly convoluted question)
Yes, you should use the first one.
You don't need to decode, because what you want are the contextual representations produced by the decoder (if I understood your use case correctly). In decoding, we pass those contextual representations through the softmax layer, which I think you don't want.
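To illustrate the distinction, here is a minimal, hypothetical sketch (not the actual NeuralCodeSum decoder, whose architecture and class names differ): the decoder produces per-token hidden states, and only decoding pushes those states through a generator (linear + softmax) layer to get vocabulary scores. If you want embeddings, capture the hidden states before that final layer.

```python
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Toy stand-in for a decoder, for illustration only."""

    def __init__(self, vocab_size=100, hidden_size=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        # The "softmax layer" used in decoding: hidden states -> vocab logits.
        self.generator = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens):
        # `hidden` holds the contextual representations -- this is what
        # you would extract for your use case.
        hidden, _ = self.rnn(self.embed(tokens))
        # `logits` is what decoding would pass through softmax.
        logits = self.generator(hidden)
        return hidden, logits

decoder = TinyDecoder()
tokens = torch.randint(0, 100, (2, 5))  # (batch, seq_len)
hidden, logits = decoder(tokens)
print(hidden.shape)   # torch.Size([2, 5, 16])  -- one vector per target token
print(logits.shape)   # torch.Size([2, 5, 100]) -- vocabulary scores
```

In the real codebase you would grab the analogous tensor at the line referenced above, before it reaches the generator.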