google-deepmind/neural-processes

NP implementation in ANP

JuHyung-Son opened this issue · 1 comments

I think the NP implementation in the ANP code is a little bit different from the NP paper, right?

In the original NP there is only a latent encoder, and it uses both context and target data. In this implementation, however, the latent and deterministic features are concatenated, and the latent encoder uses only the context data.

Thanks for your comment. In the implementation, the model's initialiser takes a use_deterministic_path argument that determines whether the decoder uses the deterministic codes. You can set this to False if you wish to use only the latent path, as per the NP paper.

However, you are right that the latent encoder currently uses only context data during training, so we have updated the notebook to fix this: it now uses target data for training (note that targets contain contexts by design). Thank you for spotting this!
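To make the two points above concrete, here is a minimal NumPy sketch (not the repo's actual TensorFlow code) of how the decoder input can be assembled. The encoder functions and the `build_decoder_input` helper are hypothetical stand-ins for the real MLP encoders; the sketch only illustrates the two behaviours discussed: toggling the deterministic path via a `use_deterministic_path` flag, and feeding the latent encoder the targets (which contain the contexts) at training time but only the contexts at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_encode(x, y):
    # Toy stand-in for the latent encoder MLP: mean-pool the
    # concatenated (x, y) pairs into one representation vector.
    return np.concatenate([x, y], axis=-1).mean(axis=0)

def deterministic_encode(x, y):
    # Toy stand-in for the deterministic encoder; same pooling here.
    return np.concatenate([x, y], axis=-1).mean(axis=0)

def build_decoder_input(context_x, context_y, target_x, target_y,
                        use_deterministic_path=True, training=True):
    # At training time the latent encoder consumes the targets
    # (which include the contexts by design); at test time target_y
    # is unknown, so it falls back to the contexts.
    if training:
        z = latent_encode(target_x, target_y)
    else:
        z = latent_encode(context_x, context_y)
    if use_deterministic_path:
        # ANP-style: concatenate deterministic and latent features.
        r = deterministic_encode(context_x, context_y)
        return np.concatenate([r, z])
    # NP-style: latent path only.
    return z

# Toy data: 3 context points that are a subset of 5 target points.
target_x = rng.normal(size=(5, 1))
target_y = rng.normal(size=(5, 1))
context_x, context_y = target_x[:3], target_y[:3]

anp_rep = build_decoder_input(context_x, context_y, target_x, target_y,
                              use_deterministic_path=True)
np_rep = build_decoder_input(context_x, context_y, target_x, target_y,
                             use_deterministic_path=False)
print(anp_rep.shape, np_rep.shape)  # (4,) (2,)
```

With `use_deterministic_path=False` the decoder sees only the latent representation, matching the NP paper; with it set to True the representation is the ANP-style concatenation of both paths.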