EnyanDai/RSGNN

Not end-to-end joint training as described in paper

veghen opened this issue · 1 comment

In the paper it is stated that the GCN and the link predictor are jointly trained in an end-to-end manner. But in the code it seems that the training is not end-to-end and is closer to the alternating optimization scheme in https://github.com/ChandlerBang/Pro-GNN. May I know what the difference between the two is?

Hi, thanks for your interest.
You are right that we follow the alternating optimization in Pro-GNN. But this is still end-to-end training, because the gradients of the GCN classification loss are propagated back through the GCN to the link predictor. The reason for alternating between the two updates is to balance the training pace of the two modules.
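To make the distinction concrete, here is a minimal sketch of this kind of alternating schedule in PyTorch. The names (`estimator`, `gcn`, `recon_loss`, `adj_noisy`, `train_mask`) are hypothetical placeholders, not the actual RSGNN API; the point is only that the link-predictor step still receives gradients from the classification loss, which is what makes the scheme end-to-end despite the alternation.

```python
import torch
import torch.nn.functional as F

# Assumed, hypothetical components standing in for RSGNN's modules:
# `estimator` predicts a denoised adjacency; `gcn` classifies nodes.
optimizer_est = torch.optim.Adam(estimator.parameters(), lr=1e-3)
optimizer_gcn = torch.optim.Adam(gcn.parameters(), lr=1e-2)

for epoch in range(epochs):
    # --- Step 1: update the link predictor ---
    optimizer_est.zero_grad()
    adj_est = estimator(features)           # predicted adjacency
    output = gcn(features, adj_est)
    # The classification loss back-propagates THROUGH the GCN into the
    # estimator, so the link predictor gets end-to-end gradients here.
    loss_est = recon_loss(adj_est, adj_noisy) \
             + F.nll_loss(output[train_mask], labels[train_mask])
    loss_est.backward()
    optimizer_est.step()

    # --- Step 2: update the GCN on the current (detached) adjacency ---
    optimizer_gcn.zero_grad()
    output = gcn(features, adj_est.detach())
    loss_gcn = F.nll_loss(output[train_mask], labels[train_mask])
    loss_gcn.backward()
    optimizer_gcn.step()
```

In a strictly joint scheme both modules would be updated from a single backward pass; alternating instead lets each module take a step at its own pace while keeping the gradient path between them intact.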