DeepGraphLearning/KnowledgeGraphEmbedding

Loss function of TransE and RotatE in the code

ngl567 opened this issue · 2 comments

Thank you for your excellent research and code. However, I am confused about why you use the same loss function for both TransE and RotatE. I think the loss functions of TransE and RotatE are different according to their definitions in the original papers. I hope you can explain this. Thank you.

Hi Guanglin,

We use the same loss function (i.e., the self-adversarial loss) for both TransE and RotatE because we believe the proposed loss function is a general one that can be applied to any translation-based KGE model. As you can see, and can reproduce yourself, the self-adversarial loss improves the performance of TransE to 0.332, which is much higher than previously reported results.
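For readers unfamiliar with it: the self-adversarial negative sampling loss weights each negative triple by a softmax over the negatives' scores, so harder negatives contribute more to the gradient. Here is a minimal NumPy sketch of that loss for a distance-based model such as TransE or RotatE (the function name, the default margin, and the temperature value are illustrative, not taken from the repo):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_adversarial_loss(pos_score, neg_scores, gamma=12.0, alpha=1.0):
    """Self-adversarial negative sampling loss for one positive triple.

    pos_score:  scalar distance d(h, t) of the positive triple
                (lower is better).
    neg_scores: 1-D array of distances d(h', t') for negative triples.
    gamma:      fixed margin.
    alpha:      temperature of the self-adversarial softmax.
    """
    # Softmax weights over negatives: a negative with a smaller
    # distance (i.e. a harder negative) gets a larger weight. In the
    # paper these weights are treated as constants (no gradient).
    logits = alpha * (gamma - neg_scores)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Positive term: push the positive's distance below the margin.
    pos_term = -np.log(sigmoid(gamma - pos_score))
    # Negative term: push each negative's distance above the margin,
    # weighted by the self-adversarial softmax.
    neg_term = -np.sum(weights * np.log(sigmoid(neg_scores - gamma)))
    return pos_term + neg_term
```

Nothing in this loss depends on how the score d(h, t) is computed, which is why the same objective plugs into TransE's translation distance and RotatE's rotation distance alike.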

Besides, since there is not much difference between the negative sampling loss and the margin-based ranking criterion for TransE (see Table 13 in our paper), we simply use the negative sampling loss for TransE as well.
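For comparison, the margin-based ranking criterion from the original TransE paper is a hinge loss over (positive, negative) pairs with uniform negatives and no softmax weighting. A minimal sketch (function name and default margin are illustrative):

```python
import numpy as np

def margin_ranking_loss(pos_score, neg_scores, gamma=12.0):
    """Margin-based ranking criterion for one positive triple.

    Hinge loss that pushes the positive triple's distance pos_score
    below each negative's distance by at least the margin gamma;
    negatives already separated by the margin contribute zero.
    """
    return np.sum(np.maximum(0.0, gamma + pos_score - neg_scores))
```

The practical difference from the self-adversarial loss is mainly how negatives are weighted, which is consistent with the two objectives performing similarly for TransE.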

Thanks a lot for your careful explanation. I wish you continued success.