zjunlp/Relphormer

What are the main differences between Relphormer and Kgtransformer

WEIYanbin1999 opened this issue · 3 comments

Hi! The paper "Mask and Reason: Pre-Training Knowledge Graph Transformers for Complex Logical Queries" was published at KDD'22 and proposes 'Kgtransformer'.

It seems that the Triple2seq procedure implemented in your code is similar to the mask mechanism in 'Kgtransformer'. For link prediction, both sample neighbors of the center/query entity, mask the query and recover it, and delete the nodes carrying information about the original query triple to avoid leakage.
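As a rough, hypothetical sketch of the sampling-and-masking idea I am describing (the graph format and all names below are my own assumptions, not your actual Triple2seq code):

```python
# Hypothetical sketch only, not the Triple2seq implementation in this repo.
# `graph` maps an entity to its (relation, neighbor) pairs; names are illustrative.
import random

MASK = "[MASK]"

def sample_masked_context(graph, head, relation, tail, num_neighbors=8):
    """Build a masked input sequence for the query triple (head, relation, tail)."""
    # 1. Sample contextual neighbors of the center/query entity.
    neighbors = graph.get(head, [])
    # 2. Drop edges that contain the answer entity to avoid label leakage.
    neighbors = [(r, e) for (r, e) in neighbors if e != tail]
    sampled = random.sample(neighbors, min(num_neighbors, len(neighbors)))

    # 3. Mask the query tail; the model is trained to recover it.
    sequence = [head, relation, MASK]
    for r, e in sampled:
        sequence.extend([r, e])
    return sequence, tail  # input tokens and the recovery target
```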

Both follow the workflow of pre-training a language model (BERT) on textual semantics and then fine-tuning.

The loss objectives of Relphormer and Kgtransformer both include a masked knowledge modeling (MKM) loss and a contextual loss (with similar formulas).
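In simplified form, I read both training objectives as something like the sketch below (the tensor names and the weighting term `alpha` are my assumptions, not the exact formulas of either paper):

```python
# Simplified sketch of combining an MKM loss with a contextual loss;
# mkm_logits, context_logits, and alpha are assumed names for illustration.
import torch
import torch.nn.functional as F

def total_loss(mkm_logits, masked_targets, context_logits, context_targets, alpha=1.0):
    # Masked knowledge modeling: cross-entropy over the masked entity positions.
    l_mkm = F.cross_entropy(mkm_logits, masked_targets)
    # Contextual loss: cross-entropy over the contextual prediction targets.
    l_ctx = F.cross_entropy(context_logits, context_targets)
    return l_mkm + alpha * l_ctx
```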

I am looking forward to your explanation of the main differences/advances between Relphormer and Kgtransformer.

Thanks a lot for your answer.

Our work, like Kgtransformer, was also submitted to KDD in 2022; however, there are notable differences between the two:

(1) We address distinct problems. Our objective is to construct a unified KG representation framework that provides representations for downstream tasks such as knowledge graph completion and question answering. In contrast, Kgtransformer concentrates on resolving complex logical queries (with quite different datasets). Additionally, the Transformer architecture differs: Kgtransformer employs a mixture-of-experts approach, while Relphormer is a dense transformer.

(2) The training procedures also diverge. We design a knowledge masking objective (which is no longer novel) and sample subgraphs to more effectively learn textual semantic and graph structural information, incorporating an additional attention bias module. Without the textual semantic information (which Kgtransformer does not utilize), Relphormer's performance would degrade, as demonstrated in our ablation studies.
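For intuition, the attention bias can be thought of as adding a structure-derived term to the attention scores, roughly as in the simplified sketch below (this is not the actual Relphormer code, and the `structural_bias` input is only an assumed placeholder, e.g. derived from the sampled subgraph's structure):

```python
# Rough sketch of structure-biased scaled dot-product attention;
# not the actual Relphormer module, and structural_bias is an assumed input.
import math
import torch

def biased_attention(q, k, v, structural_bias):
    """q, k, v: (batch, heads, seq, dim); structural_bias: (batch, heads, seq, seq)."""
    d = q.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d)
    scores = scores + structural_bias          # inject graph-structure information
    weights = torch.softmax(scores, dim=-1)
    return torch.matmul(weights, v)
```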

Furthermore, our work was submitted to arXiv on 22 May 2022 (rejected by KDD 2022 with scores of 5, 6, 7, and 7, and subsequently rejected by EMNLP 2022 and WWW 2023). Kgtransformer originates from the same time period, so the overlap is likely a mere coincidence.

Thanks for your attention.

Thanks again for your patience and thorough explanation.

I notice that your code achieves a strong result of 0.314 Hits@1 on FB15k-237, which ranks #1 on 'Papers with Code'. But the work is still in progress.

Could you tell me what you think are the main limitations that keep Relphormer from being a completed work?

Hi, actually the writing and experimentation of this paper are currently not sufficient; we will continue to refine the writing and improve the experiments in the future.