Inconsistency between the code and paper
Closed this issue · 2 comments
lei-liu1 commented
Hello,
Xiyuan, @Xi-yuanWang
Thanks for sharing the code.
I found that the code and the paper don't seem to match. In the paper, Equation (16) concatenates the two nodes' representations with their common neighbors' representations, but the `CNLinkPredictor` class appears to use summation instead:
```python
cn = adjoverlap(adj, adj, tar_ei, filled1, cnsampledeg=self.cndeg)
xcns = [spmm_add(cn, x)]
xij = self.xijlin(xi * xj)
xs = torch.cat(
    [self.lin(self.xcnlin(xcn) * self.beta + xij) for xcn in xcns],
    dim=-1)
```
Besides, the `torch.cat()` call in this code seems useless, since `xcns` contains only one tensor. Please correct me if I have misunderstood.
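To check this point, a minimal sketch with numpy (used here as a torch-free stand-in; `torch.cat` on a single-element list behaves the same way) shows that concatenating a list holding one array just returns an equal array:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)
# Concatenating a single-element list along the last axis is a no-op:
# the result has the same shape and values as the input.
out = np.concatenate([x], axis=-1)
assert out.shape == x.shape
assert (out == x).all()
```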
Best wishes,
Lei
Xi-yuanWang commented
Hi Lei,
- Yes, we use summation here instead of concatenation, but I don't think it makes a difference. Since both `xcn` and `xi * xj` go through a linear layer, sum and concatenation are equivalent: given two input vectors $a \in \mathbb R^{n_1}$, $b \in \mathbb R^{n_2}$ and two linear layers $W_1 \in \mathbb R^{n \times n_1}, b_1 \in \mathbb R^{n}$ and $W_2 \in \mathbb R^{n \times n_2}, b_2 \in \mathbb R^{n}$, we have
$$(W_1 a + b_1) + (W_2 b + b_2) = [W_1 \| W_2]\,[a \| b] + (b_1 + b_2).$$
- Yes, the `torch.cat()` in this code is useless.
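The equivalence above can be verified numerically. A minimal sketch (numpy is used as a torch-free stand-in; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n1, n2 = 4, 3, 5  # output dim and the two input dims (arbitrary)

a = rng.standard_normal(n1)
b = rng.standard_normal(n2)
W1, b1 = rng.standard_normal((n, n1)), rng.standard_normal(n)
W2, b2 = rng.standard_normal((n, n2)), rng.standard_normal(n)

# Sum of two linear layers applied separately ...
summed = (W1 @ a + b1) + (W2 @ b + b2)
# ... equals one linear layer on the concatenated input,
# with weights concatenated column-wise and biases added.
concatenated = (np.concatenate([W1, W2], axis=1)
                @ np.concatenate([a, b]) + (b1 + b2))

assert np.allclose(summed, concatenated)
```

So as long as the downstream layer is linear, the model can learn the same function either way.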
Sincerely,
Xiyuan Wang
lei-liu1 commented
Hello Xiyuan,
Thank you for your reply. Got it and thanks again.
Best regards,
Lei