GitEventhandler/H2GCN-PyTorch

The results are quite different

Closed this issue · 1 comment

Hello, when I downloaded the code and ran it on the Cornell dataset, there was a large gap between my results and those reported in the paper, about 20%. I didn't change any parameters, so I want to ask whether there are parameters I missed setting.

Hi! Thank you for the report. Cornell is quite a small dataset, so I suspect backend-dependent randomness has a more significant effect there; results on bigger datasets such as PubMed seem more stable. You may need to tune hyperparameters such as the seed, learning rate, hidden dimension size, k value, or weight decay to reach the performance of the original implementation.
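Since run-to-run variance on a small dataset like Cornell often comes down to RNG state, a first step before tuning is pinning every random seed. A minimal sketch (the `set_seed` helper name is my own, not from this repo; the calls are standard PyTorch/NumPy APIs):

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Fix all RNGs that typically affect a training run."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    # Prefer deterministic cuDNN kernels over autotuned ones
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


# Two draws under the same seed should be identical
set_seed(42)
a = torch.randn(3)
set_seed(42)
b = torch.randn(3)
print(torch.equal(a, b))
```

Even with identical seeds, some GPU ops remain nondeterministic across backends, which is why small-dataset numbers can still drift slightly between machines.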