xiaxin1998/DHCN

precision and recall

tyh7425 opened this issue · 6 comments

Excuse me, why is the result of running the author's code so different from the paper? For example, on the Diginetica dataset, P@20 is about 50 in the paper, but Recall@20 is about 17 when I run the code. Is something not set correctly?

Could you provide your hyperparameters? We present the best hyperparameters for the Diginetica dataset in our paper. Please refer to https://ojs.aaai.org/index.php/AAAI/article/view/16578.

Hello, I used this code with the default hyperparameters; I didn't change them.

I have run our code without any changes, and the results are the same as those reported in our paper. Others who have run our code also got normal results. Did you use our provided dataset? Could you provide more information?

Hello, I have found the problem and solved it.
Lines 50 and 74 of the code, `session_emb_lgcn = np.sum(session, 0)`, raise an error in my environment.
I found the reason online: NumPy's summation must run on the CPU, so applying it to a tensor requires a type conversion and discards the gradient information.
My earlier fix was wrong, though: I moved the tensor to the CPU and detached it before summing, via `item_embeddings.cpu().detach().numpy()`, which throws away the gradients needed for training.
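For context, a minimal sketch of the fix that keeps gradients (variable names are stand-ins for the ones in the repo, and the tensor shape here is illustrative): summing with `torch.sum` instead of `np.sum` avoids the NumPy conversion entirely, works on CPU or GPU, and leaves the computation graph intact.

```python
import torch

# Stand-in for the per-layer session embeddings stacked along dim 0.
session = torch.randn(3, 100, requires_grad=True)

# Problematic on some setups: np.sum(session, 0) converts the tensor to a
# NumPy array, which fails for CUDA tensors and for tensors requiring grad.
# Equivalent pure-torch version, safe everywhere:
session_emb_lgcn = torch.sum(session, 0)

assert session_emb_lgcn.requires_grad  # gradients still flow through the sum
```

The `.cpu().detach().numpy()` workaround runs without errors, but it severs the tensor from autograd, so the embedding layers upstream of the sum would no longer be trained.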

So I want to know why it runs successfully for you. Is it because of a different NumPy version, or something else?

Maybe it is because of the environment. Our NumPy version is 1.18.1.

Well, thank you for your reply.