Different optimization criteria for matrix factorization in the code and in the paper
VConchello opened this issue · 4 comments
VConchello commented
The paper says that MetaOD minimizes the sum of sDCG as the optimization criterion for factorizing a matrix into latent factors (Section 3.4.1), but the code (core.py:91,156,166) uses the function ndcg_score from sklearn.metrics, which differs from sDCG in some respects.
Then, for the gradient descent, it uses the gradient of sDCG to find an optimum.
Is there any rationale for these changes?
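To make the discrepancy concrete, here is a minimal sketch contrasting sklearn's exact (non-differentiable) ndcg_score with a smoothed DCG. The smoothed_dcg function below is my own illustrative reading of sDCG as a sigmoid-based soft-rank approximation of DCG; the exact definition is the one in the paper (Section 3.4.1), and this sketch may differ from it in details.

```python
import numpy as np
from sklearn.metrics import ndcg_score


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def smoothed_dcg(rel, score):
    """Illustrative smoothed DCG: the discrete rank of each item is
    replaced by a sigmoid-based soft rank, making the criterion
    differentiable in `score` (and hence usable for gradient descent)."""
    n = len(score)
    # soft rank of item i: 1 + sum over j != i of sigmoid(score_j - score_i)
    soft_rank = np.array([
        1.0 + sum(sigmoid(score[j] - score[i]) for j in range(n) if j != i)
        for i in range(n)
    ])
    return np.sum((2.0 ** rel - 1.0) / np.log2(1.0 + soft_rank))


rel = np.array([3.0, 1.0, 2.0])      # relevance labels
score = np.array([2.5, 0.1, 1.3])    # predicted scores

# sklearn's NDCG uses the exact (hard) ranking, so it is piecewise
# constant in `score` and has no useful gradient:
exact = ndcg_score(rel[None, :], score[None, :])

# The smoothed surrogate is close to the hard DCG when scores are well
# separated, but remains differentiable everywhere:
approx = smoothed_dcg(rel, score)
```

The point of the contrast: ndcg_score can evaluate ranking quality, but only a smooth surrogate like sDCG yields gradients for the factorization updates.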
yzhao062 commented
VConchello commented
yzhao062 commented
VConchello commented
Okay, I understand now: the approximation is used both as the function being minimised and for its gradient, not just to compute the gradient.
Thank you for the answer.