This is a companion repository for the paper "Quality Metrics in Recommender Systems: Do We Calculate Consistently?", presented as an LBR poster at RecSys'21.
pred.csv — EASE model predictions
test.csv — test data
res.csv — metric values for different libraries
res_auc.csv — extra table with different versions of AUC
All metrics are calculated at a depth cut-off of k=20, except for roc-auc:
- hitrate
- precision
- recall
- map
- mrr
- ndcg
- roc-auc
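To illustrate what the k=20 cut-off means for the top-k metrics above, here is a minimal sketch of precision@k and hitrate@k on toy data. The function names and the toy inputs are illustrative, not taken from this repository, and the exact definitions can differ between libraries — which is precisely the inconsistency the paper examines.

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

def hitrate_at_k(recommended, relevant, k):
    """1 if at least one relevant item appears in the top-k, else 0."""
    return int(any(item in relevant for item in recommended[:k]))

# Toy example: five recommended items, two of which are relevant.
recs = [10, 20, 30, 40, 50]
rel = {20, 30}
print(precision_at_k(recs, rel, 3))
print(hitrate_at_k(recs, rel, 3))
```

In the repository the same computation would be run with k=20 against the rankings in pred.csv and the ground truth in test.csv.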