EvalRS-CIKM-2022

Official Repository for EvalRS @ CIKM 2022: a Rounded Evaluation of Recommender Systems

https://reclist.io/cikm2022-cup/

Note: EvalRS 2022 was held during CIKM 2022 (October 2022) as a data challenge and workshop. This README and the related links provide an overview of the competition and archive the artifacts from the event to benefit the RecSys community. If you are interested in running the evaluation loop exactly as it was during EvalRS 2022, the original README, with rules, instructions and full guidelines can be found untouched here.

Important: while this README is an archive for the event and the workshop, all the code, data, tests and evaluation methodology are still fully available in this very repository. If you are working on the evaluation of RecSys, or you wish to run your latest model on Last.FM through a set of diverse tests, you can (and should!) re-use this repository and our leaderboard (as a ready-to-go baseline).

Overview

This is the official repository for EvalRS @ CIKM 2022: a Rounded Evaluation of Recommender Systems. The aim of the challenge was to evaluate recommender systems across a set of important dimensions (accuracy being one of them) through a principled and re-usable set of abstractions, as provided by RecList 🚀. EvalRS is based on the LFM-1b dataset, a corpus of music listening events for music recommendation: participants were asked to solve a typical user-item scenario and recommend new songs to users.

During CIKM 2022, we organized a popular workshop on rounded evaluation for RecSys, including our reflections as organizers of the event, the best paper presentation, and keynotes from two renowned practitioners, Prof. Jannach and Prof. Ekstrand.

If you are interested in running the same evaluation loop on your own model, re-using our baselines, or simply revisiting the rules and guidelines of the original event, please check the official competition README. The original README also includes in-depth dataset analyses and explanations of how to run a model and add a custom test to RecList. For an introduction to the main themes of this competition and details on our methodology, please refer to the workshop presentation and paper.
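To give a flavor of what "rounded" evaluation means in practice, here is a minimal, self-contained sketch (plain Python, not the actual RecList API; function and variable names are illustrative only) of a test that reports hit rate not just overall but per user slice, so that a model strong on the majority group cannot hide poor performance on a minority group:

```python
# Illustrative sketch of a sliced ("rounded") accuracy test, in the spirit
# of EvalRS. This is NOT the RecList API: names and signatures are made up
# for this example.
from collections import defaultdict


def hit_rate_by_slice(predictions, ground_truth, user_slices, k=10):
    """Compute hit rate@k separately for each user slice.

    predictions: {user_id: ranked list of recommended item ids}
    ground_truth: {user_id: held-out item id}
    user_slices: {user_id: slice label, e.g. country or activity bucket}
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for user, target in ground_truth.items():
        label = user_slices.get(user, "unknown")
        totals[label] += 1
        # A "hit" means the held-out item appears in the top-k recommendations.
        if target in predictions.get(user, [])[:k]:
            hits[label] += 1
    return {label: hits[label] / totals[label] for label in totals}


# Toy data: two US users (one hit, one miss) and one IT user (one hit).
preds = {"u1": ["a", "b"], "u2": ["c"], "u3": ["d"]}
truth = {"u1": "b", "u2": "x", "u3": "d"}
slices = {"u1": "US", "u2": "US", "u3": "IT"}
print(hit_rate_by_slice(preds, truth, slices, k=10))
# {'US': 0.5, 'IT': 1.0}
```

A global hit rate of 0.67 on this toy data would mask nothing here, but on realistic data the per-slice breakdown is exactly where popularity-biased models tend to fail; the actual challenge tests in this repository follow the same idea with LFM-1b user attributes.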

Papers, code, and presentations from EvalRS are all freely available to the community through this repository: check the appropriate sections below for the award recipients and the materials provided by organizers and participants.

If you like our work and wish to support open source RecSys projects, please take a second to add a star to the RecList repository.

Quick links

Organizers

This Data Challenge was built in the open, with the goal of adding lasting artifacts to the community. EvalRS was a collaboration between practitioners from industry and academia, who joined forces to make it happen:

For inquiries, please reach out to the corresponding author.

How to Cite

If you find our code, datasets, or tests useful in your work, please cite the original WebConf contribution as well as the EvalRS paper.

RecList

@inproceedings{10.1145/3487553.3524215,
    author = {Chia, Patrick John and Tagliabue, Jacopo and Bianchi, Federico and He, Chloe and Ko, Brian},
    title = {Beyond NDCG: Behavioral Testing of Recommender Systems with RecList},
    year = {2022},
    isbn = {9781450391306},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3487553.3524215},
    doi = {10.1145/3487553.3524215},
    pages = {99–104},
    numpages = {6},
    keywords = {recommender systems, open source, behavioral testing},
    location = {Virtual Event, Lyon, France},
    series = {WWW '22 Companion}
}

Challenge Review

@article{Tagliabue2023,
  doi = {10.1038/s42256-022-00606-0},
  url = {https://doi.org/10.1038/s42256-022-00606-0},
  author = {Tagliabue, Jacopo and Bianchi, Federico and Schnabel, Tobias and Attanasio, Giuseppe and Greco, Ciro and Moreira, Gabriel de Souza P. and Chia, Patrick John},
  title = {A challenge for rounded evaluation of recommender systems},
  journal = {Nature Machine Intelligence},
  year = {2023}
}

EvalRS

@misc{https://doi.org/10.48550/arxiv.2207.05772,
  doi = {10.48550/ARXIV.2207.05772},
  url = {https://arxiv.org/abs/2207.05772},
  author = {Tagliabue, Jacopo and Bianchi, Federico and Schnabel, Tobias and Attanasio, Giuseppe and Greco, Ciro and Moreira, Gabriel de Souza P. and Chia, Patrick John},
  title = {EvalRS: a Rounded Evaluation of Recommender Systems},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}

Sponsors

This Data Challenge was possible thanks to the generous support of these awesome folks. Make sure to add a star to our library and check them out!

Awards

Student Awards

  • Wei-Wei Du
  • Flavio Giobergia
  • Wei-Yao Wang
  • Jinhyeok Park
  • Dain Kim

Best Paper Award

  • Item-based Variational Auto-encoder for Fair Music Recommendation, by Jinhyeok Park, Dain Kim and Dongwoo Kim (500 USD)

Best Test Award

  • Variance Agreement, by Flavio Giobergia (500 USD)

Leaderboard Awards

Ranking  Team        Score
1        lyk         1.70
2        ML          1.55
3        fgiobergia  1.33
4        wwweiwei    1.18
5        Sunshine    1.14
  • First prize, lyk team (3000 USD)
  • Second prize, ML team (1000 USD)

Workshop Presentations

Papers and Repositories

  • wwweiwei: Track2Vec: Fairness Music Recommendation with a GPU-Free Customizable-Driven Framework (paper, arXiv, code)
  • fgiobergia: Triplet Losses-based Matrix Factorization for Robust Recommendations (paper, arXiv, code)
  • ML: Item-based Variational Auto-encoder for Fair Music Recommendation (paper, arXiv, code)
  • Scrolls: Bias Mitigation in Recommender Systems to Improve Diversity (paper, code)
  • yao0510: RecFormer: Personalized Temporal-Aware Transformer for Fair Music Recommendation (paper, code)
  • lyk: Diversity enhancement for Collaborative Filtering Recommendation (paper, code)

Selected papers also appear in the Proceedings of the CIKM 2022 Workshops, co-located with the 31st ACM International Conference on Information and Knowledge Management (CIKM 2022).