Foundation Models for Recommender Systems: Paper List

Feel free to open an issue or submit a pull request!

Keywords: recommender system, pretraining, large language model, multimodal recommender system, transferable recommender system, foundation recommender models, universal user representation, one-model-fits-all, ID features, ID embeddings

These papers attempt to address the following questions:

(1) Can recommender systems have their own foundation models similar to those used in NLP and CV?

(2) Are ID embeddings necessary for recommender models, or can we replace or abandon them? (See the sketch after this list for the two item-representation routes.)

(3) Will recommender systems shift from a matching paradigm to a generative paradigm?

(4) How can LLMs be utilized to enhance recommender systems?

(5) What does the future hold for multimodal recommender systems?
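
As a companion to question (2), here is a minimal, PyTorch-style sketch contrasting the two item-representation routes: a trainable ID embedding table versus precomputed modality (text) features projected into the recommender's space. All class and variable names are hypothetical and not taken from any paper below.

```python
import torch
import torch.nn as nn


class IDItemEncoder(nn.Module):
    """ID-based route: one trainable vector per item ID."""

    def __init__(self, num_items: int, dim: int = 64):
        super().__init__()
        self.embedding = nn.Embedding(num_items, dim)

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        # item_ids: (batch,) integer IDs -> (batch, dim) item vectors
        return self.embedding(item_ids)


class TextItemEncoder(nn.Module):
    """Modality-based route: items are represented by precomputed text-encoder
    features (e.g., of titles/descriptions) projected to the model dimension,
    so new or cross-domain items need no ID embedding at all."""

    def __init__(self, text_dim: int = 768, dim: int = 64):
        super().__init__()
        self.proj = nn.Linear(text_dim, dim)

    def forward(self, text_features: torch.Tensor) -> torch.Tensor:
        # text_features: (batch, text_dim) language-model embeddings of item text
        return self.proj(text_features)


# Either encoder can feed the same downstream recommender: the next-item score is
# typically a dot product between the user/sequence vector and the item vector.
```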

Paper List

Perspective Papers: ID vs. LLM & ID vs. Multimodal

  • Where to Go Next for Recommender Systems? ID- vs. Modality-based Recommender Models Revisited, SIGIR 2023, 2022/09, [paper] [code]
  • Exploring the Upper Limits of Text-Based Collaborative Filtering Using Large Language Models: Discoveries and Insights, arxiv 2023/05, [paper]
  • Exploring Adapter-based Transfer Learning for Recommender Systems: Empirical Studies and Practical Insights, WSDM2024, [paper] [code]
  • The Elephant in the Room: Rethinking the Usage of Pre-trained Language Model in Sequential Recommendation, arxiv 2024/04, [paper]

Datasets for Transferable or Multimodal RS

  • NineRec: A Benchmark Dataset Suite for Evaluating Transferable Recommendation, TPAMI 2024, [paper] [link] | Images, Text, Nine downstream datasets
  • TenRec: A Large-scale Multipurpose Benchmark Dataset for Recommender Systems, NeurIPS 2022 [paper]
  • PixelRec: An Image Dataset for Benchmarking Recommender Systems with Raw Pixels, SDM 2023/09, [paper] [link] | Images, Text, Tags, 200 million interactions
  • MicroLens: A Content-Driven Micro-Video Recommendation Dataset at Scale [paper] [link] [DeepMind Talk] | Images, Text, Video, Audio, comments, tags, etc.
  • MIND: A Large-scale Dataset for News Recommendation, ACL2020, [paper] | Text
  • Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation, SIGIR 2020 [link]
  • MoRec: [link] Netflix: [link] Amazon: [link]

Survey

  • A Survey on Large Language Models for Recommendation, arxiv 2023/05, [paper]
  • How Can Recommender Systems Benefit from Large Language Models: A Survey, arxiv 2023/06, [paper]
  • Recommender Systems in the Era of Large Language Models, arxiv, 2023/07, [paper]
  • A Survey on Evaluation of Large Language Models, arxiv, 2023/07, [paper]
  • Self-Supervised Learning for Recommender Systems: A Survey, arxiv, 2023/06, [paper]
  • Pre-train, Prompt and Recommendation: A Comprehensive Survey of Language Modelling Paradigm Adaptations in Recommender Systems, 2022/09, [paper]
  • User Modeling in the Era of Large Language Models: Current Research and Future Directions, 2023/12, [paper]
  • User Modeling and User Profiling: A Comprehensive Survey, 2024/02, [paper]
  • Foundation Models for Recommender Systems: A Survey and New Perspectives, 2024/02, [paper]
  • Multimodal Pretraining, Adaptation, and Generation for Recommendation: A Survey, 2024/05, [paper]

Large Language Models for Recommendation (LLM4Rec)

Scaling LLM

  • Emergent Abilities of Large Language Models, TMLR 2022/08, [paper]
  • Exploring the Upper Limits of Text-Based Collaborative Filtering Using Large Language Models: Discoveries and Insights, arxiv 2023/05, [paper]
  • Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction, arxiv 2023/05, [paper]
  • Scaling Law for Recommendation Models: Towards General-purpose User Representations, AAAI 2023, [paper]

Ultra Wide & Deep & Long LLM

  • StackRec: Efficient Training of Very Deep Sequential Recommender Models by Iterative Stacking, SIGIR 2021, [paper]
  • A User-Adaptive Layer Selection Framework for Very Deep Sequential Recommender Models, AAAI 2021, [paper]
  • A Generic Network Compression Framework for Sequential Recommender Systems, SIGIR 2020, [paper]
  • Scaling Law of Large Sequential Recommendation Models, arxiv 2023/11, [paper]
  • Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations, arxiv 2024/03, [paper]
  • Breaking the Length Barrier: LLM-Enhanced CTR Prediction in Long Textual User Behaviors, SIGIR 2024, [paper]

Tuning LLM

  • M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems, arxiv 2022/05, [paper]
  • TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation, arxiv 2023/04, [paper]
  • GPT4Rec: A Generative Framework for Personalized Recommendation and User Interests Interpretation, 2023/04, [paper]
  • A Bi-Step Grounding Paradigm for Large Language Models in Recommendation Systems, arxiv, 2023/08, [paper]
  • LlamaRec: Two-Stage Recommendation using Large Language Models for Ranking, PGAI@CIKM 2023, [paper] [code]
  • Improving Sequential Recommendations with LLMs, arxiv 2024/02, [paper]

Freezing LLM [link]

  • CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models, arxiv 2022/04, [paper]
  • Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning, arxiv 2022/06, [paper]
  • Generative Recommendation: Towards Next-generation Recommender Paradigm, arxiv 2023/04, [paper]
  • Exploring the Upper Limits of Text-Based Collaborative Filtering Using Large Language Models: Discoveries and Insights, arxiv 2023/05, [paper]
  • A First Look at LLM-Powered Generative News Recommendation, arxiv 2023/05, [paper]
  • Privacy-Preserving Recommender Systems with Synthetic Query Generation using Differentially Private Large Language Models, arxiv 2023/05, [paper]
  • RecAgent: A Novel Simulation Paradigm for Recommender Systems, arxiv 2023/06, [paper]
  • Zero-Shot Next-Item Recommendation using Large Pretrained Language Models, arxiv 2023/04, [paper]
  • Can ChatGPT Make Fair Recommendation? A Fairness Evaluation Benchmark for Recommendation with Large Language Model, RecSys 2023
  • Leveraging Large Language Models for Sequential Recommendation, RecSys 2023/09, [paper]
  • LLMRec: Large Language Models with Graph Augmentation for Recommendation, WSDM 2024 Oral, [paper] [code]
  • Are ID Embeddings Necessary? Whitening Pre-trained Text Embeddings for Effective Sequential Recommendation, arxiv 2024/02, [paper]

Prompt with LLM

  • Large Language Models are Zero-Shot Rankers for Recommender Systems, arxiv 2023/05, [paper]
  • Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5), arxiv 2022/03, [paper]
  • Language Models as Recommender Systems: Evaluations and Limitations, NeurIPS Workshop ICBINB 2021/10, [paper]
  • Prompt Learning for News Recommendation, SIGIR 2023/04, [paper]
  • LLM-Rec: Personalized Recommendation via Prompting Large Language Models, arxiv 2023/07, [paper]
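
The prompting line of work above (zero-shot ranking, P5-style "recommendation as language processing") can be pictured with a small sketch. The prompt format below is an assumed simplification, not any paper's exact template, and `call_llm` is a hypothetical placeholder for whichever chat/completion API is used.

```python
def build_ranking_prompt(history: list[str], candidates: list[str]) -> str:
    """Turn a user's interaction history and a candidate set into a ranking prompt."""
    hist = "\n".join(f"- {title}" for title in history)
    cands = "\n".join(f"{i + 1}. {title}" for i, title in enumerate(candidates))
    return (
        "A user has interacted with the following items, oldest to newest:\n"
        f"{hist}\n\n"
        "Rank the candidate items below by how likely the user is to interact "
        "with them next. Answer with the candidate numbers, best first.\n"
        f"{cands}"
    )


prompt = build_ranking_prompt(
    history=["The Matrix", "Inception", "Interstellar"],
    candidates=["Tenet", "Titanic", "Dune"],
)
# ranking = call_llm(prompt)  # hypothetical LLM call; the reply (e.g., "3, 1, 2") is parsed into a ranked list
```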

ChatGPT [link]

  • Is ChatGPT a Good Recommender? A Preliminary Study, arxiv 2023/04, [paper]
  • Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agents, arxiv 2023/04, [paper]
  • Chat-REC: Towards Interactive and Explainable LLMs-Augmented Recommender System, arxiv 2023/04, [paper]
  • Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach, arxiv 2023/05, [paper]
  • Leveraging Large Language Models in Conversational Recommender Systems, arxiv 2023/05, [paper]
  • Uncovering ChatGPT’s Capabilities in Recommender Systems, arxiv 2023/05, [paper] [code]
  • Sparks of Artificial General Recommender (AGR): Early Experiments with ChatGPT, arxiv 2023/05, [paper]
  • Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation, arxiv 2023/05, [paper] [code]
  • PALR: Personalization Aware LLMs for Recommendation, arxiv 2023/05, [paper]
  • Privacy-Preserving Recommender Systems with Synthetic Query Generation using Differentially Private Large Language Models, arxiv 2023/05, [paper]
  • Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models, arxiv 2023/05, [paper]
  • CTRL: Connect Tabular and Language Model for CTR Prediction, arxiv 2023/06, [paper]

Multimodal Recommender System

  • VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback, AAAI 2016, [paper]
  • Adversarial Training Towards Robust Multimedia Recommender System, TKDE 2019, [paper]
  • Multi-modal Knowledge Graphs for Recommender Systems, CIKM 2020, [paper]
  • Online Distillation-enhanced Multi-modal Transformer for Sequential Recommendation, ACMMM 2023, [paper]
  • Self-Supervised Multi-Modal Sequential Recommendation, arxiv 2023/02, [paper]
  • FMMRec: Fairness-aware Multimodal Recommendation, arxiv 2023/10, [paper]
  • Self-Supervised Multi-Modal Sequential Recommendation, arxiv 2024/02, [paper]
  • ID Embedding as Subtle Features of Content and Structure for Multimodal Recommendation, arxiv 2023/10, [paper]
  • Enhancing ID and Text Fusion via Alternative Training in Session-based Recommendation, arxiv 2023/02, [paper]
  • BiVRec: Bidirectional View-based Multimodal Sequential Recommendation, arxiv 2023/02, [paper]
  • A Large Language Model Enhanced Sequential Recommender for Joint Video and Comment Recommendation, arxiv 2024/02, [paper]
  • An Empirical Study of Training ID-Agnostic Multi-modal Sequential Recommenders, arxiv 2024/03, [paper]
  • Discrete Semantic Tokenization for Deep CTR Prediction, arxiv 2024/03, [paper]
  • End-to-end Training of Multimodal Model and Ranking Model, arxiv 2023/03, [paper]

Foundation and Transferable Recommender models

  • TransRec: Learning Transferable Recommendation from Mixture-of-Modality Feedback, arxiv 2022/06, [paper]
  • Towards Universal Sequence Representation Learning for Recommender Systems, KDD 2022, 2022/06, [paper]
  • Learning Vector-Quantized Item Representation for Transferable Sequential Recommenders, WWW 2023, [paper] [code]
  • UP5: Unbiased Foundation Model for Fairness-aware Recommendation, arxiv 2023/05, [paper]
  • Exploring Adapter-based Transfer Learning for Recommender Systems: Empirical Studies and Practical Insights, arxiv 2023/05, [paper] [code]
  • OpenP5: Benchmarking Foundation Models for Recommendation, arxiv 2023/06, [paper]
  • Thoroughly Modeling Multi-domain Pre-trained Recommendation as Language, arxiv 2023/10, [paper]
  • MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation, arxiv 2023/10, [paper]
  • Collaborative Word-based Pre-trained Item Representation for Transferable Recommendation, arxiv 2023/11, [paper]
  • Universal Multi-modal Multi-domain Pre-trained Recommendation, arxiv 2023/11, [paper]
  • Multi-Modality is All You Need for Transferable Recommender Systems, arxiv 2023, [paper]
  • TransFR: Transferable Federated Recommendation with Pre-trained Language Models, arxiv 2024/02, [paper]
  • Rethinking Cross-Domain Sequential Recommendation under Open-World Assumptions, arxiv 2024/02, [paper]
  • Large Language Models meet Collaborative Filtering: An Efficient All-round LLM-based Recommender System, arxiv 2024/04, [paper]

Universal General-Purpose, One4all User Representation Learning

  • Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation, SIGIR 2020, [paper], [code]
  • One4all User Representation for Recommender Systems in E-commerce, arxiv 2021, [paper]
  • Learning Transferable User Representations with Sequential Behaviors via Contrastive Pre-training, ICDM 2021, [paper]
  • User-specific Adaptive Fine-tuning for Cross-domain Recommendations, TKDE 2021, [paper]
  • Scaling Law for Recommendation Models: Towards General-purpose User Representations, AAAI 2023, [paper]
  • U-BERT: Pre-training User Representations for Improved Recommendation, AAAI 2021, [paper]
  • One for All, All for One: Learning and Transferring User Embeddings for Cross-Domain Recommendation, WSDM 2022, [paper]
  • Field-aware Variational Autoencoders for Billion-scale User Representation Learning, ICDE 2022, [paper]
  • Learning Large-scale Universal User Representation with Sparse Mixture of Experts, ICML 2022 Workshop, [paper]
  • Multi Datasource LTV User Representation (MDLUR), KDD 2023, [paper]
  • Pivotal Role of Language Modeling in Recommender Systems: Enriching Task-specific and Task-agnostic Representation Learning, arxiv 2022/12, [paper]
  • User Modeling and User Profiling: A Comprehensive Survey, 2024/02, [paper]
  • Generalized User Representations for Transfer Learning, arxiv 2024/03, [paper]
  • Bridging Language and Items for Retrieval and Recommendation, arxiv 2024/04, [paper]

Lifelong Universal User Representation Learning

  • One Person, One Model, One World: Learning Continual User Representation without Forgetting, SIGIR 2021, [paper], [code]
  • Tenrec: A Large-scale Multipurpose Benchmark Dataset for Recommender Systems, NeurIPS 2022 [paper]
  • STAN: Stage-Adaptive Network for Multi-Task Recommendation by Learning User Lifecycle-Based Representation, RecSys 2023, [paper]
  • Task Relation-aware Continual User Representation Learning, KDD 2023, [paper]
  • ReLLa: Retrieval-enhanced Large Language Models for Lifelong Sequential Behavior Comprehension in Recommendation, arxiv 2023/08, [paper]

Generative Recommender Systems [link]

  • A Simple Convolutional Generative Network for Next Item Recommendation, WSDM 2018/08, [paper] [code]
  • Future Data Helps Training: Modeling Future Contexts for Session-based Recommendation, WWW 2020/04, [paper] [code]
  • Recommendation via Collaborative Diffusion Generative Model, KSEM 2022/08, [paper]
  • Blurring-Sharpening Process Models for Collaborative Filtering, arxiv 2022/09, [paper]
  • Generative Slate Recommendation with Reinforcement Learning, arxiv 2023/01, [paper]
  • DiffuRec: A Diffusion Model for Sequential Recommendation, arxiv 2023/04, [paper]
  • Diffusion Recommender Model, arxiv 2023/04, [paper]
  • A First Look at LLM-Powered Generative News Recommendation, arxiv 2023/05, [paper]
  • Recommender Systems with Generative Retrieval, arxiv 2023/05, [paper]
  • Generative Retrieval as Dense Retrieval, arxiv 2023/06, [paper]
  • RecFusion: A Binomial Diffusion Process for 1D Data for Recommendation, arxiv 2023/06, [paper]
  • Generative Sequential Recommendation with GPTRec, SIGIR workshop 2023, [paper]
  • FANS: Fast Non-Autoregressive Sequence Generation for Item List Continuation, WWW 2023, [paper]
  • Generative Next-Basket Recommendation, RecSys 2023
  • Large Language Model Augmented Narrative Driven Recommendations, RecSys 2023, [paper]
  • LightLM: A Lightweight Deep and Narrow Language Model for Generative Recommendation, arxiv 2023/10, [paper]
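
Several papers above (e.g., "Recommender Systems with Generative Retrieval" and GPTRec) recommend by generating discrete item tokens rather than scoring a fixed candidate set. The toy sketch below illustrates that idea in spirit only, assuming each item has already been mapped to a short tuple of "semantic ID" codes; the model size, code-book settings, and greedy decoding loop are hypothetical simplifications, not any paper's implementation.

```python
import torch
import torch.nn as nn

VOCAB = 256          # size of the shared code vocabulary (assumed)
CODES_PER_ITEM = 3   # each item is a tuple of 3 discrete codes (assumed)


class ToyGenerativeRecommender(nn.Module):
    """Predicts the next code given the flattened code history of past items."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.token_emb = nn.Embedding(VOCAB, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, code_history: torch.Tensor) -> torch.Tensor:
        # code_history: (batch, seq_len) integer codes of past items
        h = self.encoder(self.token_emb(code_history))
        return self.head(h[:, -1])  # logits over the next code


@torch.no_grad()
def generate_next_item(model: nn.Module, code_history: torch.Tensor) -> torch.Tensor:
    """Greedily decode the next item's CODES_PER_ITEM codes, one at a time."""
    codes = []
    for _ in range(CODES_PER_ITEM):
        next_code = model(code_history).argmax(dim=-1, keepdim=True)
        codes.append(next_code)
        code_history = torch.cat([code_history, next_code], dim=1)
    return torch.cat(codes, dim=1)  # (batch, CODES_PER_ITEM)


# Example: 3 past items x 3 codes each -> codes of the predicted next item.
# history = torch.randint(0, VOCAB, (1, 9))
# next_item_codes = generate_next_item(ToyGenerativeRecommender(), history)
```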

Related Resources

Recruitment

If you have an innovative idea for building a universal foundation recommendation model but lack the large-scale datasets and computational resources to pursue it, consider joining our lab as an intern or visiting scholar. We can provide access to 100 NVIDIA A100 80G GPUs and a billion-scale dataset of user-video/image/text interactions.

The laboratory is recruiting research assistants, interns, PhD students, and postdoctoral researchers. For details, please contact the corresponding author at yuanfajie@westlake.edu.cn.