Notes on the latest work from WSDM 2024 in the area of recommender systems
➡️ Only include details of papers that have preprints available online.
Defense Against Model Extraction Attacks on Recommender Systems
Primer: -
Motif-based Prompt Learning for Universal Cross-domain Recommendation
Primer: -
Linear Recurrent Units for Sequential Recommendation
Primer: -
User Behavior Enriched Temporal Knowledge Graph for Sequential Recommendation
Primer:
- Large Language Models for Data Augmentation in Recommendation (current preprint title: LLMRec: Large Language Models with Graph Augmentation for Recommendation)
ONCE: Boosting Content-based Recommendation with Both Open- and Closed-source Large Language Models
Primer: Addressing the limitations of existing content-based recommender systems, this paper presents the ONCE framework, which leverages both open- and closed-source large language models (LLMs) to improve recommendation performance. The authors show that combining finetuning of open-source LLMs with prompting-based data augmentation via closed-source models yields substantial improvements, with relative gains of up to 19.32% over state-of-the-art models. These results highlight the potential of LLMs for content-based recommendation and carry practical implications for online content platforms. Notably, the ONCE framework extends beyond news and book recommendation, suggesting applicability to other domains.