Welcome to the GitHub repository dedicated to exploring and advancing large recommendation models. This repository will be continuously updated with the latest works and insights in this rapidly evolving field.
🔥🔥🔥 Scaling New Frontiers: Insights into Large Recommendation Models
Scalability Analysis: This pioneering paper examines the scalability of large recommendation model architectures, leveraging popular Transformer-based models such as HSTU, Llama, GPT, and SASRec. 🌟
Comprehensive Study: We conduct an extensive ablation study and parameter analysis on HSTU, uncovering the origins of its scaling laws (a minimal curve-fitting sketch follows this list). Our work also enhances the scalability of the traditional Transformer-based sequential recommendation model, SASRec, by integrating effective modules from scalable large recommendation models. 🌟
Complex User Behavior: This is the first study to assess the performance of large recommendation models on complex user behavior sequence data, pinpointing areas for improvement in modeling intricate user behaviors, including auxiliary information, multi-behavior interactions, and cross-domain joint modeling. 🌟
Ranking Tasks Evaluation: To our knowledge, this is the first comprehensive evaluation of large recommendation models on ranking tasks, demonstrating their scalability. Our findings offer valuable insights into designing efficient large ranking recommendation models, with a focus on datasets and hyperparameters. 🌟
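To illustrate the kind of scaling-law fitting discussed above, the minimal Python sketch below fits a saturating power law, loss(N) = a·N^(−b) + c, to (parameter count, validation loss) pairs. The functional form, the data points, and every name in the snippet are illustrative assumptions on our part, not code or results from the paper.

```python
# Minimal sketch of scaling-law curve fitting (illustrative only).
# The data points are made up; the saturating power law
# loss(N) = a * N**(-b) + c is a common assumption in scaling studies,
# not necessarily the specific form used in the paper.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    """Saturating power law: loss decays with model size n toward floor c."""
    return a * np.power(n, -b) + c

# Hypothetical (parameter count, validation loss) observations.
sizes = np.array([1e6, 4e6, 1.6e7, 6.4e7, 2.56e8])
losses = np.array([1.92, 1.71, 1.55, 1.43, 1.35])

(a, b, c), _ = curve_fit(power_law, sizes, losses, p0=[10.0, 0.2, 1.0], maxfev=10000)
print(f"fitted exponent b = {b:.3f}, irreducible loss c = {c:.3f}")
# Extrapolate to a larger model to gauge the predicted benefit of scaling.
print(f"predicted loss at 1e9 params: {power_law(1e9, a, b, c):.3f}")
```

Fitting performance metrics such as HR or NDCG directly, as the Performance Law work in the next section does, follows the same pattern with a different target curve.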
🔥🔥🔥 Predictive Models in Sequential Recommendations: Bridging Performance Laws with Data Quality Insights
Scalability Analysis: This paper introduces a Performance Law to address the scalability of Sequential Recommendation (SR) models by analyzing model performance directly rather than training loss, with the aim of optimizing computational resource management. 🌟
Data Quality Extension: The study emphasizes understanding users' interest patterns through their historical interactions and introduces Approximate Entropy (ApEn) as a measure of data quality, sharpening the interaction-data analysis on which the scaling law depends (a sketch of ApEn follows this list). 🌟
Comprehensive Study: We propose a novel correlation between model size and performance by fitting metrics such as hit rate (HR) and normalized discounted cumulative gain (NDCG), validated theoretically and experimentally across different models and datasets. 🌟
Optimizing Model Parameters: The Performance Law makes it possible to calculate optimal embedding dimensions and numbers of model layers, and reveals potential performance gains when scaling models across different frameworks. 🌟
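To make the ApEn measure above concrete, here is a minimal, self-contained Python sketch of Approximate Entropy following the standard Pincus (1991) definition. How the paper encodes a user's interaction history as a numeric series (e.g., item or category IDs) is our assumption for illustration, not the paper's exact pipeline.

```python
# Minimal sketch of Approximate Entropy (ApEn), per Pincus (1991).
# Treating an interaction history as a plain numeric series is an
# illustrative assumption, not the paper's exact preprocessing.
import numpy as np

def approximate_entropy(series, m=2, r=None):
    """ApEn of a 1-D sequence: lower = more regular, higher = more irregular."""
    u = np.asarray(series, dtype=float)
    n = len(u)
    if r is None:
        r = 0.2 * np.std(u)  # a common default tolerance

    def phi(m):
        # Embed the series into overlapping windows of length m.
        x = np.array([u[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of windows.
        dist = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        # Fraction of windows within tolerance r (self-matches keep c > 0).
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# A repetitive history scores near zero; a noisy one scores higher.
print(approximate_entropy([1, 2, 3] * 5, m=2))
rng = np.random.default_rng(0)
print(approximate_entropy(rng.integers(0, 10, 60), m=2))
```

Under this reading, low-ApEn sequences reflect highly regular interest patterns, which is how the paper ties data quality to the Performance Law.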
🔥🔥🔥 A Survey on Large Language Models for Recommendation
Comprehensive Review: We present the first systematic review and analysis of the application of generative large models in recommendation systems, offering a foundational understanding of this innovative field. 🌟
Categorical Framework: Our research classifies current studies on large language models in recommendation systems into three distinct paradigms. This categorization provides a clear and structured overview, facilitating a deeper understanding of the diverse approaches within this emerging discipline. 🌟
Analysis of Strengths and Challenges: We evaluate the strengths and weaknesses of existing methods, identify key challenges faced by LLM-based recommendation systems, and offer insights to inspire future research in this promising area. 🌟
If you find our work useful, please cite it using the following references:
```bibtex
@article{scalingnewfrontiers,
  title={Scaling New Frontiers: Insights into Large Recommendation Models},
  author={Guo, Wei and Wang, Hao and Zhang, Luankang and Chin, Jin Yao and Liu, Zhongzhou and Cheng, Kai and Pan, Qiushi and Lee, Yi Quan and Xue, Wanqi and Shen, Tingjia and Song, Kenan and Wang, Kefan and Xie, Wenjia and Ye, Yuyang and Guo, Huifeng and Liu, Yong and Lian, Defu and Tang, Ruiming and Chen, Enhong},
  journal={arXiv preprint arXiv:2412.00714},
  year={2024}
}

@article{PerformanceLaws,
  title={Predictive Models in Sequential Recommendations: Bridging Performance Laws with Data Quality Insights},
  author={Shen, Tingjia and Wang, Hao and Wu, Chuhan and Chin, Jin Yao and Guo, Wei and Liu, Yong and Guo, Huifeng and Lian, Defu and Tang, Ruiming and Chen, Enhong},
  journal={arXiv preprint arXiv:2412.00430},
  year={2024}
}

@article{wu2024survey,
  title={A survey on large language models for recommendation},
  author={Wu, Likang and Zheng, Zhi and Qiu, Zhaopeng and Wang, Hao and Gu, Hongchao and Shen, Tingjia and Qin, Chuan and Zhu, Chen and Zhu, Hengshu and Liu, Qi and others},
  journal={World Wide Web},
  volume={27},
  number={5},
  pages={60},
  year={2024},
  publisher={Springer}
}
```