
Data-efficient Fine-tuning for LLM-based Recommendation (SIGIR'24)

💡 This is the PyTorch implementation of our paper:

Data-efficient Fine-tuning for LLM-based Recommendation

Xinyu Lin, Wenjie Wang, Yongqi Li, Shuo Yang, Fuli Feng, Yinwei Wei, Tat-Seng Chua

Environment

  • Anaconda 3

Create the environment from the provided .yaml file by running:

conda env create -f DEALRec.yaml

Usage

Data

The experimental data are in the './data' folder, including Games, MicroLens-50K, and Book.

🔴 Pruning

The code for data pruning, including the score calculation and the coverage-enhanced sample selection, is in './code/prune/'. You can prune the data by running:

python -u prune.py --data_name=$1 --model_name=$2 --lamda=$3 --k=$4 --log_name=$5 --gpu_id=$6

or by using prune.sh:

sh prune.sh <data_name> <surrogate_model_name> <lamda> <group_number> <log_name> <gpu_id>
  • The selected samples' indices will be saved in './code/prune/selected/' folder.
  • The explanation of hyper-parameters can be found in './code/prune/utils.py'.
  • The default hyper-parameter settings are detailed in './code/prune/hyper-parameters.txt'.

🌟 The surrogate model implemented here is SASRec. Note, however, that DEALRec is applicable to any other surrogate model, e.g., DCRec (see Section 4.3.2 of the paper).
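For intuition, the two-stage pruning (score every sample, then select with coverage across score groups) can be sketched roughly as below. This is an illustrative approximation only, not the repository's implementation; the function name, the even per-group quota, and the contiguous score grouping are all assumptions.

```python
import numpy as np

def coverage_enhanced_selection(scores, n_select, k):
    # Illustrative only: partition samples into k groups by score and
    # draw an even quota from each group so the selection covers the
    # whole score range rather than just the extremes.
    order = np.argsort(scores)            # sample indices, ascending score
    groups = np.array_split(order, k)     # k contiguous score groups
    per_group = n_select // k
    selected = []
    for g in groups:
        selected.extend(g[: min(per_group, len(g))].tolist())
    # top up from the remaining pool if integer division left a shortfall
    chosen = set(selected)
    leftovers = [int(i) for i in order if int(i) not in chosen]
    selected.extend(leftovers[: n_select - len(selected)])
    return sorted(selected[:n_select])
```

In the example below, `k=50` groups are used when selecting 1024 few-shot samples, matching the `<group_number>` argument of prune.sh.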

🔵 Few-shot Fine-tuning

Fine-tune the LLM-based recommender model (BIGRec) with the few-shot samples obtained from the pruning process. The code for fine-tuning is in 'code/finetune/'. Fine-tune BIGRec and get the results by running:

sh finetune.sh <data_name> 
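Conceptually, the indices saved in './code/prune/selected/' are used to subset the full training data into the few-shot set before fine-tuning. A minimal sketch of that step, assuming the indices are stored as a JSON list (the helper name and file format are hypothetical, not the repository's actual layout):

```python
import json

def load_few_shot(train_samples, index_file):
    # Hypothetical helper: read the sample indices saved by the pruning
    # step and keep only those training samples for fine-tuning.
    with open(index_file) as f:
        idx = json.load(f)
    return [train_samples[i] for i in idx]
```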

⚪ Examples

  1. Prune the data on Games:
cd ./code/prune/
sh prune.sh games SASRec 0.3 50 log 0
  2. Fine-tune BIGRec with the few-shot samples (1024 by default):
cd ./code/finetune/
sh finetune.sh games

Evaluation

The code and running scripts for evaluation are in the 'code/finetune/data/' folder.

Citation

If you find our work useful for your research, please consider citing:

@inproceedings{lin2024data,
  title={Data-efficient Fine-tuning for LLM-based Recommendation},
  author={Lin, Xinyu and Wang, Wenjie and Li, Yongqi and Yang, Shuo and Feng, Fuli and Wei, Yinwei and Chua, Tat-Seng},
  booktitle={Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)},
  year={2024}
}

License

NUS © NExT++
