Fine-tuning with Retrieval Augmentation with llama or any open source model #9503
Replies: 1 comment
🤖 Hey @Manikanta5112! Nice to see you back in our little corner of the code world. Hope this day finds you in top coding shape!

Yes, fine-tuning with Retrieval Augmentation has been implemented using the Llama model. The core idea is to train the language model (LLM) to make better use of the context returned by a given retriever, or to ignore it entirely when it is not relevant. This approach comes from the RA-DIT paper: https://arxiv.org/abs/2310.01352. A notebook demonstrating it is available in the LlamaIndex repository.

To implement fine-tuning with Retrieval Augmentation in the LlamaIndex repository, you would start by generating a fine-tuning dataset over your documents:

```python
from llama_index.finetuning.cross_encoders.dataset_gen import generate_ce_fine_tuning_dataset, generate_synthetic_queries_over_documents

# Generate synthetic queries, then build a cross-encoder fine-tuning dataset.
questions = generate_synthetic_queries_over_documents(documents)
ce_dataset_list = generate_ce_fine_tuning_dataset(documents, questions)
```

You can find more details in the llama_index/finetuning/cross_encoders/dataset_gen.py file.

I hope this helps! If you have any more questions, feel free to ask.
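From here, a minimal sketch of a possible next step is to fine-tune a cross-encoder re-ranker on the generated dataset. This assumes the legacy llama_index 0.9.x API, where `CrossEncoderFinetuneEngine` lives under `llama_index.finetuning.cross_encoders.cross_encoder`; the exact import path and constructor arguments may differ in your installed version.

```python
# Hedged sketch: fine-tune a cross-encoder on the dataset generated above.
# Assumes the legacy llama_index 0.9.x API; verify against your installed version.
from llama_index.finetuning.cross_encoders.cross_encoder import (
    CrossEncoderFinetuneEngine,
)

finetune_engine = CrossEncoderFinetuneEngine(
    dataset=ce_dataset_list,  # output of generate_ce_fine_tuning_dataset above
    epochs=2,
    batch_size=8,
)
finetune_engine.finetune()
```

The fine-tuned cross-encoder can then be plugged in as a re-ranker over retrieved nodes at query time.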
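Since the original question is about Llama and other open-source models, here is a rough sketch of RA-DIT-style training-data construction: retrieved context is prepended to most questions so the model learns to use it, while a fraction of examples are left context-free so it also learns to answer without (or ignore) retrieved text. `retriever` and `qa_pairs` are assumed inputs rather than anything defined above: any LlamaIndex retriever and any list of (question, answer) pairs will do, and the JSONL output format is just one option for downstream open-source fine-tuning stacks.

```python
# Hedged sketch of RA-DIT-style data construction for an open-source Llama model.
# `retriever` is assumed to be a LlamaIndex retriever (e.g. index.as_retriever()),
# and `qa_pairs` is a hypothetical list of (question, answer) tuples you already have.
import json

def build_radit_examples(qa_pairs, retriever, with_context_ratio=0.8):
    """Prepend retrieved chunks to most questions so the model learns to use the
    context; leave some questions bare so it learns to answer without it."""
    examples = []
    for i, (question, answer) in enumerate(qa_pairs):
        if i % 10 < int(with_context_ratio * 10):
            nodes = retriever.retrieve(question)
            context = "\n\n".join(n.node.get_content() for n in nodes)
            prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        else:
            prompt = f"Question: {question}\nAnswer:"
        examples.append({"prompt": prompt, "completion": " " + answer})
    return examples

# Write a JSONL file that an open-source fine-tuning stack
# (e.g. HuggingFace TRL / PEFT with a Llama 2 base model) can consume.
with open("radit_finetune.jsonl", "w") as f:
    for ex in build_radit_examples(qa_pairs, retriever):
        f.write(json.dumps(ex) + "\n")
```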
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
Hi Community,
Has anyone tried or had experience implementing fine-tuning with Retrieval Augmentation with Llama or any other open-source model? If so, I would really appreciate a heads-up on how you did it.