# Retrieval Augmented Generation (RAG)

## Overview

This repository contains an introduction to Retrieval Augmented Generation (RAG) using the LangChain framework. For this project, I will use the Mixtral 8x7B open-source Large Language Model (LLM) and show how to "augment" its knowledge with user-specific (private) data. The project is divided into the following parts:

1. **RAG101**: A beginner-level introduction to RAG with LangChain. You will learn how to:
   - Load open-source models using the Hugging Face API in LangChain.
   - Prompt your loaded models.
   - Augment the LLM's knowledge with private data in a "naive" way.
   - Go to RAG101.
2. **RAG102**: This part introduces the key component of a conversation: memory. You will learn:
   - The different kinds of memory algorithms supported by LangChain.
   - How to add memory to retrievals in LangChain to enable conversation.
   - Go to RAG102.
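To make the "naive" augmentation idea in RAG101 concrete, here is a toy, dependency-free Python sketch (not the repository's LangChain code): private text chunks are scored against a question by simple word overlap, and the best match is injected into the prompt. The `score`, `build_prompt`, and sample chunks are made-up illustrations; real pipelines use embeddings and a vector store.

```python
import re


def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def score(chunk: str, question: str) -> int:
    """Count how many words the chunk shares with the question."""
    return len(tokens(chunk) & tokens(question))


def build_prompt(chunks: list[str], question: str) -> str:
    """Retrieve the most relevant chunk and prepend it to the prompt."""
    best_chunk = max(chunks, key=lambda c: score(c, question))
    return f"Context: {best_chunk}\n\nQuestion: {question}\nAnswer:"


# Hypothetical "private" data the base LLM has never seen.
private_data = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm.",
]

prompt = build_prompt(private_data, "What is the refund policy?")
print(prompt)
```

The LLM then answers from the injected context rather than from its training data alone — that is the entire trick behind naive RAG.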
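The memory idea in RAG102 can likewise be sketched without LangChain. The toy `BufferMemory` class below (a hypothetical name, mirroring only the simplest buffer-style strategy among the memory algorithms LangChain supports) records every exchange and replays the full history into the next prompt, which is what lets a stateless LLM "remember" earlier turns.

```python
class BufferMemory:
    """Toy buffer-style conversation memory: store every turn verbatim."""

    def __init__(self) -> None:
        self.turns: list[tuple[str, str]] = []

    def save(self, human: str, ai: str) -> None:
        """Record one human/AI exchange."""
        self.turns.append((human, ai))

    def render(self) -> str:
        """Replay the stored history as prompt text."""
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)


memory = BufferMemory()
memory.save("Hi, I'm Ada.", "Hello Ada! How can I help?")
memory.save("What did I say my name was?", "You said your name is Ada.")

# The next prompt carries the whole conversation so far, so the model
# can resolve references like "my name" from earlier turns.
prompt = memory.render() + "\nHuman: Thanks!\nAI:"
print(prompt)
```

Buffer memory grows with every turn; the other strategies covered in RAG102 exist mainly to keep this history within the model's context window.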

## Usage

1. Set up the environment

```shell
mkdir RAG
python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
```

2. Clone the repository

```shell
git clone https://github.com/Ibrahim-Ola/RAG.git
cd RAG
```

3. Install the source code in editable mode

```shell
pip install -e .
```

4. Deactivate the environment

After running the experiments, you can deactivate the virtual environment by running the command below.

```shell
deactivate
```