This is a LlamaIndex multi-agent project using Workflows.
This example uses three agents to generate a blog post:
- a researcher that retrieves content via a RAG pipeline (see the sketch below),
- a writer that specializes in writing blog posts, and
- a reviewer that reviews the blog post.
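For the researcher, the RAG pipeline typically means loading a persisted vector index and querying it. The following is a minimal sketch, not this project's actual code; the `./storage` persist directory and the example query are assumptions:

```python
# Minimal sketch of the researcher's RAG step, assuming the embeddings created
# by `poetry run generate` are persisted to ./storage (an assumption, not taken
# from this project's code).
from llama_index.core import StorageContext, load_index_from_storage

storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)

query_engine = index.as_query_engine()
response = query_engine.query(
    "Collect background material for a blog post about LlamaIndex Workflows."
)
print(response)
```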
There are three different ways the agents can interact to reach their goal:
- Choreography - the agents decide themselves to delegate a task to another agent
- Orchestrator - a central orchestrator decides which agent should execute a task
- Explicit Workflow - a pre-defined workflow specific to the task is used to execute the tasks (a sketch of this variant follows the list)
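To give a flavor of the explicit-workflow variant, here is a minimal, hypothetical sketch using LlamaIndex Workflows. The event names, prompts, model choice, and the direct LLM call in the research step are assumptions; the project's actual workflow (and its RAG-backed researcher) lives in this repository:

```python
# Hypothetical sketch of a research -> write -> review pipeline with
# LlamaIndex Workflows; not this project's actual implementation.
import asyncio

from llama_index.core.workflow import Event, StartEvent, StopEvent, Workflow, step
from llama_index.llms.openai import OpenAI  # assumes OpenAI as the model provider

llm = OpenAI(model="gpt-4o-mini")


class ResearchDone(Event):
    notes: str


class DraftDone(Event):
    draft: str


class BlogPostWorkflow(Workflow):
    @step
    async def research(self, ev: StartEvent) -> ResearchDone:
        # The real project would query its RAG pipeline here instead of the bare LLM.
        notes = await llm.acomplete(f"Collect key facts about: {ev.topic}")
        return ResearchDone(notes=str(notes))

    @step
    async def write(self, ev: ResearchDone) -> DraftDone:
        draft = await llm.acomplete(
            f"Write a short blog post based on these notes:\n{ev.notes}"
        )
        return DraftDone(draft=str(draft))

    @step
    async def review(self, ev: DraftDone) -> StopEvent:
        reviewed = await llm.acomplete(f"Review and improve this blog post:\n{ev.draft}")
        return StopEvent(result=str(reviewed))


async def main():
    workflow = BlogPostWorkflow(timeout=120)
    final_post = await workflow.run(topic="LlamaIndex Workflows")
    print(final_post)


if __name__ == "__main__":
    asyncio.run(main())
```

Each step consumes one event type and emits the next, so the research → write → review order is fixed by the event chain rather than decided by the agents at runtime, which is the key difference from the choreography and orchestrator variants.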
First, set up the environment with Poetry:

Note: This step is not needed if you are using the dev-container.

```
poetry install
```
Then check the parameters that have been pre-configured in the `.env`
file in this directory. (E.g. you might need to configure an `OPENAI_API_KEY`
if you're using OpenAI as the model provider.)
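As an illustration only, the variables mentioned in this README could look like the following in `.env`; the file generated for this project may contain different or additional provider-specific settings:

```
# Required when using OpenAI as the model provider
OPENAI_API_KEY=<your-openai-api-key>

# Optional: choreography or orchestrator (the explicit workflow is the default)
# EXAMPLE_TYPE=choreography

# Optional: expose an API endpoint
# FAST_API=true
```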
Second, generate the embeddings of the documents in the `./data`
directory:

```
poetry run generate
```
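Under the hood, generating the embeddings boils down to reading the documents, building a vector index, and persisting it. Here is a hypothetical sketch, not the project's actual `generate` script; the `./storage` persist directory is an assumption:

```python
# Hypothetical sketch of an embedding-generation step: read the documents in
# ./data, build a vector index (embedding them in the process), and persist it.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="./storage")
```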
Third, run the agents in one command:

```
poetry run python main.py
```
By default, the example uses the explicit workflow. You can switch to another variant by setting the `EXAMPLE_TYPE`
environment variable to `choreography` or `orchestrator`.

To add an API endpoint, set the `FAST_API` environment variable to `true`.
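For example, assuming `EXAMPLE_TYPE` is read from the process environment at startup, you could run the choreography variant with `EXAMPLE_TYPE=choreography poetry run python main.py`.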
To learn more about LlamaIndex, take a look at the following resources:
- LlamaIndex Documentation - learn about LlamaIndex.
- Workflows Introduction - learn about LlamaIndex workflows.
You can check out the LlamaIndex GitHub repository - your feedback and contributions are welcome!