| title | emoji | colorFrom | colorTo | sdk | pinned |
| --- | --- | --- | --- | --- | --- |
| GPT4TurboApp | 📊 | blue | purple | docker | false |
If you need an introduction to `git`, or information on how to set up API keys for the tools we'll be using, check out Interactive Dev Environment for LLM Development, which has everything you'll need to get started in this repository!

If you need an introduction to accessing the OpenAI API like a developer, or to containerizing and deploying the application to a Hugging Face Space, check out Beyond ChatGPT - Build Your First LLM Application, which has everything you'll need to ship and share this local application build!
In this repository, we'll walk you through the steps to create a Large Language Model (LLM) application that leverages the latest models from OpenAI: GPT-4 Turbo and DALL·E 3.
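As a rough sketch of the two OpenAI API calls such an application builds on (not the repo's actual code; the model strings and prompts below are assumptions you should verify against `app.py`), the openai Python SDK (v1+) exposes them like this:

```python
# Hedged sketch, not the repo's app.py: one GPT-4 Turbo chat completion and
# one DALL·E 3 image generation using the openai Python SDK (v1+).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Chat completion with GPT-4 Turbo (model string is an assumption).
chat = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Describe a cozy reading nook."}],
)
print(chat.choices[0].message.content)

# Image generation with DALL·E 3.
image = client.images.generate(
    model="dall-e-3",
    prompt="A cozy reading nook with warm lighting, watercolor style",
    size="1024x1024",
    n=1,
)
print(image.data[0].url)
```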
For a step-by-step YouTube video walkthrough, watch: How to Build an LLM Application using GPT-4 Turbo and DALL·E 3
- Clone this repo:

  ```bash
  git clone https://github.com/AI-Maker-Space/GPT4AppWithDALLE3.git
  ```

- Navigate inside this repo:

  ```bash
  cd GPT4AppWithDALLE3
  ```

- Install the packages required for this Python environment from `requirements.txt`:

  ```bash
  pip install -r requirements.txt
  ```

- Export your OpenAI API key to your local environment:

  ```bash
  export OPENAI_API_KEY=XXXX
  ```

- Let's try deploying it locally. Make sure you're in the Python environment where you installed Chainlit and OpenAI, then run the app using Chainlit (this may take a minute to start; a sketch of what a minimal Chainlit app can look like follows this list):

  ```bash
  chainlit run app.py -w
  ```
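The repository ships its own `app.py`; purely as a hedged illustration of the moving parts, a minimal Chainlit handler that forwards each user message to GPT-4 Turbo might look like the sketch below (the model string and message handling are assumptions, not the repo's actual code):

```python
# Hypothetical minimal Chainlit app (NOT the repo's app.py): a sketch of how a
# handler can forward each user message to a GPT-4 Turbo chat completion.
import chainlit as cl
from openai import AsyncOpenAI

client = AsyncOpenAI()  # picks up OPENAI_API_KEY from the environment


@cl.on_message
async def on_message(message: cl.Message):
    # Model string is an assumption; check the repo's app.py for the real one.
    response = await client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{"role": "user", "content": message.content}],
    )
    await cl.Message(content=response.choices[0].message.content).send()
```

The `-w` flag tells Chainlit to watch for file changes and hot-reload the app as you edit it.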
Great work! Let's see if we can interact with our chatbot to answer questions and generate images!