This demo showcases a Gradio application that converts spoken audio into a visual knowledge graph. Users can record their voice, and the application transcribes the audio, extracts relevant nodes and relationships, and visualizes them in a Neo4j knowledge graph.
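The extraction step maps transcript text to (subject, relation, object) triples before they are written to Neo4j. The real app delegates this to an LLM; as a minimal illustration only, here is a toy `extract_triples` helper (the function name and the three-word-sentence heuristic are ours, not the app's):

```python
import re

def extract_triples(transcript: str):
    """Toy stand-in for the LLM extraction step: pull
    (subject, relation, object) triples from sentences of the
    form "X <verb> Y". The real app uses OpenAI for this."""
    triples = []
    for sentence in re.split(r"[.!?]", transcript):
        words = sentence.strip().split()
        if len(words) == 3:
            subj, rel, obj = words
            triples.append((subj, rel.upper(), obj))
    return triples

print(extract_triples("Alice knows Bob. Bob likes Python."))
# [('Alice', 'KNOWS', 'Bob'), ('Bob', 'LIKES', 'Python')]
```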
- Anaconda or Miniconda
Clone this repository to your local machine using:
git clone https://github.com/Reynold97/VTKG.git
cd VTKG
Create a Conda environment using:
conda create -p ./env python=3.10 -y
Activate the newly created environment:
conda activate ./env
Install all required packages listed in requirements.txt:
pip install -r requirements.txt
You will need a running Neo4j instance. One option is to create a free Neo4j database in the Aura cloud service. You can also run the database locally with the Neo4j Desktop application or in a Docker container. To start a local Docker container, run:
docker run \
--name neo4j \
-p 7474:7474 -p 7687:7687 \
-d \
-e NEO4J_AUTH=neo4j/pleaseletmein \
-e NEO4J_PLUGINS=\[\"apoc\"\] \
neo4j:latest
If you are using the Docker container, wait a few seconds for the database to start before running the app.
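Rather than guessing how long to wait, you can poll the Bolt port until Neo4j accepts connections. A minimal sketch (the `wait_for_bolt` helper is ours, not part of the app):

```python
import socket
import time

def wait_for_bolt(host: str = "localhost", port: int = 7687,
                  timeout: float = 30.0) -> bool:
    """Poll Neo4j's Bolt port until it accepts TCP connections,
    or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True  # port is open; the database is up
        except OSError:
            time.sleep(0.5)  # not ready yet; retry shortly
    return False
```

Call `wait_for_bolt()` once after `docker run` returns, and only start the app if it reports `True`.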
Create a .env file in the root directory with the following structure:
DEEPGRAM_API_KEY=
OPENAI_API_KEY=
NEO4J_USER=
NEO4J_PASS=
NEO4J_URL=
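The app presumably loads these values at startup (for example via python-dotenv). As an illustration, a minimal parser for this KEY=VALUE format (`load_env` is a hypothetical stand-in, not the app's actual loader):

```python
import os

def load_env(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines into a dict and mirror them
    into os.environ, skipping blank lines and '#' comments.
    A small stand-in for python-dotenv's load_dotenv()."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
            # Real environment variables take precedence over the file.
            os.environ.setdefault(key.strip(), value.strip())
    return values
```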
Run the application with:
python app.py
This will start the Gradio app, which is accessible via a web browser. Follow the on-screen instructions to record your voice and generate the knowledge graph.
- Record Audio: Click the 'Record' button and speak into your microphone.
- Submit: Submit the audio recording for processing.
- View Graph: After processing, go to your Neo4j instance to inspect the knowledge graph.
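Under the hood, each extracted relationship presumably becomes a Cypher MERGE so that repeated entities are deduplicated in the graph. A sketch of how a triple could be turned into a parameterized statement (the `Entity` label and helper name are our assumptions, not the app's actual schema):

```python
def triple_to_cypher(subj: str, rel: str, obj: str):
    """Build a parameterized Cypher MERGE for one triple.
    Relationship types cannot be parameterized in Cypher, so the
    (sanitized) type is interpolated into the query string."""
    rel_type = "".join(c for c in rel.upper() if c.isalnum() or c == "_")
    query = (
        "MERGE (a:Entity {name: $subj}) "
        "MERGE (b:Entity {name: $obj}) "
        f"MERGE (a)-[:{rel_type}]->(b)"
    )
    return query, {"subj": subj, "obj": obj}

query, params = triple_to_cypher("Alice", "knows", "Bob")
# query: MERGE (a:Entity {name: $subj}) MERGE (b:Entity {name: $obj}) MERGE (a)-[:KNOWS]->(b)
```

With the neo4j Python driver, each pair would then be executed as `session.run(query, params)`.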