This is the NLP section of the robotic arm project.
The goal is to build a real-time speech recognition model based on OpenAI Whisper.
The first stage is to build the audio I/O and the audio-to-text step; this is implemented in audio_recog.py.
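A minimal sketch of what this stage could look like, assuming the `sounddevice`, `soundfile`, and `openai-whisper` packages; the function names, file name, and model size are placeholders, not the actual contents of audio_recog.py.

```python
# Hypothetical sketch of stage 1: record a short clip, then transcribe it
# with Whisper. The real audio_recog.py may be structured differently.
import sounddevice as sd
import soundfile as sf
import whisper

SAMPLE_RATE = 16000  # Whisper expects 16 kHz mono audio

def record(seconds: float, path: str = "command.wav") -> str:
    """Record `seconds` of microphone audio and save it as a WAV file."""
    audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1)
    sd.wait()  # block until the recording finishes
    sf.write(path, audio, SAMPLE_RATE)
    return path

def transcribe(path: str) -> str:
    """Run Whisper on the recorded file and return the recognized text."""
    model = whisper.load_model("base")  # model size is a placeholder choice
    result = model.transcribe(path)
    return result["text"].strip()

if __name__ == "__main__":
    print(transcribe(record(5)))
```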
The second stage is information extraction (IE); see the stage2 folder.
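As a rough illustration of the kind of extraction this stage performs, the sketch below pulls a source and target square out of a spoken command with a regular expression; the pattern and function name are assumptions, and the real stage2 code may use a different scheme (e.g. piece names or SAN).

```python
# Illustrative stage-2 sketch: extract (from_square, to_square) from text
# such as "move the pawn from e2 to e4".
import re
from typing import Optional, Tuple

SQUARE = r"[a-h][1-8]"
MOVE_PATTERN = re.compile(rf"({SQUARE})\s*to\s*({SQUARE})")

def extract_move(text: str) -> Optional[Tuple[str, str]]:
    """Return (from_square, to_square) if the text contains a move, else None."""
    match = MOVE_PATTERN.search(text.lower())
    return (match.group(1), match.group(2)) if match else None

print(extract_move("Please move the pawn from e2 to e4"))  # ('e2', 'e4')
```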
The third stage is visualization and building a virtual chessboard; see the stage3 folder.
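One possible way to maintain the virtual board is the `python-chess` package, as sketched below; this is an assumption for illustration, and the stage3 folder may implement the board differently.

```python
# Keep a virtual chessboard in memory and apply extracted moves to it.
import chess

board = chess.Board()

def apply_move(from_sq: str, to_sq: str) -> bool:
    """Apply a move given as two squares; return False if it is illegal."""
    move = chess.Move.from_uci(from_sq + to_sq)
    if move in board.legal_moves:
        board.push(move)
        return True
    return False

apply_move("e2", "e4")
print(board)  # ASCII rendering of the current position
```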
The fourth stage is to set up the API for the whole project, possibly via an interactive website.
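A minimal sketch of what that API could look like, assuming Flask; the endpoint name, payload format, and regex are placeholders rather than the final design.

```python
# Hypothetical stage-4 endpoint: accept transcribed text, return the move.
import re
from flask import Flask, jsonify, request

app = Flask(__name__)
MOVE_PATTERN = re.compile(r"([a-h][1-8])\s*to\s*([a-h][1-8])")

@app.route("/command", methods=["POST"])
def command():
    """Accept transcribed text and return the extracted move, if any."""
    text = request.get_json(force=True).get("text", "").lower()
    match = MOVE_PATTERN.search(text)
    return jsonify({"move": list(match.groups()) if match else None})

if __name__ == "__main__":
    app.run(debug=True)  # serves on http://127.0.0.1:5000 by default
```

It could be exercised with, for example, `curl -X POST http://127.0.0.1:5000/command -H "Content-Type: application/json" -d '{"text": "e2 to e4"}'`.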
nlp.py is the combination of all models so far and should always be kept up to date with the latest version.
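A hypothetical outline of how nlp.py might chain the stages together; the module and function names below mirror the sketches above and are not the actual interfaces in this repository.

```python
# Assumed interfaces, mirroring the earlier sketches; not the real modules.
from audio_recog import record, transcribe   # stage 1 (assumed interface)
from stage2.extract import extract_move      # stage 2 (assumed interface)
from stage3.board import apply_move          # stage 3 (assumed interface)

def run_pipeline(seconds: float = 5.0) -> None:
    """Capture a spoken command and push the resulting move to the virtual board."""
    text = transcribe(record(seconds))
    move = extract_move(text)
    if move is None:
        print(f"No move found in: {text!r}")
        return
    ok = apply_move(*move)
    print(f"Move {move} {'applied' if ok else 'rejected as illegal'}")
```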