Course project for Embedded Computing Systems and SoC Hardware/Software Co-Design.
- Download the Google speech commands dataset.
- Put the dataset outside this project, specifically at `../speech/`.
- `git clone https://github.com/zYeoman/ML-KWS-for-FPGA`
- `cd` into ML-KWS-for-FPGA, run `make`, and run `kws` as shown in the commands below.
```sh
wget http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz
mkdir -p speech
tar xzvf speech_commands_v0.01.tar.gz -C speech
git clone https://github.com/zYeoman/ML-KWS-for-FPGA && cd ML-KWS-for-FPGA
make
# Run the CNN model on the first 10 test samples.
./kws cnn 10
# Run the CRNN model on all test samples.
./kws crnn
# Run CNN_Q (the quantized CNN).
./kws cnn_q
```
In Vivado HLS, use `kws(uint32_t*, int32_t*)`, `kws_q`, or `kws_crnn` as the top function, and use `test.cpp` as the test bench source.
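As a rough illustration of what C simulation with that top function involves, here is a minimal test-bench sketch in the spirit of `test.cpp`. It simply calls the `kws` variant directly and links against the implementation in `./src`; the buffer sizes, input packing, and the number of output classes are assumptions, not values taken from this repository.

```cpp
// Minimal HLS C-simulation test-bench sketch (assumed sizes and data layout).
// The real test.cpp feeds features derived from a wave file such as silence.wav.
#include <cstdint>
#include <cstdio>

// Top function selected in Vivado HLS; declared in the repo's headers.
void kws(uint32_t *in, int32_t *out);

int main() {
    static uint32_t in[250] = {0};  // assumed: packed input feature buffer
    static int32_t out[12] = {0};   // assumed: one score per keyword class
    kws(in, out);
    for (int i = 0; i < 12; ++i)
        std::printf("class %2d: score %d\n", i, out[i]);
    return 0;
}
```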
In Vivado, open the Xillinux block design and connect the IP core to Xillybus as shown below.
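Once the bitstream with the connected design is loaded under Xillinux, the host program exchanges data with the accelerator through Xillybus device files. The sketch below is only a guess at that flow: it assumes the default 32-bit stream devices `/dev/xillybus_write_32` and `/dev/xillybus_read_32` and arbitrary transfer sizes; the actual device names, word counts, and protocol are defined by the block design and the code in `./src`.

```cpp
// Hypothetical host-side sketch: stream input words to the FPGA over one
// Xillybus pipe and read the scores back over another. Device names and
// buffer sizes are assumptions, not this project's actual protocol.
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

int main() {
    int wr = open("/dev/xillybus_write_32", O_WRONLY);  // assumed default device
    int rd = open("/dev/xillybus_read_32", O_RDONLY);   // assumed default device
    if (wr < 0 || rd < 0) { std::perror("open"); return 1; }

    uint32_t in[250] = {0};  // assumed input size
    int32_t out[12] = {0};   // assumed output size

    // Send the input buffer; a robust version would loop on short writes.
    if (write(wr, in, sizeof(in)) < 0) { std::perror("write"); return 1; }
    // Read back the scores; a robust version would loop until all bytes arrive.
    if (read(rd, out, sizeof(out)) < 0) { std::perror("read"); return 1; }

    for (int i = 0; i < 12; ++i)
        std::printf("class %2d: score %d\n", i, out[i]);

    close(wr);
    close(rd);
    return 0;
}
```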
Repository layout:

- `./bit`: Bitstream
- `./Makefile`: Makefile
- `./silence.wav`: Test file
- `./include`: Headers and model parameters
- `./src`: Source code