A lightweight neural network inferencing engine written in C++. This library was designed with the intention of being used in real-time systems, specifically real-time audio processing.
Currently supported layers:
- dense
- GRU
- LSTM
- Conv1D
- MaxPooling
- BatchNorm
Currently supported activations:
- tanh
- ReLU
- Sigmoid
- SoftMax
- ELU
For a complete reference of the available functionality, see the API docs. For more information on the design and purpose of the library, see the reference paper.
If you are using RTNeural as part of an academic work, please cite the library as follows:
@article{chowdhury2021rtneural,
    title={RTNeural: Fast Neural Inferencing for Real-Time Systems},
    author={Jatin Chowdhury},
    year={2021},
    journal={arXiv preprint arXiv:2106.03037}
}
RTNeural is capable of taking a neural network that has already been trained, loading the weights from that network, and running inference. Some simple examples are available in the examples/ directory.
Neural networks are typically trained using Python libraries such as TensorFlow or PyTorch. Once you have trained a neural network using one of these frameworks, you must "export" the network weights to a json file so that RTNeural can read them. An implementation of the export process for a TensorFlow model is provided in python/model_utils.py, and can be used as follows.
# import dependencies
import tensorflow as tf
from tensorflow import keras
from model_utils import save_model
# create TensorFlow model
model = keras.Sequential()
...
# train model
model.fit(...)
# export model weights
save_model(model, 'model_weights.json')
Next, you can create an inferencing engine in C++ directly from the exported json file:
#include <RTNeural.h>
...
std::ifstream jsonStream("model_weights.json", std::ifstream::binary);
auto model = RTNeural::json_parser::parseJson<double>(jsonStream);
Before running inference, it is recommended to "reset" the state of your model (if the model has state).
model->reset();
Then, you may run inference as follows:
double input[] = { 1.0, 0.5, -0.1 }; // set up input vector
double output = model->forward(input); // compute output
The code shown above will create the inferencing engine dynamically at run-time. If the model architecture is fixed at compile-time, it may be preferable to use RTNeural's API for defining an inferencing engine type at compile-time, which can significantly improve performance.
// define model type
RTNeural::ModelT<double, 8, 1,
RTNeural::DenseT<double, 8, 8>,
RTNeural::TanhActivationT<double, 8>,
RTNeural::DenseT<double, 8, 1>
> modelT;
// load model weights from json
std::ifstream jsonStream("model_weights.json", std::ifstream::binary);
modelT.parseJson(jsonStream);
modelT.reset(); // reset state
double input[] = { 1.0, 0.5, -0.1 }; // set up input vector
double output = modelT.forward(input); // compute output
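Since RTNeural targets real-time audio, inference is usually run once per sample inside an audio callback. The sketch below illustrates that pattern; the function and buffer names are placeholders, and it assumes a model with a single input and a single output (the input array must always match the model's input size).
// Minimal sketch: run inference once per sample over a block of audio.
// Works with either the run-time or compile-time models shown above,
// provided the model expects one input and produces one output.
template <typename Model>
void processBlock(Model& model, double* buffer, int numSamples)
{
    for(int n = 0; n < numSamples; ++n)
    {
        double input[] = { buffer[n] };   // one input sample
        buffer[n] = model.forward(input); // overwrite with the model output
    }
}
With the run-time engine returned by the json parser, this would be called as processBlock(*model, buffer, numSamples); with the compile-time engine, pass modelT directly. In either case, remember to reset the model state beforehand, as shown above.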
RTNeural is built with CMake, and the easiest way to link is to include RTNeural as a submodule:
...
add_subdirectory(RTNeural)
target_link_libraries(MyCMakeProject LINK_PUBLIC RTNeural)
If you are trying to use RTNeural in a project that does not use CMake, please see the instructions below.
RTNeural supports three backends: Eigen, xsimd, and the C++ STL. You can choose your backend by passing either -DRTNEURAL_EIGEN=ON, -DRTNEURAL_XSIMD=ON, or -DRTNEURAL_STL=ON to your CMake configuration. By default, the Eigen backend will be used. Alternatively, you may select your choice of backend in your CMake configuration as follows:
set(RTNEURAL_XSIMD ON CACHE BOOL "Use RTNeural with this backend" FORCE)
add_subdirectory(modules/RTNeural)
The Eigen backend typically has the best performance for larger networks, while smaller networks may perform better with xsimd. However, it is recommended to measure the performance of your network with all the backends available on your target platform to ensure optimal performance. For more information, see the benchmark results.
RTNeural also has experimental support for Apple's Accelerate framework (-DRTNEURAL_ACCELERATE=ON). Please note that the Accelerate backend can only be used when compiling for Apple devices, and does not currently support defining compile-time inferencing engines.
Note that you must abide by the licensing rules of whichever backend library you choose.
If you would like to build RTNeural with the AVX SIMD extensions, you may run CMake with the -DRTNEURAL_USE_AVX=ON flag. Note that this flag will have no effect when compiling for platforms that do not support AVX instructions.
To build RTNeural's unit tests, run cmake -Bbuild -DBUILD_TESTS=ON, followed by cmake --build build. To run the full testing suite, run ./build/rtneural_tests all. For more information, run ./build/rtneural_tests --help.
To build the performance benchmarks, run cmake -Bbuild -DBUILD_BENCH=ON, followed by cmake --build build --config Release. To run the layer benchmarks, run ./build/rtneural_layer_bench <layer> <length> <in_size> <out_size>. To run the model benchmark, run ./build/rtneural_model_bench.
To build the RTNeural examples, run:
cmake -Bbuild -DBUILD_EXAMPLES=ON
cmake --build build --config Release
The example programs will then be located in build/examples_out/, and may be run from there.
An example of using RTNeural within a real-time audio plugin can be found on GitHub.
If you wish to use RTNeural in a project that doesn't use CMake, it can be included as a header-only library with a few extra steps; a short sanity-check sketch follows the list below.
- Add a compile-time definition to define a default byte alignment for RTNeural. For most cases, this definition will be one of either:
  - RTNEURAL_DEFAULT_ALIGNMENT=16
  - RTNEURAL_DEFAULT_ALIGNMENT=32
- Add a compile-time definition to select a backend. If you wish to use the STL backend, then no definition is required. Otherwise, this definition should be one of the following:
  - RTNEURAL_USE_EIGEN=1
  - RTNEURAL_USE_XSIMD=1
- Add the necessary include paths for your chosen backend. This path will be one of either:
  - <repo>/modules/Eigen
  - <repo>/modules/xsimd/include/xsimd
It may also be worth checking out the example Makefile.
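As a quick way to verify that the compile definitions from the steps above are reaching the compiler, a small check along the following lines can be built with your project. This is only a sketch: it relies solely on the macros listed above and makes no other assumptions about the library.
#include <cstdio>
#include <RTNeural.h>

int main()
{
    // Report which backend the compile-time definitions have selected.
#if RTNEURAL_USE_EIGEN
    std::printf("RTNeural backend: Eigen\n");
#elif RTNEURAL_USE_XSIMD
    std::printf("RTNeural backend: xsimd\n");
#else
    std::printf("RTNeural backend: STL\n");
#endif

    // Report the default byte alignment, if one has been defined.
#ifdef RTNEURAL_DEFAULT_ALIGNMENT
    std::printf("Default alignment: %d bytes\n", (int) RTNEURAL_DEFAULT_ALIGNMENT);
#endif
    return 0;
}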
Contributions to this project are most welcome! Currently, there is considerable need for the following improvements:
- Better implementation of convolutional layers:
- Implement more options (grouping, stride, etc.)
- Implement Conv2D
- Support for exporting/loading PyTorch models
- More robust support for exporting/loading TensorFlow models
- Support for more activation layers
- Better test coverage
- Any changes that improve overall performance
General code maintenance and documentation are always appreciated as well! Note that if you are implementing a new layer type, it is not required to provide support for all the backends, though it is recommended to at least provide a "fallback" implementation using the STL backend.
Shout out to the following individuals for their important contributions:
- wayne-chen: Softmax activation layer and general API improvements
- hollance: RTNeural logo
- stepanmk: Eigen Conv1D layer optimization
RTNeural is currently being used by several audio plugins and other projects:
- Chow Centaur: A guitar pedal emulation plugin, using a real-time recurrent neural network.
- Chow Tape Model: An analog tape emulation, using a real-time dense neural network.
- BYOD: A guitar distortion plugin containing several machine learning-based effects.
- GuitarML: GuitarML plugins use machine learning to model guitar amplifiers and effects.
- rt-neural-lv2: A headless lv2 plugin using RTNeural to model guitar pedals and amplifiers.
- cppTimbreID: An audio feature extraction library.
- MLTerror15: A deep-learning simulator of the Orange Tiny Terror amplifier, built with recurrent neural networks.
- 4000DB-NeuralAmp: Neural emulation of the pre-amp section from the Akai 4000DB tape machine.
- ToobAmp: Guitar effect plugins for the Raspberry Pi.
If you are using RTNeural in one of your projects, let us know and we will add it to this list!
RTNeural is open source, and is licensed under the BSD 3-clause license.
Enjoy!