Home
Hello and welcome to the wiki for Thorn.jl, a spiking neural network simulation library written in Julia. This library uses purely event-driven simulations, so it might be considerably slower than most other simulation libraries. However, there are some interesting things in the pipeline such as GPU and FPGA computation (for those who have one).
For now, Thorn.jl has not been registered with the Julia General registry, so it is necessary to clone this repository and install from the local copy. You can use git clone https://github.com/TenzinCHW/Thorn.jl.git to clone the repository, then start the Julia REPL, press ] to enter the package manager, and run the following command.
add <path-to-Thorn.jl>
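Equivalently, if you prefer to stay at the normal julia> prompt, the same local install can be done through Julia's standard Pkg API (this is plain Julia tooling, not specific to Thorn):
using Pkg
Pkg.add(path="<path-to-Thorn.jl>") # or Pkg.develop(path=...) if you plan to edit the source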
First, you'll want to load Thorn in the script that uses it. You can do this with using Thorn. This is similar to from Thorn import * in Python.
The basic unit in Thorn is the Cortex, which is a system of neuron populations connected together. To create one, you have to pass in the parameters for the neuron populations that will comprise the Cortex. There are two types of NeuronPopulation. The first is called InputPopulation, which converts an array of numbers (the time-series input data) into an array of spikes. The second is called ProcessingPopulation, which converts an incoming array of spikes into another array of spikes. For our example, we will use the PoissonInpPopulation, which produces spikes using a Poisson process whose rate is proportional to the input value, as our InputPopulation, and the LIFPopulation, which implements the leaky integrate-and-fire (LIF) model, as our ProcessingPopulation.
In practice, you can have as many InputPopulations and ProcessingPopulations as you like in a cortex, where the array of pairs conn describes which populations are connected to each other. For N InputPopulations and M ProcessingPopulations, the ID numbering runs from 1 to N for the InputPopulations and continues from N+1 to N+M for the ProcessingPopulations (a sketch illustrating this numbering follows the example below). I couldn't really come up with a much better way to specify the connectivity.
using Thorn
inp_sz = 5 # number of neurons in the input population
proc_sz = 10 # number of neurons in the processing population
spiketype = LIFSpike # type of spike to propagate through the network
weight_update = stdp! # use the STDP weight-update algorithm
learning_rate = 0.1
input_neuron_types = [(PoissonInpPopulation, inp_sz)]
proc_neuron_types = [(LIFPopulation, proc_sz, weight_update, learning_rate)]
conn = [(1=>2, rand, weight_update, learning_rate)] # connects the input population (ID 1) to the LIF population (ID 2)
cortex = Cortex(input_neuron_types, proc_neuron_types, conn, spiketype)
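To make the ID numbering from the previous section concrete, here is a minimal sketch of a cortex with two input populations and two LIF populations. All sizes, connections and variable names here (inputs2, procs2, conn2, cortex2) are made up purely to illustrate the numbering: the inputs get IDs 1 and 2, and the processing populations get IDs 3 and 4.
# Hypothetical, separate from the running example above:
# N = 2 input populations (IDs 1 and 2), M = 2 processing populations (IDs 3 and 4)
inputs2 = [(PoissonInpPopulation, 5), (PoissonInpPopulation, 8)]
procs2 = [(LIFPopulation, 10, stdp!, 0.1), (LIFPopulation, 4, stdp!, 0.1)]
conn2 = [(1=>3, rand, stdp!, 0.1), # input population 1 -> LIF population 3
         (2=>4, rand, stdp!, 0.1)] # input population 2 -> LIF population 4
cortex2 = Cortex(inputs2, procs2, conn2, LIFSpike)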
For more information about the optional parameters for creating a Cortex, please visit the API reference.
In order to train the created Cortex, we use the function process_sample!. By convention, an exclamation mark at the end of a Julia function name signals that the function mutates its arguments (i.e. it is impure); the language itself does not enforce this. process_sample! takes a Cortex object, some data (an array of arrays of values), as well as a maximum value (the input is scaled by the largest value an input can take in order to determine the rate of the Poisson process producing the spikes). By default, this maximum is 1.0.
num_sample = 100 # number of time steps per sample
data = [rand(inp_sz, num_sample)] # rand generates arbitrary data: one inp_sz x num_sample array for our single input population
process_sample!(cortex, data)
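The call above relies on the default maximum of 1.0, which happens to match the range of rand. If your data is not already scaled that way, process_sample! also accepts the maximum value described above; here is a minimal sketch, assuming it is passed as the third positional argument (check the API reference for the exact signature):
maxval = maximum(maximum.(data)) # largest value across all input arrays (assumption: passed positionally)
process_sample!(cortex, data, maxval)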
As explained above, data is an array of arrays, specifically a 1D array of 2D arrays. Each inner array corresponds to the input for the respective InputPopulation. So the number of elements in data should be equal to N, the number of input populations; within each inner array, the size of the first dimension is the number of neurons in the respective input population, while the second dimension represents the time domain of the data.
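Following that layout, the data for a hypothetical cortex with two input populations (say of sizes 5 and 8, over 100 time steps; these numbers are made up for illustration) would be built like this:
# Hypothetical shapes for a cortex with N = 2 input populations
data2 = [rand(5, 100), # input for InputPopulation 1: 5 neurons x 100 time steps
         rand(8, 100)] # input for InputPopulation 2: 8 neurons x 100 time steps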
Source code: https://github.com/TenzinCHW/Thorn.jl
Issue tracker: https://github.com/TenzinCHW/Thorn.jl/issues
This project is licensed under GPLv3.