Neuro-Tokens

DrWatson install instructions

This code base uses the Julia language and DrWatson to make a reproducible scientific project named Neuro-Tokens.

To (locally) reproduce this project, do the following:

  1. Download this code base. Notice that raw data are typically not included in the git-history and may need to be downloaded independently.
  2. Open a Julia console and do:
    julia> using Pkg
    julia> Pkg.add("DrWatson") # install globally, for using `quickactivate`
    julia> Pkg.activate("path/to/this/project")
    julia> Pkg.instantiate()
    

This will install all necessary packages so that the scripts run out of the box, including correctly finding local paths.
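
For reference, scripts inside a DrWatson project typically begin by activating it with @quickactivate, which is what makes path helpers such as datadir resolve correctly. The snippet below is a minimal sketch, not code from this repository:

    using DrWatson
    @quickactivate "Neuro-Tokens"   # activate this project's environment from anywhere inside it

    # DrWatson path helpers now resolve relative to the project root
    raw_dir = datadir("exp_raw")    # e.g. where the dataset files go (see the Data section below)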

We use the MEFK.jl library for all simulations and data fitting.

Estimating entropy with CDM in Octave

First install Octave. Then start Octave and install the statistics, struct, and parallel packages with

pkg install -forge statistics
pkg install -forge struct
pkg install -forge parallel
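
Note that pkg install only installs a package; it still has to be loaded in each Octave session (pkg load statistics struct parallel) before use, unless the entropy scripts load the packages themselves.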

Data

We used two datasets for the experiments: pvc3 (Blanche, 2009) and the dataset from Lamberti et al. (2022); see the Citations section below for links.

Experiments and analysis

To run experiments on the above datasets, place their files in the data/exp_raw directory and run train_network.jl with the following command:

julia train_network.jl <binsz> <maxiter> <numsplit> <dev> <batchsize>
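
For example, a single run with the heatmap settings listed below might look like this (the bin size of 1000 microseconds and device ID 0 are illustrative choices, not fixed values):

    julia train_network.jl 1000 100 1 0 10000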

analyze.jl is for plotting histograms of Hamming distances between memorized patterns and the originals, among other analyses.
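
For context, the Hamming distance here is simply the number of positions at which a memorized binary pattern differs from the original. A minimal Julia sketch (illustrative only, not code from analyze.jl):

    # number of differing entries between two equal-length binary patterns
    hamming(a, b) = count(a .!= b)

    original  = Bool[1, 0, 1, 1, 0]
    memorized = Bool[1, 1, 1, 0, 0]
    hamming(original, memorized)    # returns 2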

train_network.jl is the main script for fitting MPN models with MEF for a given bin size and window size.

extract_blanche.jl and extract_joost.jl are for extracting data from the respective datasets into JLD files.
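
The extraction scripts write JLD files that the training code can read back. A minimal sketch of that round trip using JLD.jl is shown below (the file name, variable name, and placeholder data are illustrative assumptions):

    using JLD

    spikes = rand(Bool, 100, 50)              # placeholder for a binarized spike matrix
    save("spikes.jld", "spikes", spikes)      # write the variable "spikes" to a JLD file
    loaded = load("spikes.jld", "spikes")     # read it back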

Heatmaps

For these experiments, binsz ranged from 500 to 6000 microseconds, maxiter was set to 100, numsplit was set to 1, dev is the GPU device ID, and batchsize was set to 10000. If the GPU runs out of memory, lower batchsize or use smaller window sizes.
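
One way to drive such a sweep is a small Julia loop over bin sizes. This is only a sketch under the settings above (the 500-microsecond step and device ID 0 are assumptions, not the project's actual workflow):

    # sweep bin sizes for the heatmap experiments
    for binsz in 500:500:6000
        run(`julia train_network.jl $binsz 100 1 0 10000`)
    end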

Entropy vs window size

We chose a fixed binsz and varied winsz from 1 up to some large value, typically up to the timescale of interest for the chosen bin size.

Citations

Blanche, Tim (2009). Multi-neuron recordings in primary visual cortex. CRCNS.org. http://dx.doi.org/10.6080/K0MW2F2J

Lamberti, M., Hess, M., Dias, I., et al. (2022). Maximum entropy models provide functional connectivity estimates in neural networks. Sci Rep 12, 9656. https://doi.org/10.1038/s41598-022-13674-4