https://arxiv.org/abs/2205.10563
```bibtex
@misc{yamamoto_modular_2022,
  title = {Modular Architecture Facilitates Noise-Driven Control of Synchrony in Neuronal Networks},
  author = {Yamamoto, Hideaki and Spitzner, F. Paul and Takemuro, Taiki and Buend{\'i}a, Victor and Morante, Carla and Konno, Tomohiro and Sato, Shigeo and {Hirano-Iwata}, Ayumi and Priesemann, Viola and Mu{\~n}oz, Miguel A. and Zierenberg, Johannes and Soriano, Jordi},
  year = {2022},
  eprint = {2205.10563},
  archiveprefix = {arXiv},
  primaryclass = {q-bio},
  doi = {10.48550/arXiv.2205.10563},
  url = {http://arxiv.org/abs/2205.10563},
}
```
Raw data, preprocessed recordings, and rendered movies can be downloaded here. (Note: experimental data will be published after acceptance.)
All dependencies should be installed automatically if you create a new conda environment from the repository directory.
```bash
conda env create --file environment.yml
conda activate modular_cultures
```
We provide interactive notebooks that serve as a good starting point; they showcase the models and link which parts of the code produce which plot. (Note: the backend needs refactoring for Revision 1.)
The repo is structured into folders as follows:
- `src/` contains the models for all simulations. In particular:
  - Merged and modular topologies using the axon-growth algorithm proposed by Orlandi et al.
  - Leaky integrate-and-fire neuron dynamics that run in Brian2 to generate artificial spiking data (see the first sketch after this list).
  - The mesoscopic model, where modules are treated as the smallest spatial unit.
- `run/` contains helpers to run the simulations from `src/` on a cluster, and to create parameter combinations / sweeps that can be computed in serial (see the second sketch after this list).
- `ana/` contains all post-processing. Foremost:
  - `paper_plots.py` is the high-level wrapper that creates our figures.
  - `process_conditions.py` analyses the raw spiking data from either experiments or simulations to create before-after comparisons.
  - `ndim_merge.py` analyses simulation data, sweeping different input strengths.
  - `ana_helper.py` contains all the low-level analysis functions that operate on spike times, importers of simulation and experimental data, and much more.
    - All analyses work on a nested dictionary, mostly called `h5f`. Each of those corresponds to one trial, and we attach all analysis results to this dict (see the third sketch after this list).
    - The idea is that later functions (such as plotting) can use previous results. Note that this happens live: although we "add" to the dictionaries, nothing is written to disk.
  - `plot_helper.py` provides various functions that take such an `h5f` dict and plot some aspect of it with consistent styling. Examples include timeseries, rasters, and distributions.
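To give a flavor of the spiking simulations, here is a minimal, self-contained leaky integrate-and-fire sketch in Brian2. All parameter values and the noise term are illustrative assumptions, not the ones used in `src/`:

```python
# Minimal LIF sketch in Brian2 with a white-noise drive (xi).
# All parameter values are assumptions, not those used in src/.
from brian2 import NeuronGroup, SpikeMonitor, run, ms, mV

tau = 20 * ms       # membrane time constant (assumed)
v_rest = -60 * mV   # resting potential (assumed)
v_thr = -45 * mV    # spike threshold (assumed)
sigma = 6 * mV      # noise amplitude (assumed)

# leaky membrane driven by white noise, integrated with Euler-Maruyama
G = NeuronGroup(
    100,
    "dv/dt = (v_rest - v) / tau + sigma * sqrt(2 / tau) * xi : volt",
    threshold="v > v_thr",
    reset="v = v_rest",
    method="euler",
)
G.v = v_rest

spikes = SpikeMonitor(G)
run(1000 * ms)
print(f"{spikes.num_spikes} spikes from {len(G)} neurons")
```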
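Conceptually, the sweep helpers in `run/` enumerate parameter combinations into a flat list of jobs. A rough sketch of the idea, with made-up script and argument names:

```python
# Hypothetical sketch of laying out a parameter sweep as a flat job list;
# the script name, arguments, and values are invented for illustration.
from itertools import product

rates = [35.0, 40.0, 45.0]   # example external input rates (Hz)
couplings = [0.02, 0.04]     # example coupling strengths
repetitions = range(3)       # trials per parameter combination

# one command line per job, so they can run in serial locally
# or be dispatched as individual cluster tasks
jobs = [
    f"python run_simulation.py --rate {r} --coupling {k} --rep {i}"
    for r, k, i in product(rates, couplings, repetitions)
]

for job in jobs:
    print(job)
```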
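Finally, a toy version of the `h5f` convention described above. The function and key names here are invented for illustration; the real importers and analysis functions live in `ana_helper.py`:

```python
# Toy illustration of the h5f convention: one nested dict per trial,
# analysis functions attach their results in place, and nothing is
# written to disk. Key and function names are invented for illustration.
import numpy as np

def load_trial():
    # stands in for one of the importers in ana_helper.py
    return {
        "data": {"spiketimes": np.sort(np.random.uniform(0, 60, size=500))},
        "ana": {},
    }

def find_rate(h5f, bin_size=1.0):
    # attach a derived result so that later functions can reuse it
    spikes = h5f["data"]["spiketimes"]
    bins = np.arange(0, spikes.max() + bin_size, bin_size)
    h5f["ana"]["rate"] = np.histogram(spikes, bins=bins)[0] / bin_size

h5f = load_trial()
find_rate(h5f)
print(h5f["ana"]["rate"][:5])  # results stay attached to the trial's dict
```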