One critical factor limiting the size of neural cognitive models is the time required to simulate them. To reduce simulation time, specialized hardware is often used. However, such hardware can be costly, not readily available, or require specialized software implementations that are difficult to maintain. Here, we present an algorithm that optimizes the computational graph of the Nengo neural network simulator, allowing simulations to run more quickly on commodity hardware. This is achieved by merging identical operations into single operations and restructuring the accessed data into larger blocks of sequential memory. In this way, a speed-up of up to 6.8× is obtained. While this does not beat the specialized OpenCL implementation of Nengo, the optimization is available on any platform that can run Python, whereas the OpenCL implementation supports fewer platforms and can be difficult to install.
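The core idea can be illustrated with a minimal NumPy sketch. This is not Nengo's actual implementation; the array sizes and the doubling operation are arbitrary placeholders. The point is that applying one operation per operator to many small, scattered arrays is replaced by a single vectorized operation over one contiguous block of memory:

```python
import numpy as np

# Many identical small operations, each touching its own scattered array
# (placeholder sizes; the real optimizer works on Nengo's operator graph).
n_ops, op_size = 1000, 16
small = [np.ones(op_size) for _ in range(n_ops)]

def run_separate(arrays):
    # One Python-level call per operator; poor cache locality.
    return [2.0 * a for a in arrays]

# The same work merged into a single operation over one contiguous block.
merged = np.concatenate(small)

def run_merged(block):
    # A single vectorized call over sequential memory.
    return 2.0 * block

# Both approaches compute the same result.
assert np.allclose(np.concatenate(run_separate(small)), run_merged(merged))
```

The merged version avoids per-operator Python overhead and reads memory sequentially, which is where the speed-up comes from.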
Running the source code requires Python. Benchmarks were run with Python 3.4.2, while the data analysis and plotting code was run with Python 3.6.1. The code might also run with Python 2.7, but this was not tested.
The complete list of dependencies required to run all parts, with the exact versions used, is given in `requirements.txt`.
It is best to use a newly created virtualenv for the installation.
```
git clone --recursive https://github.com/ctn-archive/gosmann-frontiers2017.git
cd gosmann-frontiers2017
pip install -r requirements.txt
pip install .
cd spaun2.0/_spaun/arms/three_link
pip install .
```
To run the benchmarks, run `psy-doit` from the root folder of the project. This will take quite some time (up to a few days). The data will be stored in the files:

- `psy-work/memory/result.npz`
- `psy-work/memory_spaun/result.npz`
- `psy-work/time_cconv/result.npz`
- `psy-work/time_nback/result.npz`
- `psy-work/time_spaun/result.npz`
Precomputed `time_*/result.npz` data files can be found in the `data` folder. The memory data files are not included because of their larger size. To generate just the memory data files, run `psy-doit memory memory_spaun`.
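The `result.npz` files are standard NumPy archives and can be inspected with `numpy.load`. The sketch below creates a stand-in archive so it is self-contained; the key name `times` is a made-up placeholder, not necessarily one of the keys the benchmarks actually store:

```python
import os
import tempfile

import numpy as np

# Stand-in archive; in practice you would load e.g.
# psy-work/time_cconv/result.npz instead.
path = os.path.join(tempfile.mkdtemp(), "result.npz")
np.savez(path, times=np.array([1.2, 1.1, 1.3]))  # "times" is a placeholder key

with np.load(path) as data:
    print(data.files)            # names of the stored arrays
    print(data["times"].mean())  # summarize one of them
```

Listing `data.files` first is the easiest way to discover what a given `result.npz` contains.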
The reduction in operators for a model can be printed with `python scripts/log_reduction.py <model>`, where `<model>` can be one of `circ_conv`, `lorenz`, `nback`, or `spaun`. This script supports additional arguments such as `--neuron-type`. A list of all arguments can be printed with `python scripts/log_reduction.py -h`.
The `data` folder contains text files with the output for different models with different neuron types.
The total number of neurons in a model can be printed with `python scripts/n_neurons <model>`.
The analysis and plotting code is contained in Jupyter notebooks, which can be opened with:

```
cd notebooks
jupyter notebook
```
The notebooks require the corresponding data files in the `data` folder. The time data is contained in this repository, but the memory data needs to be generated first and copied to that directory.