SelfishGene/neuron_as_deep_net

neuron module not found

Closed this issue · 5 comments

Hi,

The neuron module is referenced from the file simulate_L5PC_and_create_dataset.py. Is this a PyPI module, or a file that was left out?

Best regards,
C


neuron is indeed a PyPI module of the NEURON simulation environment.

neuron github repo: https://github.com/neuronsimulator/nrn
recommended introductory NEURON tutorial: https://github.com/orena1/NEURON_tutorial
official NEURON with python tutorial: https://neuron.yale.edu/neuron/static/docs/neuronpython/index.html
NEURON help forum: https://www.neuron.yale.edu/phpBB/index.php
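A quick way to confirm the package is available (an illustrative sketch, not part of this repo's code) is to probe for it with the standard library before running simulate_L5PC_and_create_dataset.py:

```python
import importlib.util

# Check whether the "neuron" package (NEURON's Python bindings) is
# installed in the current environment; if not, it can be installed
# from PyPI with `pip install neuron`.
spec = importlib.util.find_spec("neuron")
if spec is None:
    print("neuron not installed; run: pip install neuron")
else:
    print("neuron found at", spec.origin)
```
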

Thanks :) If possible, may I ask for a clarification on the paper here? This is hard to tell before fully understanding the code, and I also do not have access to the paper (no institutional account set up yet). How exactly is the simulation done, so as to ensure that the complexity of the input-output function of the "single neuron" is not predominantly due to the input, nor predominantly due to the task setting? That is, one might expect a single neuron from a deep artificial neural network with many inputs to exhibit "high complexity" if its output pattern is compared against a 2D image representation of its temporal sequence of inputs/activations and a convolutional neural network is then applied to recognize it. (I assume this is the setting used in the paper?)
Also, defining complexity as the smallest deep neural network that can properly fit the neuron assumes that a deep neural network is a good and efficient representation of that function in the first place. For instance, fitting a sine wave with a neural net is not as easy as fitting it with the sine-wave definition itself, A*sin(theta + omega). In other words, perhaps the biological neuron is simply defined by strange non-linear functions, possibly due to biological constraints, and thereby exhibits redundant complexity when fitted by deep neural nets.
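To make the sine-wave point concrete (purely illustrative, not from the paper): a function that a network would need many parameters to approximate is recovered exactly when fitted in its own parametric form, since A*sin(x + phi) expands to a*sin(x) + b*cos(x) and is therefore linear in (a, b):

```python
import numpy as np

# Illustrative sketch: recover amplitude and phase of a noiseless sine
# wave by ordinary linear least squares in the basis [sin(x), cos(x)],
# rather than by training a neural network approximator.
x = np.linspace(0, 4 * np.pi, 200)
A_true, phi_true = 1.5, 0.7
y = A_true * np.sin(x + phi_true)

# A*sin(x + phi) = (A*cos(phi))*sin(x) + (A*sin(phi))*cos(x)
basis = np.column_stack([np.sin(x), np.cos(x)])
(a, b), *_ = np.linalg.lstsq(basis, y, rcond=None)

A_hat = np.hypot(a, b)       # amplitude from the two coefficients
phi_hat = np.arctan2(b, a)   # phase from the two coefficients
print(A_hat, phi_hat)
```

Two parameters suffice here, whereas a generic network representation of the same function would use far more, which is the commenter's "redundant complexity" concern.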

The complexity of a neuron can also lie in the role it plays within an entire circuit. Examples are the interpretation of neural networks by finding special neurons that "encode abstract concepts", like the Halle Berry neuron (https://phys.org/news/2005-06-single-cell-recognition-halle-berry-neuron.html); this effect has also been found in current deep artificial neural networks (https://openai.com/blog/multimodal-neurons/). Have you made any inquiry into circuitry, or other levels of complexity?

Perhaps these questions are addressed in the discussion sections of the paper, which I unfortunately do not have access to. Is it possible to request access from you?

Many thanks and best regards,
C

I also do not have access to the paper...(no institution account setup yet)

The open access version of this paper is listed in README:

Open Access (slightly older) version of Paper: https://www.biorxiv.org/content/10.1101/613141v2

Thanks!

Hi @IpsumDominum,
You have lots of impressions and questions before having read the paper.
I strongly suggest you read the paper first; I believe a large fraction of your questions will be answered there.

As @tuliren mentioned, there is an open access, slightly older version of the paper.
Also, during this September the following link should grant free access to the newer version of the paper as well:
https://authors.elsevier.com/a/1dYq23BtfGzkFQ (this link is also available in the README)