SpiNNakerManchester/sPyNNaker8

A convolution connector

Closed this issue · 7 comments

Some users (including Simon and Peter, but not just them) want a connector between populations where the (initial) weights of the individual connections are defined by a convolution kernel. It'd be a reasonable candidate for generating on machine too, though that'd require passing information about how to locate the neurons in space. (pyNN has stuff for that, but we don't currently make good use of it.)

The DistanceDependentProbabilityConnector can't do this at all; it lacks enough information.
The DisplacementDependentProbabilityConnector (which we don't support) probably could, but it (probably) can't be easily hoisted to be generated on the machine.
A specialized connector would be much better in terms of user-perceived performance.
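
To make the request concrete, this is roughly what users have to do by hand today: compute the kernel-defined weights themselves and feed them through a plain FromListConnector. Everything below (the sizes, the kernel values, the kernel_connection_list helper) is purely illustrative, not anything that exists in sPyNNaker:

```python
import numpy as np
import pyNN.spiNNaker as sim  # sPyNNaker8's PyNN interface

# Illustrative sizes and kernel, not anything from this issue:
WIDTH, HEIGHT = 8, 8
KERNEL = np.array([[0.1, 0.2, 0.1],
                   [0.2, 0.5, 0.2],
                   [0.1, 0.2, 0.1]])


def kernel_connection_list(width, height, kernel, delay=1.0):
    """Build (pre, post, weight, delay) tuples so each post neuron gets
    the kernel centred on the pre neuron at the same (x, y) position,
    with both sheets flattened row-major."""
    kh, kw = kernel.shape
    conns = []
    for y in range(height):
        for x in range(width):
            post = y * width + x
            for ky in range(kh):
                for kx in range(kw):
                    sx, sy = x + kx - kw // 2, y + ky - kh // 2
                    if 0 <= sx < width and 0 <= sy < height:
                        conns.append((sy * width + sx, post,
                                      kernel[ky, kx], delay))
    return conns


sim.setup(timestep=1.0)
pre = sim.Population(WIDTH * HEIGHT, sim.SpikeSourcePoisson(rate=10.0))
post = sim.Population(WIDTH * HEIGHT, sim.IF_curr_exp())
sim.Projection(pre, post,
               sim.FromListConnector(kernel_connection_list(WIDTH, HEIGHT,
                                                            KERNEL)),
               synapse_type=sim.StaticSynapse())
sim.run(100)
sim.end()
```

A dedicated connector would let the user supply just the shapes and the kernel, and would let us build the connections (or generate them on machine) without materialising this list on the host.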

A possible workaround for the "space" requirements is simply to define the connector as working in a 2D space described by arguments to the connector, e.g. width, height, convolution_width, convolution_height. Of course it would have to raise an exception if the number of source neurons is not equal to width x height, or if the target is not the "correct size" (to be determined, as I can't think how this would be defined just now).
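
Purely as an illustration (one possible convention, not a decided design), the "correct size" check could borrow the "same"/"valid" padding terminology from image convolution; the helper names below are made up:

```python
def expected_post_shape(width, height, kernel_width, kernel_height,
                        padding="same"):
    """One possible definition of the target population's "correct size".

    "same"  : post sheet matches the pre sheet (kernel centred on each
              pre neuron, truncated at the edges).
    "valid" : kernel must fit entirely inside the pre sheet.
    """
    if padding == "same":
        return width, height
    if padding == "valid":
        return width - kernel_width + 1, height - kernel_height + 1
    raise ValueError("unknown padding mode: " + padding)


def check_shapes(pre_size, post_size, width, height,
                 kernel_width, kernel_height, padding="same"):
    """Raise if the populations don't match the declared 2D space."""
    if pre_size != width * height:
        raise ValueError("source population must have width x height neurons")
    post_w, post_h = expected_post_shape(width, height,
                                         kernel_width, kernel_height, padding)
    if post_size != post_w * post_h:
        raise ValueError("target population is not the expected size")
```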

I would not recommend using the DisplacementDependentProbabilityConnector if it does not do exactly what is required. Also, if we are going to write a new Connector, let's write one we have an actual use case for, not one whose use cases must be so few that there was never a need to document it fully.

To add to this, it's not clear that any other pyNN simulator actually uses (or even implements) the DisplacementDependentProbabilityConnector. I can't see it at first glance in the NEST source code, for example. Something we ought to ask Andrew Davison, I guess...

The source is a little clearer. It could implement a convolution kernel… if one were to write sufficiently convoluted code, of course. And there's no way we'll ever manage to safely hoist it onto the machine, as the way it is currently written doesn't guarantee that the delta vectors in the matrix are kept separate.

I think it would be far better to ignore what pyNN does and do our own. We actually have a use case.

A start was made on this (and on on-machine generation as well): https://github.com/SpiNNakerManchester/sPyNNaker/blob/synapse_expander_on_chip_kernel_connector/spynnaker/pyNN/models/neural_projections/connectors/kernel_connector.py, so it might not take too much effort for someone to pick that up at some point soon.
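
For anyone picking it up, usage would presumably look something like the sketch below; the constructor arguments (shape_pre, shape_post, shape_kernel, weight_kernel) are my guess at what such a connector needs, so check kernel_connector.py on that branch for the actual signature:

```python
import numpy as np
import pyNN.spiNNaker as sim

# Import path taken from the branch linked above; the constructor
# arguments below are assumptions, not the confirmed signature.
from spynnaker.pyNN.models.neural_projections.connectors.kernel_connector \
    import KernelConnector

sim.setup(timestep=1.0)
pre = sim.Population(16 * 16, sim.SpikeSourcePoisson(rate=10.0))
post = sim.Population(16 * 16, sim.IF_curr_exp())

kernel = np.full((3, 3), 0.2)  # example 3x3 weight kernel
sim.Projection(
    pre, post,
    KernelConnector(shape_pre=(16, 16), shape_post=(16, 16),
                    shape_kernel=(3, 3), weight_kernel=kernel),
    synapse_type=sim.StaticSynapse())
```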

The KernelConnector was merged in SpiNNakerManchester/sPyNNaker#645, so this can be closed for now.