neurolib-dev/neurolib

Allow `continue_run` for `MultiModel`

jajcayn opened this issue

Currently, it raises a NotImplementedError.
For a single-node model this is easy - I've already done it manually in notebooks.

For networks, it's more complicated. The initial state for MultiModel is a single large array of shape (number of state variables, time); e.g., for 3 nodes with 2 state variables each this is (6, time). However, the model.state that neurolib saves for this case is a dict of length 2, where each variable has shape (3, time). We therefore need to unpack this dict into the stacked format and assert the correct ordering of state variables per node index.
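
To make the ordering concern concrete, here's a minimal sketch of the unpacking, assuming model.state is a dict mapping each state-variable name to an array of shape (n_nodes, time) and that the backend expects rows stacked node-by-node (both the node-major ordering and the helper name are assumptions for illustration, not actual neurolib API):

```python
import numpy as np

def pack_state_dict(state, var_order):
    """Hypothetical helper: flatten a neurolib-style state dict into the
    (n_nodes * n_vars, time) array the MultiModel backend expects.

    Assumes each state[var] has shape (n_nodes, time) and that the backend
    orders rows node-by-node: [node0_var0, node0_var1, node1_var0, ...].
    """
    n_nodes, n_time = state[var_order[0]].shape
    packed = np.empty((n_nodes * len(var_order), n_time))
    for node_idx in range(n_nodes):
        for var_idx, var in enumerate(var_order):
            packed[node_idx * len(var_order) + var_idx] = state[var][node_idx]
    return packed

# e.g. 3 nodes, 2 state variables each -> packed shape (6, time)
state = {"x": np.random.rand(3, 100), "y": np.random.rand(3, 100)}
print(pack_state_dict(state, ["x", "y"]).shape)  # (6, 100)
```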

Some implementation notes so I don't forget:

  • need to allow the time vector coming from the MultiModel backend integration to start at an arbitrary float rather than at 0 (super easy, ~2 lines); also assert that the final time vector model.t after a continued run is continuous, with no holes larger than sampling_dt (see the sketch after this list)
  • trigger recompilation for the numba backend if the parameters changed during the continued run, e.g. different inputs (easy: check the equivalence of the params dicts and recompile if they differ, as in the sketch after this list; needs to be tested)
  • the implementation would probably start with a branch: if the MultiModel is just a Node, it's easy; when it's a Network, we need to unpack the model.state dict (as in the unpacking sketch above)
  • don't forget to test the new feature!
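
To pin down the first two bullets, a rough sketch of the continuity assertion and the parameter-change check. This is a sketch under assumptions: attribute names like model._last_compiled_params and _recompile_numba_functions are placeholders, not actual neurolib internals.

```python
import numpy as np

def assert_continuous_time(t, sampling_dt, atol=1e-9):
    """Assert the concatenated time vector has no holes larger than sampling_dt."""
    gaps = np.diff(t)
    assert np.all(gaps <= sampling_dt + atol), (
        f"time vector has a hole of {gaps.max()} > sampling_dt={sampling_dt}"
    )

def params_changed(old_params, new_params):
    """Compare two params dicts; numpy array values need element-wise comparison."""
    if old_params.keys() != new_params.keys():
        return True
    for key, old_val in old_params.items():
        new_val = new_params[key]
        if isinstance(old_val, np.ndarray) or isinstance(new_val, np.ndarray):
            if not np.array_equal(old_val, new_val):
                return True
        elif old_val != new_val:
            return True
    return False

# inside the continued-run pipeline (names are placeholders):
# if params_changed(model._last_compiled_params, model.params):
#     model._recompile_numba_functions()
# ... integrate the next chunk ...
# assert_continuous_time(model.t, sampling_dt)
```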

Yes, this would be a step closer to chunkwise integration for MultiModel.