Allow `continue_run` for `MultiModel`
jajcayn opened this issue · 0 comments
Currently, calling `continue_run` on a `MultiModel` raises a `NotImplementedError`.
For a single-node model, it's easy - I've done this manually in notebooks.
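For reference, a minimal sketch of that manual workaround. The `ALNNode` import path, the `model_instance` and `initial_state` attribute names, and the expected shape of the initial state are all assumptions here, not confirmed API:

```python
import numpy as np
from neurolib.models.multimodel import MultiModel
from neurolib.models.multimodel.builder.aln import ALNNode  # assumed import path

model = MultiModel(ALNNode())
model.params["duration"] = 1000.0  # ms
model.run()

# reuse the state at the last time point as the initial state of the next run
last_state = np.concatenate([var[:, -1] for var in model.state.values()])
model.model_instance.initial_state = last_state  # attribute name is an assumption
model.run()  # the second chunk now starts where the first one ended
```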
For networks, it's more complicated. The initial state for `MultiModel` is a single large vector of shape `(number of state variables, time)`; e.g. for 3 nodes with 2 state variables each this is `(6, time)`. However, the `model.state` saved in neurolib for this case would be a `dict` of length 2, where each variable is `(3, time)`. Therefore we need to unpack this `dict` into a suitable format and assert the correct order of state variables as per node index.
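A hedged sketch of that unpacking step. The helper name is made up, and the interleaving order (all state variables of node 0 first, then node 1, and so on) is an assumption that must match whatever order the backend actually expects:

```python
import numpy as np

def unpack_state_dict(state, num_nodes):
    """Stack a {var_name: (num_nodes, time)} dict into a
    (num_nodes * num_vars, time) array ordered by node index."""
    per_node = []
    for node_idx in range(num_nodes):
        for var in state.values():  # dict insertion order = state-variable order
            per_node.append(var[node_idx, :])
    return np.stack(per_node)

# e.g. 3 nodes with 2 state variables each -> shape (6, time)
state = {"x": np.zeros((3, 100)), "y": np.ones((3, 100))}
assert unpack_state_dict(state, num_nodes=3).shape == (6, 100)
```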
Some implementation notes so I don't forget:
- need to allow the time vector coming from the `MultiModel` backend integration to start at an arbitrary float, not at 0 (super easy, ~2 lines); also assert that the final time vector `model.t` after a continued run is continuous, without holes larger than `sampling_dt` (see the sketch after this list)
- trigger recompilation for the `numba` backend if the parameters changed during the continued-run pipeline, e.g. different inputs (easy, just check the equivalence of the params dicts and recompile if they differ; needs to be tested; see the sketch after this list)
- probably the implementation would start with an `if`: if the `MultiModel` is just a `Node` it's easy, while for a `Network` we need to unpack the `model.state` dict
- don't forget to test the new feature!
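Hedged sketches for the first two notes above (for the third, the `Network` branch could reuse the unpacking helper sketched earlier). Only `model.t` and `sampling_dt` come from the issue text; the function names and the tolerance are made up for illustration:

```python
import numpy as np

def assert_continuous_time(t, sampling_dt, atol=1e-9):
    """First note: check that the concatenated time vector model.t has no
    holes larger than sampling_dt after a continued run."""
    gaps = np.diff(t)
    assert np.all(gaps <= sampling_dt + atol), "model.t has holes > sampling_dt"

def needs_recompilation(old_params, new_params):
    """Second note: return True if any parameter changed since the last
    numba compile; a simple equality check over the params dicts."""
    if old_params.keys() != new_params.keys():
        return True
    return any(
        not np.array_equal(old_params[k], new_params[k]) for k in old_params
    )
```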
Yes, this would be a step closer to chunkwise integration on `MultiModel`.