philipperemy/keras-tcn

Reuse computations for online testing

Closed this issue · 3 comments

Hi! Let's say that we want to use this TCN implementation in a real system that produces data at each time-step. To get real-time results, we make one prediction per time-step, that is, one forward pass over the newest sample together with the previous ones, up to the TCN's receptive field/memory.

If we repeat this for every time-step, we also repeat many convolution computations from the past.
Is there any way to cache the results of past computations so they can be reused in future steps? This would speed up online inference and allow faster real-time processing.
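To see the scale of the redundancy, here is a back-of-the-envelope sketch. The kernel size and dilations below are illustrative, not taken from any particular model; the receptive-field formula assumes a single-stack TCN whose residual blocks each contain two dilated causal convolutions, which is the layout keras-tcn uses by default:

```python
kernel_size = 3
dilations = [1, 2, 4, 8]

# Receptive field of a single-stack TCN with two dilated causal
# convolutions per residual block (keras-tcn style layout).
receptive_field = 1 + 2 * (kernel_size - 1) * sum(dilations)  # 61

# Naive online inference: one full forward pass per incoming sample.
# Every conv layer recomputes all positions in the window, so each
# step costs roughly layers * receptive_field * kernel_size MACs.
conv_layers = 2 * len(dilations)
naive_ops_per_step = conv_layers * receptive_field * kernel_size

# Incremental inference would only compute the newest output of each
# layer, reusing cached past activations for the dilated taps.
incremental_ops_per_step = conv_layers * kernel_size

print(naive_ops_per_step // incremental_ops_per_step)  # == receptive_field
```

So, under these assumptions, caching would cut the per-step convolution work by a factor equal to the receptive field.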

Thanks in advance.

@AlbertoSabater yes it's not possible at the moment. But it definitely makes sense. Related to #69

If the input data all shifts one index, I'm not sure which convolutions stay the same... Seems to me like none of them will result in the same output.
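That is true of a fixed-length window that shifts (the zero padding moves with it), but if one feeds the growing sequence instead, every past output of a causal convolution stays the same, which is exactly what makes caching possible. A quick numpy check of this point, using a hand-rolled dilated causal convolution (not keras-tcn code):

```python
import numpy as np

def causal_conv(x, w, dilation=1):
    """Dilated causal 1-D convolution: output[t] depends only on x[<= t]."""
    pad = dilation * (len(w) - 1)
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[k] * xp[t + pad - k * dilation] for k in range(len(w)))
        for t in range(len(x))
    ])

rng = np.random.default_rng(0)
x = rng.standard_normal(10)
w = rng.standard_normal(3)

y_old = causal_conv(x, w, dilation=2)
y_new = causal_conv(np.append(x, rng.standard_normal()), w, dilation=2)

# Appending a sample changes nothing in the past: the first 10 outputs
# are identical, so they could be cached and reused.
print(np.allclose(y_new[:10], y_old))  # True
```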

It really makes sense. Even though all TCN implementations discuss the relationship between real-time processing and causal convolution, no such real-time version exists. I find the implementation really challenging, but it is possible in theory. The main concern is efficiency: a whole-sequence input can take advantage of parallelism, whereas a one-sample-at-a-time design has far less parallelism.
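The caching idea can be sketched as a streaming layer that keeps a small ring buffer of past inputs, similar in spirit to the fast-WaveNet generation trick. This is an assumption-laden toy (a single channel, hand-rolled weights, not keras-tcn's API), but it shows that a per-sample step can reproduce the batch causal convolution exactly:

```python
import numpy as np

class StreamingCausalConv1D:
    """One dilated causal conv layer processing one sample per call,
    caching only the past inputs its dilated taps need.
    A sketch of the idea, not part of keras-tcn."""

    def __init__(self, weights, dilation):
        self.w = np.asarray(weights)          # shape: (kernel_size,)
        self.dilation = dilation
        # Ring buffer covering the layer's local receptive field,
        # zero-initialised (equivalent to causal left zero padding).
        self.buf = np.zeros(dilation * (len(weights) - 1) + 1)

    def step(self, x_t):
        self.buf = np.roll(self.buf, -1)
        self.buf[-1] = x_t
        # Taps at the newest sample and at dilated offsets into the past.
        taps = self.buf[::-1][:: self.dilation][: len(self.w)]
        return float(np.dot(self.w, taps))

def causal_conv(x, w, dilation):
    """Batch dilated causal convolution, for comparison."""
    pad = dilation * (len(w) - 1)
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[k] * xp[t + pad - k * dilation] for k in range(len(w)))
        for t in range(len(x))
    ])

rng = np.random.default_rng(1)
x = rng.standard_normal(20)
w = rng.standard_normal(3)

layer = StreamingCausalConv1D(w, dilation=4)
stream = np.array([layer.step(s) for s in x])

print(np.allclose(stream, causal_conv(x, w, 4)))  # True
```

Stacking one such buffered layer per dilated convolution would give an online TCN whose per-step cost is independent of the receptive field. The efficiency concern above still applies: each `step` is sequential, so the parallelism of a whole-sequence forward pass is lost.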