mblondel/soft-dtw

Greedy speedup


Soft-DTW looks like the perfect solution for my deep-learning model. However, the speed is a major bottleneck in training (minibatches of 64 samples, with 2000 positions × 25 classes).

Would it be possible to add a parameter for greedy scoring that scales better in time?

For example, I never need alignments with more than a few insertions/deletions. Perhaps this can be achieved by controlling the maximum recursion depth?

I think the right way to do this would be to add a band constraint, as done in Fast Global Alignment Kernels by @marcocuturi. This would make it possible to compute distances only for pairs of observations that are not too far from the diagonal. It should be fairly straightforward, but we haven't gotten around to doing it yet.
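For illustration, here is a minimal sketch of what such a band constraint could look like: a Sakoe-Chiba-style band applied to the soft-DTW forward recursion. The `banded_soft_dtw` and `softmin` names, the `radius` parameter, and the overall structure are hypothetical, not this library's API; a real implementation would also need a matching banded backward pass for gradients.

```python
import numpy as np

def softmin(a, b, c, gamma):
    # Smoothed minimum used in the soft-DTW recursion:
    # softmin(x) = -gamma * log(sum(exp(-x / gamma))).
    vals = np.array([a, b, c]) / -gamma
    m = vals.max()
    if np.isneginf(m):
        return np.inf  # all three predecessors lie outside the band
    return -gamma * (m + np.log(np.exp(vals - m).sum()))

def banded_soft_dtw(D, gamma=1.0, radius=10):
    # D is an (m, n) matrix of pairwise distances between the two
    # sequences. Cells farther than `radius` from the (rescaled)
    # diagonal are never touched, so the cost drops from O(m * n)
    # to roughly O(radius * max(m, n)).
    m, n = D.shape
    R = np.full((m + 1, n + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, m + 1):
        center = i * n / m  # diagonal position, rescaled for unequal lengths
        lo = max(1, int(center - radius))
        hi = min(n, int(center + radius))
        for j in range(lo, hi + 1):
            R[i, j] = D[i - 1, j - 1] + softmin(
                R[i - 1, j], R[i, j - 1], R[i - 1, j - 1], gamma)
    return R[m, n]
```

With a band of radius r, only about 2r + 1 cells per row are evaluated instead of n, which is where the speedup over the full recursion comes from; alignments with only a few insertions/deletions, as in the use case above, stay well inside a small band.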

@ghannum You will probably be interested in PR #9.