Maghoumi/pytorch-softdtw-cuda
Fast CUDA implementation of (differentiable) soft dynamic time warping for PyTorch
Python · MIT license
Issues
Does it support time-series classification tasks?
#34 opened by CAI23sbP - 4
Why do I get a negative loss value?
#32 opened by FZH1802 - 1
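Several issues in this list ask about negative loss values. This is expected behavior for soft-DTW: the soft-min operator is always less than or equal to the true minimum, so the accumulated cost can drop below zero even for identical sequences. A minimal pure-Python sketch (not this repo's CUDA implementation, just an illustration of the recurrence) that reproduces the effect:

```python
import math

def soft_min(values, gamma):
    # Smoothed minimum: -gamma * log(sum(exp(-v / gamma))).
    # This is always <= the true minimum, which is why soft-DTW can go negative.
    return -gamma * math.log(sum(math.exp(-v / gamma) for v in values))

def soft_dtw(x, y, gamma=1.0):
    # Minimal O(n*m) soft-DTW on 1-D sequences with squared-distance cost.
    n, m = len(x), len(y)
    INF = float("inf")
    R = [[INF] * (m + 1) for _ in range(n + 1)]
    R[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            R[i][j] = cost + soft_min(
                (R[i - 1][j], R[i][j - 1], R[i - 1][j - 1]), gamma
            )
    return R[n][m]

# Identical sequences: classic DTW distance is 0, but soft-DTW is negative
# because the soft-min smooths over (and "credits") all alignment paths.
print(soft_dtw([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], gamma=1.0))
```

The gap from zero shrinks as `gamma` approaches 0, where soft-DTW recovers classic DTW.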
Grid size (4) < 2 * SM count (164) will likely result in GPU under utilization due to low occupancy.
#21 opened by 18445864529 - 2
output the alignments of two sequences
#31 opened by XIDIANPQZ - 0
NvvmSupportError: libNVVM cannot be found.
#28 opened by zxg-code - 0
sympy error
#29 opened by gg4u - 1
Negative value
#27 opened by NingkangYang - 3
CUDA_ERROR_INVALID_VALUE
#26 opened by Cram3r95 - 0
package in pypi or conda
#20 opened by toinsson - 1
Regarding GPU memory footprint
#19 opened by netw0rkf10w - 1
GPU mem still be affected when using CPU Soft DTW
#13 opened by v-nhandt21 - 2
Comparison with the Cython implementation
#12 opened by v-nhandt21 - 7
Example fails on GPU
#14 opened by weidenka - 1
Can DTW be negative?
#10 opened by yongjunshin - 0
AssertionError when dims increase
#15 opened by tricky61 - 2
Batch of variable sequence length
#11 opened by daidedou - 4
Loss drops below 0?
#9 opened by dyt0414 - 2
Value for bandwidth pruning?
#4 opened by hadaev8 - 3
Steps chosen / choice penalty
#2 opened by cfrancesco - 2
*obj* doesn't implement the cuda array interface. at cuda.as_cuda_array(D.detach())
#1 opened by KimUyen