deepgram/kur

CUDA_ERROR_NO_DEVICE

mahi19 opened this issue · 1 comment

After switching my backend to tensorflow-gpu, the following error pops up:

```
c:\kur-master\examples>kur train speech.yml
[WARNING 2018-01-12 02:19:14,252 kur.supplier.speechrec:465] Inferring vocabulary from data set.
[WARNING 2018-01-12 02:19:21,490 kur.supplier.speechrec:465] Inferring vocabulary from data set.
     Total wall-clock time: 607h 49m 33s
  Training wall-clock time: 591h 00m 23s
Validation wall-clock time: 16h 45m 43s
     Batch wall-clock time: 590h 55m 12s
2018-01-12 02:19:51.189738: E C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_driver.cc:406] failed call to cuInit: CUDA_ERROR_NO_DEVICE
Python package "magic" could not be loaded, possibly because system library "libmagic" could not be found. We are falling back on our own heuristics.
Python package "magic" could not be loaded, possibly because system library "libmagic" could not be found. We are falling back on our own heuristics.
```

Although TensorFlow itself detects the GPU, Kur somehow does not pick it up.

I also set skip_check in the backend section, but that does not seem to help:

```
Setting up the backend.

backend:
  name: keras
  backend: tensorflow
  skip_check: yes
```
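As a sanity check (a diagnostic sketch only, assuming the TensorFlow 1.x Python API that matches the build shown in the log above, not anything Kur-specific), the devices TensorFlow itself can see can be listed independently of Kur:

```python
# List the devices TensorFlow reports, independent of Kur.
# Uses the TF 1.x device_lib helper; adjust if your TF version differs.
from tensorflow.python.client import device_lib

gpus = [d.name for d in device_lib.list_local_devices() if d.device_type == 'GPU']
print('GPUs visible to TensorFlow:', gpus)
```

If that list comes back empty from the same Windows command prompt, the problem is in the TensorFlow/CUDA setup rather than in Kur.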
I would be thankful if someone could help.



Kur does not support Windows at all. The CUDA support in utils/cuda.py is hard-coded for Linux...
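For illustration only (this is a hedged sketch, not Kur's actual utils/cuda.py, and the helper name cuda_device_count is invented here), a device probe written against the CUDA driver API via ctypes could avoid the Linux-only assumption by loading nvcuda.dll on Windows and libcuda.so on Linux:

```python
import ctypes
import platform

def cuda_device_count():
    """Return the number of CUDA devices the driver reports, or 0 on failure."""
    system = platform.system()
    # The driver library has different names per platform.
    names = ['nvcuda.dll'] if system == 'Windows' else ['libcuda.so', 'libcuda.so.1']
    loader = ctypes.WinDLL if system == 'Windows' else ctypes.CDLL
    cuda = None
    for name in names:
        try:
            cuda = loader(name)
            break
        except OSError:
            continue
    if cuda is None:
        return 0
    # Every driver API call returns CUDA_SUCCESS (0) on success.
    if cuda.cuInit(0) != 0:
        return 0
    count = ctypes.c_int(0)
    if cuda.cuDeviceGetCount(ctypes.byref(count)) != 0:
        return 0
    return count.value

if __name__ == '__main__':
    print('CUDA devices visible to the driver:', cuda_device_count())
```

On the machine from the log above, a probe like this returning 0 would be consistent with the cuInit failure TensorFlow reports (CUDA_ERROR_NO_DEVICE).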