Can't create predictor on GPU device
bruno-ichec opened this issue · 1 comment
I've been trying to create predictors using the GPU, but can't seem to make it work. For example, the provided test script (tests/pr3/test.go) compiles and runs just fine as-is. However, if I change line 37:
mxnet.Device{mxnet.CPU_DEVICE, 0},
to:
mxnet.Device{mxnet.GPU_DEVICE, 0},
the code still compiles, but fails to create the predictor with the following error:
~/golang/src/github.com/songtianyi/go-mxnet-predictor/tests/pr3$ ./test
[14:13:20] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v0.9.4. Attempting to upgrade...
[14:13:20] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
panic: file exists
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x491de2]
goroutine 1 [running]:
github.com/songtianyi/go-mxnet-predictor/mxnet.(*Predictor).Free(0x0, 0xc42009e000, 0xc8)
/home/bruno/golang/src/github.com/songtianyi/go-mxnet-predictor/mxnet/predictor.go:283 +0x22
panic(0x4ad4a0, 0xc4200180d0)
/usr/local/go/src/runtime/panic.go:502 +0x229
main.main()
/home/bruno/golang/src/github.com/songtianyi/go-mxnet-predictor/tests/pr3/test.go:46 +0x29c
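Looking at the trace, the second panic (the SIGSEGV in (*Predictor).Free with a nil receiver) seems to be a follow-on failure: Free runs after CreatePredictor has already returned the "file exists" error, so the predictor pointer is still nil. A minimal sketch of what I mean, assuming the CreatePredictor signature and the "data" input node used in tests/pr3/test.go (the model file names and shape here are placeholders, not the script's actual values):

package main

import (
	"io/ioutil"
	"log"

	"github.com/songtianyi/go-mxnet-predictor/mxnet"
)

func main() {
	// Placeholder model files; the real test script loads its own.
	symbol, err := ioutil.ReadFile("model-symbol.json")
	if err != nil {
		log.Fatal(err)
	}
	params, err := ioutil.ReadFile("model-0000.params")
	if err != nil {
		log.Fatal(err)
	}

	// Same call as the test script, only with the device switched to GPU 0.
	p, err := mxnet.CreatePredictor(symbol, params,
		mxnet.Device{mxnet.GPU_DEVICE, 0},
		[]mxnet.InputNode{{Key: "data", Shape: []uint32{1, 3, 224, 224}}},
	)
	if err != nil {
		// Bail out here so Free is never called on a nil predictor.
		log.Fatalf("CreatePredictor failed: %v", err)
	}
	defer p.Free()
}

With that guard in place the underlying "file exists" error from the GPU creation would be reported directly instead of being masked by the segfault.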
This is with a CUDA-enabled build of mxnet 1.2.0 and a K80 GPU. The mxnet examples (both Python and C++) work fine and can use the GPU.
I presume there's something obvious I'm missing, but I'm not too familiar with mxnet itself, so you might have better insight. I can see that mxnet.GPU_DEVICE effectively becomes the int value 2 through the enum, which does seem to be the value mxnet uses for the GPU device type, and GPU 0 is the only GPU available on the machine I'm using.
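For reference, this is the mapping I'm assuming on the Go side (only the GPU value of 2 is something I've actually verified through the enum; the other values are my reading of the library and may be off):

// Assumed device type constants in go-mxnet-predictor's mxnet package;
// GPU_DEVICE = 2 matches the GPU device type in mxnet's own context types.
const (
	CPU_DEVICE        = 1
	GPU_DEVICE        = 2
	CPU_PINNED_DEVICE = 3 // assumption, not used in this issue
)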
@bruno-ichec I know it's been a while, but I just ran into this. Did you ever solve it?