dimatura/voxnet

Running Voxnet on CPU


Is it possible to run the code on CPU only? I have gone through the issues and have been able to fix most of the problems I was facing. In the end, the command I am using to train the ShapeNet model is:

THEANO_FLAGS='device=cpu,force_device=True,floatX=float32' python train.py config/shapenet10.py ../shapenet10_train.tar
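
For what it's worth, I believe the same flags can also be set persistently in ~/.theanorc instead of on the command line (just a sketch of the equivalent config, assuming standard Theano behaviour):

[global]
device = cpu
force_device = True
floatX = float32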

I have CUDA installed in /usr/local/cuda, and the PATH and LD_LIBRARY_PATH variables have been updated to include /usr/local/cuda/bin and /usr/local/cuda/lib64 respectively.
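
Concretely, I set them roughly like this (assuming the default CUDA install location mentioned above):

export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH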

When I run the command, though, it keeps failing with:
2017-08-26 15:28:06,904 INFO| Metrics will be saved to metrics.jsonl
2017-08-26 15:28:06,904 INFO| Compiling theano functions...
Traceback (most recent call last):
  File "train.py", line 180, in <module>
    main(args)
  File "train.py", line 132, in main
    tfuncs, tvars = make_training_functions(cfg, model)
  File "train.py", line 28, in make_training_functions
    out = lasagne.layers.get_output(l_out, X)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/lasagne/layers/helper.py", line 185, in get_output
    all_outputs[layer] = layer.get_output_for(layer_inputs, **kwargs)
  File "/home/ubuntu/sandboxes/voxnet/voxnet/layers.py", line 225, in get_output_for
    activation = conved + self.b.dimshuffle('x', 0, 'x', 'x', 'x')
  File "/home/ubuntu/.local/lib/python2.7/site-packages/theano/tensor/var.py", line 128, in __add__
    return theano.tensor.basic.add(self, other)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/theano/gof/op.py", line 507, in __call__
    node = self.make_node(*inputs, **kwargs)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/theano/tensor/elemwise.py", line 527, in make_node
    inputs = map(as_tensor_variable, inputs)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/theano/tensor/basic.py", line 145, in as_tensor_variable
    return x._as_TensorVariable()  # TODO: pass name and ndim arguments
  File "/home/ubuntu/.local/lib/python2.7/site-packages/theano/sandbox/cuda/var.py", line 30, in _as_TensorVariable
    return HostFromGpu()(self)
NameError: global name 'HostFromGpu' is not defined

In the command above, device is cpu and force_device is True. If I set force_device to False, it instead fails to find any GPUs - probably because I have not followed the instructions to enable GPU usage at all. I am trying to get it working on CPU first, since that seems like the simpler thing to do.
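
As a sanity check, something like the following should confirm whether the flags are actually being picked up (just a sketch, using the standard theano.config attributes):

import theano
print(theano.config.device)        # expected to be 'cpu' with the flags above
print(theano.config.force_device)  # expected to be True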

Can someone help?