Any way of not using GPU optimization?
cambraca opened this issue · 5 comments
I'm trying to run this on an Ubuntu VM, and I get a `no CUDA-capable device is detected` error. Is there any way to not use the GPU?
Running on CPU is possible with the newly added VGG model. I won't be able to get to it until later today or tomorrow, but in the meantime the fix is pretty simple: just comment out the `:cuda()` calls.
Added CPU support.
I still can't run it. I tried commenting out the `:cuda()` calls, but I still get `no CUDA-capable device is detected at .../torch/extra/cutorch/lib/THC/THCGeneral.c:16`. I'm using VirtualBox with Ubuntu 15.04. Any ideas?
I finally made it work by doing this (or so I think; it's taking ages to run):
```diff
$ git diff
diff --git a/main.lua b/main.lua
index 566c9a2..1748a1d 100644
--- a/main.lua
+++ b/main.lua
@@ -6,9 +6,9 @@
 --
 require 'torch'
-require 'cutorch'
+--require 'cutorch'
 require 'nn'
-require 'cunn'
+--require 'cunn'
 require 'image'
 require 'paths'
 require 'optim'
```
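As an alternative to commenting the requires out by hand, the CUDA packages can be loaded optionally with `pcall`, falling back to CPU when no device is present. This is only a sketch of that idea, not code from this repo; the `use_cuda` flag is an illustrative name:

```lua
require 'torch'
require 'nn'

-- pcall returns false instead of erroring if the package is missing,
-- so this degrades gracefully on machines without CUDA.
local use_cuda = pcall(require, 'cutorch') and pcall(require, 'cunn')

if use_cuda then
  print('CUDA available: running on GPU')
else
  print('no CUDA device found: running on CPU')
end

-- Then guard every GPU conversion on the flag, e.g.:
-- if use_cuda then model = model:cuda() end
```

The same check can gate the `:cuda()` calls later in `main.lua`, so a single script works in both environments.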
Fixed in the latest commit. You can verify that it's running correctly by using a small image size (`--size 200`) and displaying every update (`--display_interval 1`).