naoto0804/pytorch-inpainting-with-partial-conv

Running on CPU

Is it possible to use CPU for training instead of CUDA GPU?

In principle, you can do it by replacing device = torch.device('cuda') with device = torch.device('cpu'), but it will be extremely slow, especially for training the model.
https://github.com/naoto0804/pytorch-inpainting-with-partial-conv/blob/master/train.py#L62
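
For reference, a minimal sketch of that change; the automatic fallback to CPU is an optional addition of mine, not part of the repo's train.py, and the class/variable names in the comments are only meant to mirror the training loop:

```python
import torch

# train.py hard-codes the GPU device:
# device = torch.device('cuda')

# CPU variant; the conditional makes the same script work on both setups:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Every module and tensor touched in the training loop must then be moved
# to the same device, e.g. (names as used in train.py):
# model = PConvUNet().to(device)
# image, mask, gt = [x.to(device) for x in next(iterator_train)]
```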

Ahh, thank you so much. Thank you for sharing the code, btw. This is by far the best and easiest to use out of all the repos!

Traceback (most recent call last):
  File "E:\Programs\Miniconda3\envs\py37\lib\site-packages\torch\utils\data\dataloader.py", line 511, in _try_get_batch
    data = self.data_queue.get(timeout=timeout)
  File "E:\Programs\Miniconda3\envs\py37\lib\multiprocessing\queues.py", line 105, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 110, in <module>
    image, mask, gt = [x.to(device) for x in next(iterator_train)]
  File "E:\Programs\Miniconda3\envs\py37\lib\site-packages\torch\utils\data\dataloader.py", line 576, in __next__
    idx, batch = self._get_batch()
  File "E:\Programs\Miniconda3\envs\py37\lib\site-packages\torch\utils\data\dataloader.py", line 553, in _get_batch
    success, data = self._try_get_batch()
  File "E:\Programs\Miniconda3\envs\py37\lib\site-packages\torch\utils\data\dataloader.py", line 519, in _try_get_batch
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 13708, 13816, 4588, 14068, 6768, 13380, 9600, 10924, 13540, 13924, 14024, 13776, 13796, 4104, 13832, 11968) exited unexpectedly

I tried running train.py and this is what I got, @naoto0804.

Could you check if the dataset is correctly loaded? (ref: preprocess)
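
For anyone hitting the same error, here is a minimal sketch of such a check. It assumes the Places2 class in places2.py takes (img_root, mask_root, img_transform, mask_transform, split), as it is called in train.py; the paths and image size are illustrative, and normalization is left out since this only tests loading:

```python
# Sanity-check that the dataset actually finds images and masks before
# suspecting the DataLoader workers. Paths below are illustrative.
import torchvision.transforms as transforms
from places2 import Places2  # dataset class from this repository

size = (256, 256)
img_tf = transforms.Compose([transforms.Resize(size), transforms.ToTensor()])
mask_tf = transforms.Compose([transforms.Resize(size), transforms.ToTensor()])

dataset_train = Places2('./data', './mask', img_tf, mask_tf, 'train')
print('training samples:', len(dataset_train))  # 0 means no images were found

image, mask, gt = dataset_train[0]  # should return three tensors
print(image.shape, mask.shape, gt.shape)
```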

I'm not using the Places2 dataset, but I edited the yml file and made sure that data_large, val_large, and test_large are inside the folder. I also changed the mask root to ./mask because the mask generator saved the masks in that folder instead.
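
In case it helps narrow things down, a quick hedged check that those folders are actually visible from where train.py runs; the glob patterns assume .jpg files and are only illustrative, so adjust the paths and extensions to your setup:

```python
# Count the files the training script would see; zero counts point to a
# path/extension mismatch rather than a DataLoader problem.
import glob

print('train images:', len(glob.glob('./data_large/**/*.jpg', recursive=True)))
print('val images:  ', len(glob.glob('./val_large/*')))
print('masks:       ', len(glob.glob('./mask/*.jpg')))
```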