GengDavid/pytorch-cpn

Train.py

Cronusf opened this issue · 2 comments

I ran train.py and it failed with: RuntimeError: CUDA error: out of memory. I haven't made any changes to the code. What causes this problem, and how can I solve it?

gydx@gydx-HP-Z6-G4-Workstation:~/A-YFT/pytorch-cpn/256.192.model$ python3 train.py
Initialize with pre-trained ResNet
successfully load 318 keys
/home/gydx/.local/lib/python3.5/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
warnings.warn(warning.format(ret))
Total params: 104.55MB

Epoch: 1 | LR: 0.00050000
/usr/local/lib/python3.5/dist-packages/skimage/transform/_warps.py:105: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
warn("The default mode, 'constant', will be changed to 'reflect' in "
/usr/local/lib/python3.5/dist-packages/skimage/transform/_warps.py:110: UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
warn("Anti-aliasing will be enabled by default in skimage 0.15 to "
/usr/local/lib/python3.5/dist-packages/skimage/transform/_warps.py:105: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
warn("The default mode, 'constant', will be changed to 'reflect' in "
/usr/local/lib/python3.5/dist-packages/skimage/transform/_warps.py:110: UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
.....................
File "/home/gydx/.local/lib/python3.5/site-packages/torch/nn/modules/upsampling.py", line 123, in forward
return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners)
File "/home/gydx/.local/lib/python3.5/site-packages/torch/nn/functional.py", line 1985, in interpolate
return torch._C._nn.upsample_bilinear2d(input, _output_size(2), align_corners)
RuntimeError: CUDA error: out of memory
gydx@gydx-HP-Z6-G4-Workstation:~/A-YFT/pytorch-cpn/256.192.model$

This depends on the devices you're using. You can lower batch_size in config.py until training fits in your GPU memory.
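For reference, here is a minimal sketch of the kind of change meant here, assuming config.py exposes a batch_size attribute on a Config class; the actual attribute name and default in this repo may differ. The snippet also prints the total memory of GPU 0 so you can judge how far to reduce the batch size:

```python
# Hypothetical sketch -- attribute names mirror a typical config.py;
# the actual file in this repo may use different names or defaults.
import torch

class Config:
    # Lower this until training fits on your GPU, e.g. 32 -> 16 -> 8.
    batch_size = 8

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024 ** 3
    print(f"GPU 0: {props.name}, {total_gb:.1f} GB total memory")
    print(f"Training with batch_size = {Config.batch_size}")
```

As a rough rule, halving the batch size roughly halves the activation memory needed for the forward and backward passes, at the cost of slower training per epoch.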

It seems the problem has been solved. Closing this issue.