nshaud/DeepNetsForEO

Operation on CPU

FideliusC opened this issue · 5 comments

Hi,
@nshaud
what changes should be made to use the CPU rather than the GPU? The calls to .cuda() obviously have to be removed, but what should they be replaced with?
Could you list exactly which lines need to be modified to run on the CPU?

Thanks in advance

@FideliusC Don't replace them with anything: you can simply remove all the xxx.cuda() calls, since PyTorch operates on the CPU by default. I will push an updated version of this notebook that handles this automatically with a simple flag in a few weeks, if you're willing to wait.
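Until then, the pattern is roughly the sketch below. The flag name is purely illustrative (not the exact code that will land in the notebook), and the tiny model stands in for the actual SegNet defined there:

```python
import torch
import torch.nn as nn

# Illustrative flag name -- set to False to force CPU even when a GPU is present
CUDA = torch.cuda.is_available()

# Tiny stand-in for the SegNet model defined in the notebook
net = nn.Sequential(nn.Conv2d(3, 6, 3, padding=1), nn.ReLU())
if CUDA:
    net = net.cuda()

# The same guard replaces every hard-coded .cuda() call on tensors
data = torch.randn(5, 3, 128, 128)
if CUDA:
    data = data.cuda()
out = net(data)  # runs on GPU if CUDA is True, otherwise on CPU
```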

How long would it take to train the network on a CPU? I am running your Segnet_PyTorch notebook with a GTX 1070 GPU, but I do not have enough memory for the training part of the code. Is it possible to download a pre-trained model, or to modify the code to train (albeit sub-optimally) under these memory constraints?

@ferhat00 At least a few days, at worst probably a few weeks. With a GTX 1070, you should be able to train SegNet without trouble.

The two main factors impacting GPU memory usage are the batch size and the window size. Depending on the resolution of your images, you can go as low as a 128x128 window, and the network trains fine with a batch size of 5. On my setup, 256x256 with a batch size of 10 takes less than 6 GB, so you should be fine.
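To illustrate how those two knobs interact with memory, here is a minimal sketch; the constant names are hypothetical and not necessarily the notebook's exact variables:

```python
import torch

# Hypothetical constant names -- the notebook may use different ones
WINDOW_SIZE = (128, 128)  # spatial size of the training patches
BATCH_SIZE = 5            # patches per gradient step

# One batch of RGB patches at these settings:
batch = torch.randn(BATCH_SIZE, 3, *WINDOW_SIZE)
print(batch.element_size() * batch.nelement() / 1e6, "MB for the input batch")
# ~1 MB for the inputs; most GPU memory goes to the encoder/decoder
# activations, which scale the same way, so halving the window side
# roughly quarters the activation memory.
```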

@nshaud Thanks. I tried it with a 128x128 window size and a batch size of 5, and it runs perfectly, reaching 88.5% total accuracy. Now that I have trained the model, I would like to see its predictions on a new, independent image, say aerial photography from the USGS explorer website. Have you tried something like this with your model and code?

@ferhat00 You can modify the test() function to work on arbitrary images. Depending on the training dataset, your mileage may vary.
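A rough sketch of what such an adaptation could look like, assuming a sliding-window pass over the new image; the class count, window/stride values, and normalization below are assumptions that should be matched to how the model was actually trained:

```python
import numpy as np
import torch
from skimage import io

N_CLASSES = 6            # assumption: number of classes the model was trained with
WINDOW, STRIDE = 128, 64 # assumption: match the training window size

def predict_image(net, image_path, device="cpu"):
    """Sliding-window inference on an arbitrary RGB image (borders beyond
    the last full window are skipped in this simplified version)."""
    img = io.imread(image_path)[:, :, :3] / 255.0  # scale like the training data
    h, w, _ = img.shape
    votes = np.zeros((h, w, N_CLASSES), dtype=np.float32)
    net.eval()
    with torch.no_grad():
        for y in range(0, h - WINDOW + 1, STRIDE):
            for x in range(0, w - WINDOW + 1, STRIDE):
                patch = img[y:y + WINDOW, x:x + WINDOW].transpose(2, 0, 1)
                tensor = torch.from_numpy(patch).float().unsqueeze(0).to(device)
                out = net(tensor)[0].cpu().numpy().transpose(1, 2, 0)
                votes[y:y + WINDOW, x:x + WINDOW] += out
    return votes.argmax(axis=-1)  # per-pixel class map
```

Using a stride smaller than the window overlaps the patches and averages their scores, which smooths the seams between adjacent windows.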