pytorch-resnet
This repository contains code for converting ResNet-50/101/152 models trained in Caffe into PyTorch models.
First, you need pycaffe and PyTorch installed. Second, download the Caffe models from https://github.com/KaimingHe/deep-residual-networks and put them in the data folder.
Then run
python convert.py
or
python convert2.py
The models generated by convert.py
expect different preprocessing than the other models in the PyTorch model zoo. Images should be in BGR format in the range [0, 255], and the following BGR values should then be subtracted from each pixel: [103.939, 116.779, 123.68].
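For example, here is a minimal preprocessing sketch for these models; the helper name, the fixed 224x224 resize, and the image path are illustrative and not part of the repo:
import numpy as np
import torch
from PIL import Image

def preprocess_caffe(path):
    # Load and resize the image (the 224x224 size is just an example).
    img = Image.open(path).convert('RGB').resize((224, 224))
    arr = np.asarray(img, dtype=np.float32)          # HxWx3, RGB, values in [0, 255]
    arr = arr[:, :, ::-1].copy()                     # RGB -> BGR
    arr -= np.array([103.939, 116.779, 123.68], dtype=np.float32)  # subtract BGR means
    tensor = torch.from_numpy(arr).permute(2, 0, 1)  # 3xHxW
    return tensor.unsqueeze(0)                       # 1x3xHxW batch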
The models generated by convert2.py
expect RGB images in the range [0, 1]. You can use the standard trn.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]).
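For example, a minimal sketch of the standard ImageNet-style pipeline, assuming trn is torchvision.transforms and 'example.jpg' is a placeholder path:
import torch
from PIL import Image
import torchvision.transforms as trn

preprocess = trn.Compose([
    trn.Resize(256),
    trn.CenterCrop(224),
    trn.ToTensor(),                                   # RGB, values in [0, 1]
    trn.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = preprocess(Image.open('example.jpg').convert('RGB')).unsqueeze(0)  # 1x3x224x224 batch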
Converted models (produced by convert.py, used in my pytorch-faster-rcnn repo)
resnet50-caffe: https://drive.google.com/open?id=0B7fNdx_jAqhtRkZ0ODVuWUd3Q3c
resnet101-caffe: https://drive.google.com/open?id=0B7fNdx_jAqhtcnBDY3FlRk1Yb2c
resnet152-caffe: https://drive.google.com/open?id=0B7fNdx_jAqhtbVp2SlZhUkhlOTg
Converted models (produced by convert2.py, used in my neuraltalk2-pytorch repo)
resnet50: https://drive.google.com/uc?export=download&id=0B7fNdx_jAqhtam1MSTNSYXVYZ2s
resnet101: https://drive.google.com/uc?export=download&id=0B7fNdx_jAqhtSmdCNDVOVVdINWs
resnet152: https://drive.google.com/uc?export=download&id=0B7fNdx_jAqhtckNGQ2FLd25fa3c
Note
These models are different from the ones in the PyTorch model zoo. Although you can actually load the parameters into the PyTorch ResNet, the structures of the Caffe ResNet and the Torch ResNet are slightly different. The structure used here is defined in resnet.py. (The file is almost identical to the one in torchvision, with only a few slight changes.)
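For example, a hedged sketch of loading a converted checkpoint through the repo's resnet.py; it assumes resnet.py exposes torchvision-style constructors such as resnet101(), and the checkpoint path is a placeholder for wherever you saved the downloaded .pth file:
import torch
import resnet  # the resnet.py shipped in this repo

# Assumption: resnet.py provides a resnet101() constructor, like torchvision does.
model = resnet.resnet101()
model.load_state_dict(torch.load('data/resnet101-caffe.pth'))  # placeholder path
model.eval()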
Acknowledgement
A large part of the code is borrowed from https://github.com/ry/tensorflow-resnet