This repository uses PyTorch to implement popular CNN architectures on the CIFAR datasets. The following are the reference papers:
- (lenet) LeNet-5, convolutional neural networks
- (alexnet) ImageNet Classification with Deep Convolutional Neural Networks
- (vgg) Very Deep Convolutional Networks for Large-Scale Image Recognition
- (resnet) Deep Residual Learning for Image Recognition
- (preresnet) Identity Mappings in Deep Residual Networks
- (resnext) Aggregated Residual Transformations for Deep Neural Networks
- (densenet) Densely Connected Convolutional Networks
- (senet) Squeeze-and-Excitation Networks
- (bam) BAM: Bottleneck Attention Module
- (cbam) CBAM: Convolutional Block Attention Module
- (genet) Gather-Excite: Exploiting Feature Context in Convolutional Neural Networks
- (sknet) Selective Kernel Networks
- (mobilenetV1) MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- (mobilenetV2) MobileNetV2: Inverted Residuals and Linear Bottlenecks
- (shake-shake) Shake-Shake regularization
- (cutout) Improved Regularization of Convolutional Neural Networks with Cutout
- (mixup) mixup: Beyond Empirical Risk Minimization
- (cos_lr) SGDR: Stochastic Gradient Descent with Warm Restarts
- (htd_lr) Stochastic Gradient Descent with Hyperbolic-Tangent Decay on Classification
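Several of the techniques above are simple enough to sketch directly. For example, mixup trains on convex combinations of pairs of examples and their labels. Below is a minimal NumPy sketch of the idea; the function name `mixup_batch` is illustrative and not taken from this repository, which presumably implements it with PyTorch tensors:

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=None):
    """Mix a batch with a shuffled copy of itself (mixup, Zhang et al.).

    x: (N, ...) float array of inputs; y: (N, C) one-hot labels.
    Returns lam * (x, y) + (1 - lam) * (x[perm], y[perm]) and the
    mixing coefficient lam drawn from Beta(alpha, alpha).
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)       # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))     # pair each sample with a random partner
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]
    return x_mixed, y_mixed, lam
```

The mixed labels are soft targets, so the training loss is computed against `y_mixed` with the usual cross-entropy.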
Requirements:

- Python >= 3.5
- PyTorch >= 0.4 (0.4.x or 1.0)
- TensorBoard (optional, for visualization)
- pyyaml, easydict, tensorboardX
Run one of the following commands for training:
```bash
## 1 GPU for lenet
python -u train.py --work-path ./experiments/cifar10/lenet

## resume from checkpoint
python -u train.py --work-path ./experiments/cifar10/lenet --resume

## 2 GPUs for preresnet20
CUDA_VISIBLE_DEVICES=0,1 python -u train.py --work-path ./experiments/cifar10/preresnet20

## 4 GPUs for densenet100bc
CUDA_VISIBLE_DEVICES=0,1,2,3 python -u train.py --work-path ./experiments/cifar10/densenet100bc
```
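The `cos_lr` option listed above follows the cosine annealing schedule from SGDR. A minimal single-cycle sketch (the function name `cosine_lr` is illustrative, not an API of this repository):

```python
import math

def cosine_lr(step, total_steps, base_lr=0.1, min_lr=0.0):
    """Cosine annealing from base_lr down to min_lr over total_steps.

    This is SGDR's schedule without warm restarts: the learning rate
    follows half a cosine wave, starting at base_lr (step 0) and
    ending at min_lr (step total_steps).
    """
    t = min(step, total_steps) / total_steps          # progress in [0, 1]
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))
```

With warm restarts, the same formula is reapplied per cycle with `step` reset at each restart.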
Feel free to contact me if you have any suggestions or questions. Issues are welcome, and please create a PR if you find any bugs or want to contribute. :smile: