# Basic_CNNs_TensorFlow2
A TensorFlow 2 implementation of some basic CNNs.
## Networks included
- MobileNet_V1
- MobileNet_V2
- MobileNet_V3
- EfficientNet
- ResNeXt
- InceptionV4, InceptionResNetV1, InceptionResNetV2
- SE_ResNet_50, SE_ResNet_101, SE_ResNet_152
- SqueezeNet
- DenseNet
- ShuffleNetV2
- ResNet
## Other networks
For AlexNet and VGG, see: https://github.com/calmisential/TensorFlow2.0_Image_Classification
For InceptionV3, see: https://github.com/calmisential/TensorFlow2.0_InceptionV3
For ResNet, see: https://github.com/calmisential/TensorFlow2.0_ResNet
## Train
- Requirements:
  - Python >= 3.6
  - TensorFlow == 2.0.0
- To train a network on your own dataset, put the dataset under the original dataset folder, so that the directory looks like this:
  |——original dataset
      |——class_name_0
      |——class_name_1
      |——class_name_2
      |——class_name_3
- Run split_dataset.py to split the raw dataset into train, valid and test sets (a minimal sketch of such a split appears after this list). The dataset directory will then look like this:
  |——dataset
      |——train
          |——class_name_1
          |——class_name_2
          ......
          |——class_name_n
      |——valid
          |——class_name_1
          |——class_name_2
          ......
          |——class_name_n
      |——test
          |——class_name_1
          |——class_name_2
          ......
          |——class_name_n
- Run to_tfrecord.py to generate the TFRecord files (an illustrative sketch of writing and reading these records follows the notes below).
- Change the corresponding parameters in config.py.
- Run train.py to start training.
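For reference, the split step above might look roughly like the sketch below. This is not the repo's split_dataset.py; the 60/20/20 ratios, the fixed seed and the choice to copy rather than move files are assumptions made purely for illustration.

```python
import os
import random
import shutil

def split_dataset(src_dir="original dataset", dst_dir="dataset",
                  train_ratio=0.6, valid_ratio=0.2):
    """Copy each class folder into train/valid/test splits (hypothetical ratios)."""
    random.seed(0)
    for class_name in os.listdir(src_dir):
        files = os.listdir(os.path.join(src_dir, class_name))
        random.shuffle(files)
        n_train = int(len(files) * train_ratio)
        n_valid = int(len(files) * valid_ratio)
        splits = {
            "train": files[:n_train],
            "valid": files[n_train:n_train + n_valid],
            "test": files[n_train + n_valid:],
        }
        for split_name, split_files in splits.items():
            target = os.path.join(dst_dir, split_name, class_name)
            os.makedirs(target, exist_ok=True)
            for file_name in split_files:
                shutil.copy(os.path.join(src_dir, class_name, file_name), target)

if __name__ == "__main__":
    split_dataset()
```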
If you want to train an EfficientNet, change IMAGE_HEIGHT and IMAGE_WIDTH in config.py to the resolution required by the chosen variant, and then run train.py to start training.
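For orientation, here is a hedged sketch of how the TFRecord step and the training input pipeline might fit together. The feature keys (image_raw, label), the dataset/train*.tfrecord file pattern and the constants below are assumptions for illustration; the actual names are defined in to_tfrecord.py and config.py.

```python
import tensorflow as tf

# Assumed values; in this repo they come from config.py.
IMAGE_HEIGHT, IMAGE_WIDTH, BATCH_SIZE = 224, 224, 32

# Writing: serialize one (image file, label) pair, roughly as to_tfrecord.py might.
def make_example(image_path, label):
    image_bytes = tf.io.read_file(image_path)
    return tf.train.Example(features=tf.train.Features(feature={
        "image_raw": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes.numpy()])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }))

# Reading: parse the records back into (image, label) batches for training.
def parse_example(serialized):
    features = tf.io.parse_single_example(serialized, {
        "image_raw": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    image = tf.io.decode_jpeg(features["image_raw"], channels=3)
    image = tf.image.resize(image, [IMAGE_HEIGHT, IMAGE_WIDTH]) / 255.0
    return image, features["label"]

train_dataset = (tf.data.TFRecordDataset(tf.io.gfile.glob("dataset/train*.tfrecord"))
                 .map(parse_example, num_parallel_calls=tf.data.experimental.AUTOTUNE)
                 .shuffle(1000)
                 .batch(BATCH_SIZE)
                 .prefetch(tf.data.experimental.AUTOTUNE))
```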
## Evaluate
Run evaluate.py to evaluate the model's performance on the test dataset.
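As an illustration only, an evaluation pass over the test TFRecords could look like the sketch below; the feature keys, file pattern, saved-model path and metric are assumptions rather than what evaluate.py necessarily does.

```python
import tensorflow as tf

# Assumed feature keys and file pattern; see to_tfrecord.py for the real ones.
def parse_example(serialized):
    features = tf.io.parse_single_example(serialized, {
        "image_raw": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    image = tf.io.decode_jpeg(features["image_raw"], channels=3)
    image = tf.image.resize(image, [224, 224]) / 255.0
    return image, features["label"]

test_dataset = (tf.data.TFRecordDataset(tf.io.gfile.glob("dataset/test*.tfrecord"))
                .map(parse_example)
                .batch(32))

model = tf.keras.models.load_model("saved_model")  # assumed save location
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
for images, labels in test_dataset:
    accuracy.update_state(labels, model(images, training=False))
print("Test accuracy: {:.4f}".format(accuracy.result().numpy()))
```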
## Different input image sizes for different neural networks

| Type | Neural Network | Input Image Size (height * width) |
|---|---|---|
| MobileNet | MobileNet_V1 | (224 * 224) |
| MobileNet | MobileNet_V2 | (224 * 224) |
| MobileNet | MobileNet_V3 | (224 * 224) |
| EfficientNet | EfficientNet(B0~B7) | / |
| ResNeXt | ResNeXt50 | (224 * 224) |
| ResNeXt | ResNeXt101 | (224 * 224) |
| Inception | InceptionV4 | (299 * 299) |
| Inception | Inception_ResNet_V1 | (299 * 299) |
| Inception | Inception_ResNet_V2 | (299 * 299) |
| SE_ResNet | SE_ResNet_50 | (224 * 224) |
| SE_ResNet | SE_ResNet_101 | (224 * 224) |
| SE_ResNet | SE_ResNet_152 | (224 * 224) |
| SqueezeNet | SqueezeNet | (224 * 224) |
| DenseNet | DenseNet_121 | (224 * 224) |
| DenseNet | DenseNet_169 | (224 * 224) |
| DenseNet | DenseNet_201 | (224 * 224) |
| DenseNet | DenseNet_269 | (224 * 224) |
| ShuffleNetV2 | ShuffleNetV2 | (224 * 224) |
| ResNet | ResNet_18 | (224 * 224) |
| ResNet | ResNet_34 | (224 * 224) |
| ResNet | ResNet_50 | (224 * 224) |
| ResNet | ResNet_101 | (224 * 224) |
| ResNet | ResNet_152 | (224 * 224) |
## References
- MobileNet_V1: Efficient Convolutional Neural Networks for Mobile Vision Applications
- MobileNet_V2: Inverted Residuals and Linear Bottlenecks
- MobileNet_V3: Searching for MobileNetV3
- EfficientNet: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
- The official code of EfficientNet: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
- ResNeXt: Aggregated Residual Transformations for Deep Neural Networks
- Inception_V4/Inception_ResNet_V1/Inception_ResNet_V2: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
- The official implementation of Inception_V4: https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v4.py
- The official implementation of Inception_ResNet_V2: https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_resnet_v2.py
- SENet: Squeeze-and-Excitation Networks
- SqueezeNet: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
- DenseNet: Densely Connected Convolutional Networks
- https://zhuanlan.zhihu.com/p/37189203
- ShuffleNetV2: ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
- https://zhuanlan.zhihu.com/p/48261931
- ResNet: Deep Residual Learning for Image Recognition