Amazing Semantic Segmentation on TensorFlow && Keras (including FCN, UNet, SegNet, PSPNet, PAN, RefineNet, DeepLabV3, DeepLabV3+, DenseASPP, BiSegNet ...)
The project supports the following semantic segmentation models:
- FCN-8s/16s/32s - Fully Convolutional Networks for Semantic Segmentation
- UNet - U-Net: Convolutional Networks for Biomedical Image Segmentation
- SegNet - SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
- Bayesian-SegNet - Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding
- PSPNet - Pyramid Scene Parsing Network
- RefineNet - RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation
- PAN - Pyramid Attention Network for Semantic Segmentation
- DeepLabV3 - Rethinking Atrous Convolution for Semantic Image Segmentation
- DeepLabV3Plus - Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
- DenseASPP - DenseASPP for Semantic Segmentation in Street Scenes
- BiSegNet - BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation
The project supports the following backbone models; you can choose a suitable base model according to your needs:
- VGG16/19 - Very Deep Convolutional Networks for Large-Scale Image Recognition
- ResNet50/101/152 - Deep Residual Learning for Image Recognition
- DenseNet121/169/201/264 - Densely Connected Convolutional Networks
- MobileNetV1 - MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- MobileNetV2 - MobileNetV2: Inverted Residuals and Linear Bottlenecks
- Xception - Xception: Deep Learning with Depthwise Separable Convolutions
- Xception-DeepLab - Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
The project supports the following loss functions (an illustrative focal loss sketch follows the list):
- Cross Entropy
- Focal Loss
- MIoU Loss
- Self Balanced Focal Loss (original)
- ...
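For reference, here is a minimal sketch of how a categorical focal loss can be written for Keras. This is my own illustration, assuming softmax outputs and one-hot labels, and not necessarily the project's exact implementation:

```python
import tensorflow as tf

def categorical_focal_loss(gamma=2.0, alpha=0.25):
    """Illustrative focal loss for one-hot targets and softmax predictions."""
    def loss(y_true, y_pred):
        # Clip predictions to avoid log(0).
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        # Per-class cross entropy.
        ce = -y_true * tf.math.log(y_pred)
        # Down-weight easy (well-classified) pixels by (1 - p)^gamma.
        weight = alpha * tf.pow(1.0 - y_pred, gamma)
        return tf.reduce_sum(weight * ce, axis=-1)
    return loss
```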
The project supports the following optimizers (an optimizer-selection sketch follows the list):
- SGD
- Adam
- Nadam
- AdamW
- NadamW
- SGDW
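As a hedged sketch (my own, not project code), here is how you might select one of the core optimizers by name when compiling a model yourself. Note that the decoupled weight-decay variants AdamW/SGDW/NadamW are not in core TensorFlow; they are typically taken from TensorFlow Addons:

```python
import tensorflow as tf

def get_optimizer(name, lr=1e-3):
    """Return a core Keras optimizer by name (illustrative sketch)."""
    # The *W variants (AdamW, SGDW, NadamW) add decoupled weight decay and
    # are not in core TF; TensorFlow Addons provides tfa.optimizers.AdamW
    # and tfa.optimizers.SGDW if you need them outside this project.
    optimizers = {
        'sgd': tf.keras.optimizers.SGD(learning_rate=lr, momentum=0.9),
        'adam': tf.keras.optimizers.Adam(learning_rate=lr),
        'nadam': tf.keras.optimizers.Nadam(learning_rate=lr),
    }
    return optimizers[name.lower()]
```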
The project supports the following learning rate schedule strategies (a warm-up + poly decay sketch follows the list):
- step decay
- poly decay
- cosine decay
- warm up
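As an illustration of how warm-up combines with a decay schedule, here is a sketch of poly decay with linear warm-up as a Keras callback (my own example; the project's schedules may differ in detail):

```python
import tensorflow as tf

def poly_decay_with_warmup(base_lr, total_epochs, warmup_epochs=5, power=0.9):
    """Linear warm-up followed by polynomial decay (illustrative sketch)."""
    def schedule(epoch):
        if epoch < warmup_epochs:
            # Ramp up linearly from base_lr / warmup_epochs to base_lr.
            return base_lr * (epoch + 1) / warmup_epochs
        # Polynomial decay over the remaining epochs.
        progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
        return base_lr * (1.0 - progress) ** power
    return tf.keras.callbacks.LearningRateScheduler(schedule)
```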
The folders of your dataset must follow the structure below (an example class_dict.csv is shown after the tree):
```
|-- dataset
|  |-- train
|  |  |-- images
|  |  |-- labels
|  |-- valid
|  |  |-- images
|  |  |-- labels
|  |-- test
|  |  |-- images
|  |  |-- labels
|  |-- class_dict.csv
|  |-- evaluated_classes
```
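As an illustration, class_dict.csv conventionally maps each class name to the RGB color used in the label images. The exact column layout below is an assumption (CamVid-style), so check the repository for the authoritative format:

```
name,r,g,b
Sky,128,128,128
Building,128,0,0
Road,128,64,128
```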
The following packages are required:
- Numpy: `pip install numpy`
- Pillow: `pip install pillow`
- OpenCV: `pip install opencv-python`
- TensorFlow: `pip install tensorflow-gpu`
Note: The recommended version of tensorflow-gpu is 1.14 or 2.0. If your TensorFlow version is lower, you may need to modify some API calls or upgrade TensorFlow.
You can download the project with the following command:
```
git clone git@github.com:luyanger1799/Amazing-Semantic-Segmentation.git
```
The project contains complete code for training, testing, and prediction. You can train a model on your dataset with a simple command like this:
```
python train.py --model FCN-8s --base_model ResNet50 --dataset "dataset_path" --num_classes "num_classes"
```
The detailed command line parameters are as follows:
```
usage: train.py [-h] --model MODEL [--base_model BASE_MODEL] --dataset DATASET
                [--loss {CE,Focal_Loss}] --num_classes NUM_CLASSES
                [--random_crop RANDOM_CROP] [--crop_height CROP_HEIGHT]
                [--crop_width CROP_WIDTH] [--batch_size BATCH_SIZE]
                [--valid_batch_size VALID_BATCH_SIZE]
                [--num_epochs NUM_EPOCHS] [--initial_epoch INITIAL_EPOCH]
                [--h_flip H_FLIP] [--v_flip V_FLIP]
                [--brightness BRIGHTNESS [BRIGHTNESS ...]]
                [--rotation ROTATION]
                [--zoom_range ZOOM_RANGE [ZOOM_RANGE ...]]
                [--channel_shift CHANNEL_SHIFT]
                [--data_aug_rate DATA_AUG_RATE]
                [--checkpoint_freq CHECKPOINT_FREQ]
                [--validation_freq VALIDATION_FREQ]
                [--num_valid_images NUM_VALID_IMAGES]
                [--data_shuffle DATA_SHUFFLE] [--random_seed RANDOM_SEED]
                [--weights WEIGHTS]

optional arguments:
  -h, --help            show this help message and exit
  --model MODEL         Choose the semantic segmentation methods.
  --base_model BASE_MODEL
                        Choose the backbone model.
  --dataset DATASET     The path of the dataset.
  --loss {CE,Focal_Loss}
                        The loss function for training.
  --num_classes NUM_CLASSES
                        The number of classes to be segmented.
  --random_crop RANDOM_CROP
                        Whether to randomly crop the image.
  --crop_height CROP_HEIGHT
                        The height to crop the image.
  --crop_width CROP_WIDTH
                        The width to crop the image.
  --batch_size BATCH_SIZE
                        The training batch size.
  --valid_batch_size VALID_BATCH_SIZE
                        The validation batch size.
  --num_epochs NUM_EPOCHS
                        The number of epochs to train for.
  --initial_epoch INITIAL_EPOCH
                        The initial epoch of training.
  --h_flip H_FLIP       Whether to randomly flip the image horizontally.
  --v_flip V_FLIP       Whether to randomly flip the image vertically.
  --brightness BRIGHTNESS [BRIGHTNESS ...]
                        Randomly change the brightness (list).
  --rotation ROTATION   The angle to randomly rotate the image.
  --zoom_range ZOOM_RANGE [ZOOM_RANGE ...]
                        The times for zooming the image.
  --channel_shift CHANNEL_SHIFT
                        The channel shift range.
  --data_aug_rate DATA_AUG_RATE
                        The rate of data augmentation.
  --checkpoint_freq CHECKPOINT_FREQ
                        How often to save a checkpoint.
  --validation_freq VALIDATION_FREQ
                        How often to perform validation.
  --num_valid_images NUM_VALID_IMAGES
                        The number of images used for validation.
  --data_shuffle DATA_SHUFFLE
                        Whether to shuffle the data.
  --random_seed RANDOM_SEED
                        The random shuffle seed.
  --weights WEIGHTS     The path of weights to be loaded.
```
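For example, a training run with random cropping and light augmentation enabled might look like this (the dataset path, class count, and flag values are placeholders):

```
python train.py --model DeepLabV3Plus --base_model Xception-DeepLab --dataset ./CamVid --num_classes 32 --random_crop True --crop_height 256 --crop_width 256 --batch_size 8 --num_epochs 100 --h_flip True --rotation 10
```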
If you only want to use the model in your own training code, you can do the following:
```python
from builders.model_builder import builder

model, base_model = builder(num_classes, input_size, model='SegNet', base_model=None)
```
Note: If you don't provide the parameter `base_model`, the default backbone will be used.
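A minimal sketch of wiring the built model into an ordinary Keras training loop, assuming `builder` returns a standard Keras model and that `input_size` is (height, width); data loading is left out:

```python
import tensorflow as tf
from builders.model_builder import builder

num_classes = 32          # placeholder: match your dataset
input_size = (256, 256)   # placeholder, assuming (height, width)

model, base_model = builder(num_classes, input_size, model='SegNet')

# Plain Keras compilation; swap in any loss/optimizer the project supports.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# images: (N, H, W, 3) floats; labels: one-hot (N, H, W, num_classes)
# model.fit(images, labels, batch_size=8, epochs=50)
```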
Similarly, you can evaluate the model on your own dataset:
```
python test.py --model FCN-8s --base_model ResNet50 --dataset "dataset_path" --num_classes "num_classes" --weights "weights_path"
```
Note: If the parameter `weights` is None, the weights saved in the default path will be loaded.
You can get the prediction for a single RGB image like this:
```
python predict.py --model FCN-8s --base_model ResNet50 --num_classes "num_classes" --weights "weights_path" --image_path "image_path"
```
If you already have the predictions for all test images, or you don't want to evaluate all classes, you can run the evaluation script directly:
```
python evaluate.py --dataset 'dataset_path' --predictions 'prediction_path'
```
Note: You must specify the classes to be evaluated in dataset/evaluated_classes.txt.
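The expected format of evaluated_classes.txt is, I assume, one class name per line (verify against the repository), e.g.:

```
Sky
Building
Road
```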
Alternatively, you can install the project through PyPI.
```
pip install semantic-segmentation
```
You can then use model_builders to build different models or directly call the semantic segmentation model classes.
```python
from semantic_segmentation import model_builders

net, base_net = model_builders(num_classes, input_size, model='SegNet', base_model=None)
```
or
```python
from semantic_segmentation import models

net = models.FCN(num_classes, version='FCN-8s')(input_size=input_size)
```
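A minimal inference sketch with a model built this way, assuming the returned `net` is a standard Keras model with softmax output and that `input_size` is (height, width); the random input is a stand-in for a properly preprocessed image:

```python
import numpy as np
from semantic_segmentation import model_builders

num_classes = 32            # placeholder: match your dataset
input_size = (256, 256)     # placeholder, assuming (height, width)

net, base_net = model_builders(num_classes, input_size, model='SegNet')

# Standard Keras predict(); replace the dummy array with a real image
# preprocessed the same way as in the project's predict.py.
dummy = np.random.rand(1, input_size[0], input_size[1], 3).astype('float32')
probs = net.predict(dummy)        # assumed shape: (1, H, W, num_classes)
mask = np.argmax(probs, axis=-1)  # per-pixel class indices
```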
Due to my limited computing resources, there are no pre-trained models yet; they may be added in the future.
If you like this work, please give it a star! If you find any errors or have any suggestions, please contact me.
Email: luyanger1799@outlook.com