Tensorflow-Keras-Semantic-Segmentation


End-to-End Semantic Segmentation

A Tensorflow/Keras-based semantic segmentation repository.


Cityscapes image segmentation results (with ignore index)

Cityscapes image segmentation results (without ignore index)

Supported options

  • Data preprocessing
  • Training
  • Evaluation
  • Real-time prediction
  • TensorRT conversion
  • Tensorflow serving in Docker

Libraries used

  • Tensorflow
  • Tensorflow-datasets
  • Tensorflow-addons
  • Tensorflow-serving
  • Keras
  • OpenCV python
  • gRPC

Options: Distributed training, custom data

Models: DDRNet-23-Slim, Eff-DeepLabV3+, Eff-DeepLabV3+(light-weight), MobileNetV3-DeepLabV3+



Table of Contents

  1. Models
  2. Dependencies
  3. Preparing datasets
  4. Train
  5. Eval
  6. Predict
  7. Convert TF-TRT
  8. Tensorflow serving



1. Models

These are the currently supported models and loss functions.
More will be added in regular updates.

Latest update : 2022/07/28


Model name             Params  Resolution (HxW)  Inference time (ms)  Pretrained weights
Lightweight EFF-DLV3+  20M     1024x2048         30                   TODO
DeepLabV3+             48M     1024x2048         TODO                 TODO
DDRNet-23-slim         0.5M    640x480           20                   TODO

Loss

Loss                             Implementation
Cross entropy loss               OK
Focal cross entropy loss         OK
Binary cross entropy loss        OK
Focal binary cross entropy loss  OK
Jaccard loss                     TODO
Dice loss                        TODO
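
For reference, a per-pixel focal categorical cross entropy in Keras might look like the sketch below (a minimal illustration, not the repository's exact implementation; gamma and alpha are typical defaults):

import tensorflow as tf

def focal_categorical_crossentropy(y_true, y_pred, gamma=2.0, alpha=0.25):
    # Clip predictions to avoid log(0).
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
    # Per-pixel, per-class cross entropy.
    ce = -y_true * tf.math.log(y_pred)
    # Focal modulation: down-weight well-classified pixels.
    weight = alpha * tf.pow(1.0 - y_pred, gamma)
    return tf.reduce_sum(weight * ce, axis=-1)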


2. Dependencies

The dependencies of this repository are:

OS        Ubuntu 18.04
TF        2.9.1
Python    3.8.13~
CUDA      11.1~
cuDNN     v8.1.0 (for CUDA 11.1)
TensorRT  7.2.2.3
Docker    20.10.17


Create an Anaconda (miniconda) virtual environment and install the packages required for training and evaluation:

conda create -n envs_name python=3.8
conda activate envs_name
pip install -r requirements.txt
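
After installing, a quick sanity check that the versions match the table above:

import tensorflow as tf

print(tf.__version__)                          # expected: 2.9.1
print(tf.config.list_physical_devices('GPU'))  # should list your GPU(s)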


3. Preparing datasets

The datasets required by this program are handled with the Tensorflow Datasets library (TFDS).


Custom dataset labeling process

Custom data image labeling was done using a tool called CVAT (https://github.com/openvinotoolkit/cvat).

After labeling is complete, export the dataset from CVAT in the Segmentation mask 1.1 format.

You can check the RGB values for each class in the labelmap.txt file of the exported dataset.
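
For reference, in the Segmentation mask 1.1 export each labelmap.txt line has the form label:R,G,B:: . A minimal parser sketch:

def parse_labelmap(path):
    # Maps class name -> (R, G, B), skipping comment lines and blanks.
    classes = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            name, rgb = line.split(':')[:2]
            classes[name] = tuple(int(v) for v in rgb.split(','))
    return classes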

  • How to Use?
    1. Label semantic data (mask) using CVAT tool
    2. Raw data augmentation
      • Image shift
      • Image blurring
      • Image rotate
      • Mask-area image conversion, etc.

First, CVAT does not automatically create a label for images that have no foreground object. For such images, generate an all-zero (background-only) label:

cd data_augmentation
python make_blank_label.py
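
Conceptually, make_blank_label.py amounts to writing an all-zero mask per unlabeled image, as in this sketch (file paths are examples, assuming OpenCV):

import cv2
import numpy as np

img = cv2.imread('rgb/image_1.png')
# An all-zero mask means every pixel belongs to the background class.
blank = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.imwrite('gt/image_1_mask.png', blank)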

Second, run the augmentation script:

python augment_data.py

Specify the input and output paths using the options below.

--rgb_path RGB_PATH   raw image path
--mask_path MASK_PATH
                        raw mask path
--obj_mask_path OBJ_MASK_PATH
                        raw obj mask path
--label_map_path LABEL_MAP_PATH
                        CVAT's labelmap.txt path
--bg_path BG_PATH     background image path; converts the raw RGB image within the mask area
--output_path OUTPUT_PATH
                        Path to save the conversion result

Caution!

You can choose which augmentations to apply in the main block at the bottom of the script. Modify this section to suit your preferred augmentation method.


Convert TFDS dataset

We use the tensorflow datasets library to convert the generated semantic labels into tf.data format.


Move the augmented RGB images and saved semantic label images into the folder structure below.


└── dataset 
    ├── rgb/  # RGB image.
    |   ├── image_1.png 
    |   └── image_2.png
    └── gt/  # Semantic label.    
        ├── image_1_mask.png 
        └── image_2_output.png

Compress that directory into full_semantic.zip:

zip -r full_semantic.zip ./*

When the compression is complete, the archive should have the following structure:

└──full_semantic.zip
    ├── rgb/  # RGB image.
    |   ├── image_1.png 
    |   └── image_2.png
    └── gt/  # Semantic label.    
        ├── image_1_mask.png 
        └── image_2_output.png

Then, create the folder structure below and move full_semantic.zip into it.

/home/$USER/tensorflow_datasets/downloads/manual

Finally, build the dataset.

cd hole-detection/full_semantic/
tfds build

# if the build succeeds
cd /home/$USER/tensorflow_datasets/
cp -r full_semantic /home/$USER/hole-detection/datasets/
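
Once built, the dataset can be loaded by name through TFDS. A minimal sketch, assuming the builder registered itself as full_semantic with a train split:

import tensorflow_datasets as tfds

ds, info = tfds.load('full_semantic', split='train', with_info=True)
print(info.features)

for sample in ds.take(1):
    print(list(sample.keys()))  # feature keys depend on the builder definition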

Caution!

The output of augment_data.py consists of three paths: RGB, MASK, and VIS_MASK.

VIS_MASK is not a label to be used for training; it exists only for visual confirmation, so do not use it in the steps below.



4. Train

tf.data can leak memory during training, so use TCMalloc to avoid memory-allocation issues.

1. sudo apt-get install libtcmalloc-minimal4
2. dpkg -L libtcmalloc-minimal4

!! Remember the TCMalloc library path shown by step 2.

Training semantic segmentation


How to RUN?

When using a single GPU:

LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4.3.0" python train.py

When using multiple GPUs:

LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4.3.0" python train.py --multi_gpu

Caution!

This repository supports training and inference in both single-GPU and multi-GPU environments.
When using a single GPU, you can select which GPU number to use.
See train.py --help and pass the settings required for training as argument values.
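
For reference, multi-GPU training in Keras is typically wrapped in tf.distribute.MirroredStrategy (presumably what --multi_gpu enables). A minimal sketch with a placeholder model:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print('Number of replicas:', strategy.num_replicas_in_sync)

with strategy.scope():
    # Placeholder model; the repository builds one of its supported segmentation models here.
    model = tf.keras.Sequential(
        [tf.keras.layers.Conv2D(19, 1, input_shape=(None, None, 3))])
    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))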



5. Eval

Evaluate the accuracy of the model after training and compute the inference rate.

Computed items: FLOPs, mIoU metric, average inference time

python eval.py --checkpoint_dir='./checkpoints/' --weight_name='weight.h5'

If you want to check the inference result, add the --visualize argument.
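
For reference, the mIoU metric can be computed with tf.keras.metrics.MeanIoU. A minimal sketch with dummy integer class maps (19 classes, as in Cityscapes, is an example):

import numpy as np
import tensorflow as tf

miou = tf.keras.metrics.MeanIoU(num_classes=19)  # e.g. the 19 Cityscapes eval classes

y_true = np.random.randint(0, 19, size=(1, 64, 64))  # dummy ground-truth class map
y_pred = np.random.randint(0, 19, size=(1, 64, 64))  # dummy predicted class map
miou.update_state(y_true, y_pred)
print(miou.result().numpy())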

6. Predict

A web camera or stored video can be used for real-time inference.

Real-time inference on a video file:

python predict_video.py

Real-time inference on a webcam:

python predict_realtime.py

If you want to check the inference result, add the --visualize argument.
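
Both scripts follow the usual OpenCV capture loop. A simplified sketch, with the segmentation step left as a placeholder:

import cv2

cap = cv2.VideoCapture(0)  # 0 = default webcam; pass a file path for video inference
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # mask = model.predict(preprocess(frame))  # placeholder for the segmentation step
    cv2.imshow('result', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):      # press q to quit
        break
cap.release()
cv2.destroyAllWindows()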



7. Convert TF-TRT

Provides a TF-TRT conversion function to enable high-speed inference. Install TensorRT before converting.

7.1 Install CUDA, CuDNN, TensorRT files


The CUDA, cuDNN, and TensorRT versions used by the current code are listed below.
Click a link to go to the install page.
Skip this step if CUDA and cuDNN are already installed.


CUDA : CUDA 11.1

CuDNN : CuDNN 8.1.1

TensorRT : TensorRT 7.2.2.3


7.2 Install TensorRT


Activate the virtual environment (skip this if you do not use a virtual environment such as Anaconda).

conda activate ${env_name}

Go to the directory where you downloaded TensorRT, extract the archive, and upgrade pip.

tar -xvzf TensorRT-7.2.2.3.Ubuntu-18.04.x86_64-gnu.cuda-11.1.cudnn8.0.tar.gz
pip3 install --upgrade pip

Open ~/.bashrc in an editor and add the environment variables (adjust the TensorRT paths to your install location).

sudo gedit ~/.bashrc
export PATH="/usr/local/cuda-11.1/bin:$PATH"
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/park/TensorRT-7.2.2.3/onnx_graphsurgeon
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-11.1/lib64:/usr/local/cuda/extras/CUPTI/lib64:/home/park/TensorRT-7.2.2.3/lib"

Install the TensorRT Python package.

cd python
python3 -m pip install tensorrt-7.2.2.3-cp38-none-linux_x86_64.whl

cd ../uff/
python3 -m pip install uff-0.6.9-py2.py3-none-any.whl

cd ../graphsurgeon
python3 -m pip install graphsurgeon-0.4.5-py2.py3-none-any.whl

cd ../onnx_graphsurgeon
python3 -m pip install onnx_graphsurgeon-0.2.6-py2.py3-none-any.whl

Open a terminal and check that the installation was successful.

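For example, importing the Python package should report the installed version:

import tensorrt
print(tensorrt.__version__)  # expected: 7.2.2.3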


7.3 Convert to TF-TensorRT

A pre-trained graph model (.pb) is required before TF-TRT conversion.
If you do not have a graph model, follow procedure 7.3.1; if you do, skip to 7.3.2.

  • 7.3.1 If there is no graph model

    If you have weights trained with train.py, this repository provides a function to convert them into a graph model.

    Enable graph-saving mode with the --saved_model argument of train.py, and pass the path where the trained model weights are stored:

      python train.py --saved_model --saved_model_path='your_model_weights.h5'
    

    The default saving path of the converted graph model is './checkpoints/export_path/1' .



  • 7.3.2 Converting

    If the (.pb) file exists, run the script below to perform the conversion.

      python convert_to_tensorRT.py ...(argparse options)
    

    The model is converted via the TensorRT engine. The engine is built for a fixed input size, so check the --help argument before running the script.


    The following options are provided:

    --image_size              model input resolution (HxW)
    --input_saved_model_dir   directory path of the .pb file
    --output_saved_model_dir  save path for the TensorRT-converted model
    --floating_mode           floating-point mode to use for conversion
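
Under the hood, a TF 2.x TF-TRT conversion looks roughly like the sketch below (paths and the precision mode are examples; convert_to_tensorRT.py presumably wires them to the options above):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.FP16)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='./checkpoints/export_path/1',  # example .pb path
    conversion_params=params)
converter.convert()
converter.save('./checkpoints/tensorrt_model/1')          # example output path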



8. Tensorflow serving

Provides the ability to serve pre-trained graph models (.pb) or models built with the TensorRT engine.


Tensorflow serving is a tool that provides inference services within a Docker virtual environment.

Before working, install Docker for the current operating system version. (https://docs.docker.com/engine/install/ubuntu/)

# Ubuntu 18.04 docker install

# 1. Preset
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common

# 2. Add docker repository keys
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

# 3. Install
sudo apt update
sudo apt install docker-ce

Once the Docker installation and model file are ready, you can run the server right away. Before running, review the options used to configure the serving server.

docker run
--runtime=nvidia  # use the nvidia GPU runtime inside docker
-t                # allocate a tty
-p 8500:8500      # port to expose from the docker environment
--rm              # automatically delete the container when it stops
-v "model_path:/models/test_model2"  # {host path of the .pb model}:{deploy path inside docker; requests use the name test_model2}
-e MODEL_NAME=test_model2  # model name to call via gRPC / REST API
-e NVIDIA_VISIBLE_DEVICES="0"  # GPU number to use
-e LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64  # cuda-11.1 environment variables for building the TensorRT engine (TensorRT 7.2.2.3)
-e TF_TENSORRT_VERSION=7.2.2 tensorflow/serving:2.6.2-gpu  # TensorRT version and the matching tensorflow/serving GPU image
--port=8500  # port used when serving (must match the docker port mapping)

Additional information can be found with the --help argument at the end of the command.

Please refer to tf_serving_sample.py for an example of accessing the Tensorflow-serving server and making an inference request.
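
In outline, a gRPC prediction request looks like the sketch below (the input tensor name, input size, and signature name are assumptions; check your model's serving signature):

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')  # port mapped in docker run
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'test_model2'            # MODEL_NAME set in docker run
request.model_spec.signature_name = 'serving_default'

image = np.zeros((1, 480, 640, 3), dtype=np.float32)  # dummy input; match your model
request.inputs['input_1'].CopyFrom(tf.make_tensor_proto(image))  # 'input_1' is an assumed tensor name

result = stub.Predict(request, 10.0)  # 10-second timeout
print(result.outputs.keys())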

References


  1. DDRNet : https://github.com/ydhongHIT/DDRNet
  2. CSNet-seg : https://github.com/chansoopark98/CSNet-seg
  3. A study on lightweight networks for efficient object detection based on deep learning
  4. Efficient shot detector