This repository uses the TensorFlow Object Detection (TFOD) API to implement a training pipeline for multiple object detection applications based on the Single Shot MultiBox Detector (SSD). The pipeline is trained and tested on a regular vehicle dataset.
The entire training and inference process takes place inside a Docker container, and the Dockerfile is provided to pull and build. The Dockerfile is based on the TFOD Docker image. The main branch of this repository uses TensorFlow 1.x. It is recommended to use a GPU-powered machine for the training process. An SSD model is able to operate in real time (up to 59 FPS on 300x300 px input images).
Please follow these instructions to install Docker Engine on your Ubuntu 18.04/20.04 machine. It is highly recommended to use a machine with an NVIDIA GPU. In order to use the GPU inside the Docker container, you must install nvidia-docker; you only need to do Step #2 of its installation guide.
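For reference, the nvidia-docker setup (Step #2) boils down to roughly the following commands on Ubuntu; defer to NVIDIA's installation guide if it has changed:

# Add NVIDIA's package repository and install nvidia-docker2 (per NVIDIA's guide)
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2
# Restart the Docker daemon so the NVIDIA runtime is picked up
sudo systemctl restart docker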
A basic Docker image has been built and pushed to Docker Hub. Once you have both Docker Engine and nvidia-docker installed on your Ubuntu machine, open a terminal and pull the Docker image:
docker pull behnammoradi026/deep_learning:tfod
Note: CUDA is already installed in this Docker image.
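As an optional sanity check (assuming nvidia-docker is set up correctly), the container should be able to see your GPU:

# The nvidia-smi GPU table should print if GPU passthrough works
docker run --rm --gpus all behnammoradi026/deep_learning:tfod nvidia-smi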
Once the Docker image is pulled, run the following commands to start and enter the container:
# Navigate to the repository's main directory
cd SSD-Object-Detection-TFOD-Training-Pipeline
# Run and enter the docker container
source scripts/run_docker.sh
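If you are curious what scripts/run_docker.sh does under the hood, it presumably amounts to a docker run that mounts the repository and enables the GPU. A hypothetical equivalent (the exact flags in the script may differ):

# Hypothetical equivalent of scripts/run_docker.sh: mount the repository so data,
# annotations and checkpoints persist on the host, and expose the GPU
docker run -it --gpus all \
  -v "$(pwd)":/SSD-Object-Detection-TFOD-Training-Pipeline \
  behnammoradi026/deep_learning:tfod /bin/bash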
To install the essential dependencies for annotation and to test the TensorFlow Object Detection (TFOD) API, please run the following script. Note: make sure your terminal is inside the Docker container!
# Navigate to the repository's main directory
cd SSD-Object-Detection-TFOD-Training-Pipeline
# Install dependencies.
source scripts/install_tfod.sh
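For reference, installing the TFOD API for TF1 typically involves compiling the protobuf definitions and installing the object_detection package; scripts/install_tfod.sh presumably wraps steps similar to these (paths are illustrative and assume the tensorflow/models repository is already cloned):

# Illustrative TF1 TFOD installation steps (not necessarily what the script does verbatim)
cd models/research
protoc object_detection/protos/*.proto --python_out=.   # compile the .proto definitions
cp object_detection/packages/tf1/setup.py .
python -m pip install .                                  # install object_detection and its dependencies
python object_detection/builders/model_builder_tf1_test.py   # runs the self-test shown below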
If TFOD is properly installed, you should see output like the following in your terminal:
[ OK ] ModelBuilderTF1Test.test_invalid_model_config_proto
[ RUN ] ModelBuilderTF1Test.test_invalid_second_stage_batch_size
[ OK ] ModelBuilderTF1Test.test_invalid_second_stage_batch_size
[ RUN ] ModelBuilderTF1Test.test_session
[ SKIPPED ] ModelBuilderTF1Test.test_session
[ RUN ] ModelBuilderTF1Test.test_unknown_faster_rcnn_feature_extractor
[ OK ] ModelBuilderTF1Test.test_unknown_faster_rcnn_feature_extractor
[ RUN ] ModelBuilderTF1Test.test_unknown_meta_architecture
[ OK ] ModelBuilderTF1Test.test_unknown_meta_architecture
[ RUN ] ModelBuilderTF1Test.test_unknown_ssd_feature_extractor
[ OK ] ModelBuilderTF1Test.test_unknown_ssd_feature_extractor
----------------------------------------------------------------------
Ran 21 tests in 0.135s
OK (skipped=1)
There are several tools for image annotation; the best one is labelImg, which is pre-installed in the Docker image. Note: the SSD-Object-Detection-TFOD-Training-Pipeline folder is shared between your host and the Docker container, so whatever you paste or generate inside that folder will persist after the container is shut down. To start annotating data: 1- Split your dataset into training data (90%) and testing data (10%). 2- Create your project folder in the dataset directory:
cd SSD-Object-Detection-TFOD-Training-Pipeline/dataset
mkdir <your project name>
cd <your project name>
mkdir train test
3- Paste your training and testing images into their respective directories. 4- In your Docker terminal, open labelImg and start annotating your training and testing data.
cd SSD-Object-Detection-TFOD-Training-Pipeline/dataset/<your project name>/train
# Run the annotation tool
labelImg
5- Start annotating your dataset. Note: working with labelImg is simple and straightforward; this tutorial explains the process. Note: make sure you run labelImg inside the Docker container.
TFOD uses TFRecord files to read the training and testing images and their annotations. Once your annotation process is done (for both training and testing images), generate the record files:
# Navigate to the repository's main directory
cd SSD-Object-Detection-TFOD-Training-Pipeline
export PROJECT_NAME=<your project name>
# Generate your record files
cd scripts
source run_generate_tfrecords.sh
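The script presumably converts the labelImg XML annotations into TFRecord files under dataset/<your project name>/records. Purely as an illustration, a typical XML-to-TFRecord converter is invoked like this (the script name and flags below are hypothetical; the repository's script may differ):

# Hypothetical converter call; generate_tfrecords.py and its flags are illustrative only
python generate_tfrecords.py \
  --image_dir=dataset/$PROJECT_NAME/train \
  --output_path=dataset/$PROJECT_NAME/records/train.record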
TFOD also needs a label map file, which is simple to generate:
cd SSD-Object-Detection-TFOD-Training-Pipeline/dataset/<your project name>/records
touch classes.pbtxt
Assuming you have one label in your dataset, open classes.pbtxt and paste the following:
item {
  id: 1
  name: '<your label>'
}
Note: replace <your label> with your label name. Note: if you have more than one label (for example, two labels), then you must add another item in your classes.pbtxt file:
item {
  id: 1
  name: '<your label #1>'
}
item {
  id: 2
  name: '<your label #2>'
}
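If you prefer to create the file from the shell in one step, a heredoc produces the same content:

# Write the two-label map shown above in a single command
cat > classes.pbtxt <<'EOF'
item {
  id: 1
  name: '<your label #1>'
}
item {
  id: 2
  name: '<your label #2>'
}
EOF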
In order to start the training process, you must download the pre-trained ssd_inception_v2 model:
mkdir -p SSD-Object-Detection-TFOD-Training-Pipeline/workspace/<your project name>
cd SSD-Object-Detection-TFOD-Training-Pipeline/workspace/<your project name>
mkdir exported_model training
cd training
wget http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2018_01_28.tar.gz
tar -zxvf ssd_inception_v2_coco_2018_01_28.tar.gz
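The extracted directory typically contains the pre-trained checkpoint that training will start from:

ls ssd_inception_v2_coco_2018_01_28
# checkpoint  frozen_inference_graph.pb  model.ckpt.data-00000-of-00001
# model.ckpt.index  model.ckpt.meta  pipeline.config  saved_model/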
cd SSD-Object-Detection-TFOD-Training-Pipeline/network_configuration/
mkdir <your project name>
cd <your project name>
# You can use the template inside vehicle_detection project.
cp ../vehicle_detection/ssd_hypes.config ./
# Open the file in your text editor and replace <YOUR PROJECT TITLE> with your own project name throughout the file.
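Instead of editing by hand, you can do the substitution with sed (assuming the placeholder string appears verbatim in the template; replace your_project_name with your actual project name):

# Replace every occurrence of the placeholder in place
sed -i 's/<YOUR PROJECT TITLE>/your_project_name/g' ssd_hypes.config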
Once your configuration file is ready, you can start the training process:
# Navigate to the repository's main directory
cd SSD-Object-Detection-TFOD-Training-Pipeline
export PROJECT_NAME=<your project name>
source scripts/train.sh
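Under the hood, TF1 TFOD training is driven by model_main.py; scripts/train.sh presumably wraps an invocation along these lines (the paths below are assumptions, not the script's exact contents):

# Illustrative TF1 training call; check scripts/train.sh for the real paths
python models/research/object_detection/model_main.py \
  --pipeline_config_path=network_configuration/$PROJECT_NAME/ssd_hypes.config \
  --model_dir=workspace/$PROJECT_NAME/training \
  --alsologtostderr

# Optionally monitor loss and mAP curves from a second terminal
tensorboard --logdir=workspace/$PROJECT_NAME/training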
Once the training process is done, you must export your model from the generated checkpoints:
# Navigate to the repository's main directory
cd SSD-Object-Detection-TFOD-Training-Pipeline
export PROJECT_NAME=<your project name>
# Open scripts/export_model.sh and make sure you are pointing to the right TRAINED_CHECKPOINT_PREFIX
source scripts/export_model.sh
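For reference, TF1 model export is handled by TFOD's export_inference_graph.py; scripts/export_model.sh presumably wraps a call like the following (the checkpoint step and paths are illustrative):

# Illustrative TF1 export call; pick the checkpoint step you actually trained to
python models/research/object_detection/export_inference_graph.py \
  --input_type=image_tensor \
  --pipeline_config_path=network_configuration/$PROJECT_NAME/ssd_hypes.config \
  --trained_checkpoint_prefix=workspace/$PROJECT_NAME/training/model.ckpt-200000 \
  --output_directory=workspace/$PROJECT_NAME/exported_model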
Your frozen_inference_graph.pb is saved inside SSD-Object-Detection-TFOD-Training-Pipeline/workspace/<your project name>/exported_model.
# Navigate to the repository's main directory
cd SSD-Object-Detection-TFOD-Training-Pipeline
# Open scripts/inference.sh and edit RAW_IMAGE (enter your own image name at the end of the line)
# Save the file
export PROJECT_NAME=<your project name>
source scripts/inference.sh
# The output is saved in this directory!
TensorFlow Object Detection repository