DamageDetect

A deep learning project that investigates the possibility of detecting alterations on street electronic cabinets.

State predictions of EDP electronic cabinets

Repository containing the full workflow to:

  • Process, split and prepare data for object detection training with YOLOv4 (src/dataug.py, src/datasplit.py).
  • Harness the trained weights to predict and crop the desired object (src/predict.py).
  • Classify the state of EDP electronic cabinets based on the cropped images.

Installation & Setup

Requirements

  • Ensure that git & dvc are installed on your machine (git installation, dvc installation).
  • Ensure that Python >= 3.6 is installed.
  • Ensure that conda is installed in your environment (miniconda is preferred, as it is more lightweight).

Repository

To use the repository, go to your working folder, open a terminal and run:

git clone https://gitlab.com/fbraza/edp-altran.git

You should be prompted to enter your GitLab login and password. Next, to install the dependencies, run:

conda env create --name [name_of_your_env] -f environment.yml

Finally, once everything is set up, go to the root folder of the repository, open a terminal and run:

dvc pull

This will fetch the latest version of the data for this project. The project structure should look like the following:

.
├── app.py
├── assets
├── data
├── data.dvc
├── environment.yml
├── __init__.py
├── notebooks
├── README.md
├── REPORT.md
├── src
└── test

Try to stick to the structure of the data folder. It should look like this:

.
├── lab_images         # folder for labelled images
├── metrics            # weights, yolo config and metrics are found here
├── raw_images         # raw images as source data, should be immutable
├── results_crop       # image output of our predicted crop
├── results_pred       # image output of our predicted electric cabinet 
├── tra_images         # augmented / transformed images
├── untracked_images   # all images you want to keep locally not tracked by dvc
└── yol_images         # all data prepared for the YOLO model

The structure of the data/yol_images should look like this:

.
├── obj        # a folder that contains all augmented images part of the training set
├── obj.data   # a file generated by CVAT after data labeling
├── obj.names  # a file generated by CVAT after data labeling
├── test       # a folder that contains all augmented images part of the validation / testing set
├── test.txt   # text file that contains the relative path of each image in the testing set
└── train.txt  # text file that contains the relative path of each image in the training set
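For reference, obj.data is a small text file that wires these paths together. A typical darknet obj.data looks like the fragment below (the class count and backup path are examples, adjust them to your setup):

```
classes = 1
train   = data/train.txt
valid   = data/test.txt
names   = data/obj.names
backup  = backup/
```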

Images labeling

In this project we used the online version of CVAT to label our data. If required, you can also install it locally in a Docker container (documentation here). Note that the online version has some limitations: only 10 tasks per user, and the data size must be lower than 500 MB. The software is pretty intuitive and easy to use, but if needed head over to the documentation. After you finish labeling the data, you can download your images with the corresponding labeling metadata. The typical structure for a YOLO project will look like this:

.
├── obj.data             # a file generated by CVAT after data labeling
├── obj.names            # a file generated by CVAT after data labeling
├── obj_train_data       # a folder that contains all images and their respective labels as text files
└── train.txt            # text file that contains the relative path of each image in the obj_train_data folder

From this export folder:

  • move obj.data & obj.names into your data/yol_images folder.
  • move the contents of the obj_train_data folder into data/lab_images. From now on you are ready to process and prepare your data for YOLO.

YOLO training

For object detection training we used version 4 of YOLO (YOLOv4). Don't be confused by the name Darknet, which is actually the name of the underlying neural network framework. If you want to re-run the training from scratch with YOLO, first check that you have a C build environment installed. Also, if you want to be able to use the GPU, check that CUDA and the NVIDIA drivers are installed and up to date. Then, to use YOLO on your computer, clone the repository:

git clone https://github.com/AlexeyAB/darknet

Next, you need to edit the values of a few build flags in the Makefile:

GPU=0        # change to 1 to build with GPU support
CUDNN=0      # change to 1 to accelerate training with cuDNN
CUDNN_HALF=0 # change to 1 for a further speed-up (only works on recent, powerful GPUs)
OPENCV=0     # change to 1 if you use OpenCV

To modify these values you can use commands similar to the ones below:

sed -i 's/OPENCV=0/OPENCV=1/' Makefile
sed -i 's/GPU=0/GPU=1/' Makefile
sed -i 's/CUDNN=0/CUDNN=1/' Makefile
sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/' Makefile

Once done you can compile the code using:

make

Before running the training you need to configure some parameters. In darknet/cfg/ you have a list of configuration files with the extension .cfg. These files define the way the architecture will be used, depending on whether you use the full or lite YOLO models. In our case we used yolov4.cfg and edited it as follows:

  • batch = 64 and subdivisions = 16 for best results. If you run into memory issues, increase subdivisions to 32.
  • width = 416, height = 416 (these should be multiples of 32; 416 is standard).
  • Make the rest of the changes to the .cfg file based on how many classes you train your detector with:
    • to determine max_batches, use the formula max_batches = (# of classes) * 2000 (but do not go below 6000). Modify the value accordingly in the three yolo layers.
    • to determine the number of filters, use the formula filters = (# of classes + 5) * 3. Modify the value accordingly in the three convolutional layers immediately preceding the yolo layers.
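The two formulas above can be sketched as a small helper (hypothetical, not part of the repository):

```python
def yolo_cfg_values(num_classes: int) -> dict:
    """Compute the YOLOv4 .cfg values for a given number of classes."""
    max_batches = max(num_classes * 2000, 6000)  # never go below 6000
    filters = (num_classes + 5) * 3              # for the conv layers before each yolo layer
    return {"classes": num_classes, "max_batches": max_batches, "filters": filters}
```

For a single-class detector this gives max_batches = 6000 and filters = 18.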

Usage

Overview of the CLI app

The interface of the repository is encapsulated in a CLI application. Running the following command will output the help documentation:

python app.py --help

Usage: app.py [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  predict-and-output  localize boxes and scrap them out
  prepare             load img, split into train-val and prepare for yolo
  transform           Transform and augment a set of raw images

To get help on any command, run:

python app.py [COMMAND] --help

Example:

# python app.py prepare --help
Usage: app.py prepare [OPTIONS]

  load img, split into train-val and prepare for yolo

Options:
  --path_in TEXT        Path of transformed images  [required]
  --split-factor FLOAT  Factor to split into training and validation sets
  --help                Show this message and exit.

Preparing data for YOLO

To get the data ready for YOLO, run the app.py transform and prepare commands in sequence. One thing to be careful with is that each time you launch these commands you modify the augmented dataset and the shuffling of the training and testing sets. Keep your git and dvc practices sharp to track any change in the data and relate it to the ongoing experiment.

Training with YOLO

To launch a new training with YOLOv4, prepare your data accordingly (clone the repository, get the data with dvc, and run the transform and prepare commands of our CLI app). Then move the content of data/yol_images to darknet/data and run:

./darknet detector train data/obj.data cfg/yolov4-custom.cfg yolov4.conv.137 -map

The training should take from several hours to days, depending on the computing power you have access to and the number of classes you train your model with. Once finished, run the following command to get the metrics on the validation set:

# ./darknet detector map data/obj.data [config-file] [weights saved in the darknet/backup folder]
./darknet detector map data/obj.data cfg/yolov4-custom.cfg backup/yolov4-custom_5000.weights

You can also directly output the metrics into a text file. We recommend saving it into the data/metrics folder of this repository:

./darknet detector map data/obj.data cfg/yolov4-custom.cfg backup/yolov4-custom_5000.weights > [your_repo]/data/metrics.txt
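Once the metrics are saved, a short script can pull the mAP value back out of the file. A sketch (hypothetical, not part of the repository), assuming the output contains darknet's usual summary line of the form `mean average precision (mAP@0.50) = 0.873617`:

```python
import re

def parse_map(metrics_text: str) -> float:
    """Extract the mAP value from saved `darknet detector map` output."""
    match = re.search(
        r"mean average precision \(mAP@[\d.]+\)\s*=\s*([\d.]+)", metrics_text
    )
    if match is None:
        raise ValueError("no mAP line found in metrics output")
    return float(match.group(1))
```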

Author

Faouzi Braza, for the Expertise Center of AI & Analytics, Altran Portugal (mail: joao.neves@altran.com)


TODO

  • WRITE TESTS!!!!!
  • Rewrite dataug.py in an OOP style
  • Use the pathlib Python library to deal with paths
  • Separate concerns in datasplit.py
  • Think about automating processes
    • creating obj.data and obj.names
    • configuring yolo .cfg files
    • create directories for training and backup weights
    • use the subprocess Python module to run ./darknet commands
    • automate training and validation
  • Find a way to track validation metrics with MLflow
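For the subprocess automation item, one option is to keep command construction pure and testable, and only shell out at the edge. A sketch (hypothetical, not part of the repository):

```python
import subprocess

def darknet_train_cmd(data: str = "data/obj.data",
                      cfg: str = "cfg/yolov4-custom.cfg",
                      weights: str = "yolov4.conv.137") -> list:
    """Build the argv for a darknet training run (see 'Training with YOLO')."""
    return ["./darknet", "detector", "train", data, cfg, weights, "-map"]

def run_darknet(argv: list) -> int:
    """Run a darknet command from the darknet/ folder; returns the exit code."""
    return subprocess.run(argv).returncode
```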