/deep-learning-training-gui

Train and predict with pre-trained deep learning models through a GUI (web app). No need to juggle many parameters or write data-preprocessing code.



Description

My goal is to simplify the installation and training of pre-trained deep learning models through a GUI (in other words, a web app) without writing extra code. Point it at your dataset, start training right away, and monitor progress with TensorBoard or the DLTGUI tool. No need to juggle many parameters or write data-preprocessing code.

While developing this application, I was inspired by the DIGITS system developed by NVIDIA.

  • You can train image classification models without any hassle.
  • It is easy to train an image classification model, save it, and make predictions from the saved model.
  • Only a few parameters to set!
  • You can train on top of pre-trained models.
  • Object detection is not included in 1.0, but future versions will make training and using object detection algorithms much easier.
  • You can train your model on a GPU or CPU.
  • Parallel operation is possible.
  • You won't need a second terminal or a script to run TensorBoard.

In the words of Stephen Hawking:

Science is beautiful when it makes simple explanations of phenomena or connections between different observations. Examples include the double helix in biology and the fundamental equations of physics.

Guide - Youtube Video (Coming Soon)

Before Training

Updates

DLTGUI Version 1.0.9

  • Bug fixes (showing the heatmap failed for CUDA >= 10.0; fixed).

DLTGUI Version 1.0.8

  • Bug fixes.

DLTGUI Version 1.0.7

  • Many bugs have been solved.

  • You can fine-tune your model, which makes it easy to increase its accuracy.

  • You can see which parts of an image your model focuses on while classifying (class activation map / heatmap, available for MobileNetV2 only).
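A class activation map is essentially a weighted sum of the last convolutional feature maps, using the classifier weights of the predicted class. A minimal, framework-free sketch of that idea (the feature maps and weights below are made-up toy values, not DLTGUI's actual internals):

```python
def class_activation_map(feature_maps, class_weights):
    """Weighted sum of 2D feature maps, normalized to [0, 1].

    feature_maps: list of K maps, each a list of rows of floats.
    class_weights: the K classifier weights of the predicted class.
    """
    rows, cols = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * cols for _ in range(rows)]
    for fmap, w in zip(feature_maps, class_weights):
        for r in range(rows):
            for c in range(cols):
                cam[r][c] += w * fmap[r][c]
    # Normalize to [0, 1] so the map can be rendered as a heatmap overlay.
    lo = min(min(row) for row in cam)
    hi = max(max(row) for row in cam)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in cam]

# Toy example: two 2x2 feature maps with equal weights.
maps = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]]
heat = class_activation_map(maps, [0.5, 0.5])
```

In the real app, the feature maps come from MobileNetV2's last convolutional layer and the weights from its classification head; the normalized map is then resized and overlaid on the input image.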

DLTGUI Version 1.0.6

  • Bug fixes.

DLTGUI Version 1.0.5

  • Now you can do data augmentation using Augmentor.

DLTGUI Version 1.0.4

  • Now you can choose CPU or GPU before the training.
  • You can choose the activation function for single-class training (Sigmoid and ReLU [new]).
  • Added SimpleCNNModel
  • Fixed bugs

DLTGUI Version 1.0.2

  • Fixed the single-class problem; you can now train a one-class model.
  • Added sigmoid as an activation function and binary_crossentropy as a loss function.
  • Added new functions to DLTGUI (prepare_data, sigmoid, and more).
  • Added a new example dataset.
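For a one-class (binary) model, the sigmoid squashes the network's logit into a probability and binary cross-entropy scores it against the 0/1 label. A stdlib-only sketch of both formulas (illustrative; not DLTGUI's actual prepare_data/sigmoid helpers):

```python
import math

def sigmoid(x):
    # 1 / (1 + e^-x): maps any real-valued logit to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip to avoid log(0), then apply -[y*log(p) + (1-y)*log(1-p)].
    p = min(max(y_pred, eps), 1.0 - eps)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

p = sigmoid(0.0)                     # an undecided logit gives p = 0.5
loss = binary_crossentropy(1.0, p)   # loss for a positive example
```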

DLTGUI Version 1.0.1:

  • Now you can use InceptionV3, VGG16, VGG19 and NASNetMobile models. [Image Classification]

Getting started

Prerequisites

  • Anaconda 64-bit
  • Python 3.7.3
  • Tensorflow 2.0.1
  • CUDA and cuDNN (minimum CUDA 10.0, for GPU usage)
  • Numpy 1.16.4
  • Matplotlib
  • PIL
  • subprocess
  • pathlib
  • Augmentor

Available models

  • MobileNetV2
  • Inception V3
  • VGG16
  • VGG19
  • NASNetMobile
  • SimpleCnnModel

Dataset Folder Structure

The following is an example of how a dataset should be structured. Before you train a deep learning model, place your dataset inside the datasets directory, with one subfolder per class.

├── datasets/
    ├── example_dataset/
    │   └── cat
    │       ├── img_1.jpg/png
    │       └── img_2.jpg/png
    └── flower_photos/
        ├── daisy
        ├── dandelion
        ├── roses
        ├── sunflowers
        └── tulips
        
This structure is used for image classification.
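It can help to sanity-check that layout before training. A small pathlib sketch (pathlib is already in the prerequisites) that lists the class folders and counts images per class, matching the example_dataset tree above:

```python
from pathlib import Path

def scan_dataset(root):
    """Return {class_name: image_count} for a one-folder-per-class dataset."""
    root = Path(root)
    counts = {}
    for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        images = [f for f in class_dir.iterdir()
                  if f.suffix.lower() in {".jpg", ".jpeg", ".png"}]
        counts[class_dir.name] = len(images)
    return counts

# Build a tiny throwaway dataset matching the structure above, then scan it.
import tempfile
base = Path(tempfile.mkdtemp()) / "datasets" / "example_dataset"
(base / "cat").mkdir(parents=True)
(base / "cat" / "img_1.jpg").touch()
(base / "cat" / "img_2.png").touch()
counts = scan_dataset(base)
```

The number of keys in the result is exactly the Number Of Classes parameter asked for on the training page.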

Usage

Page - Home

  1. Clone this repo.
  2. cd Deep-Learning-Training-GUI
  3. On your conda terminal: pip install -r requirements.txt
  4. Set up your dataset directory as shown above.
  5. Once your dataset is in place, go to the terminal and run python app.py. You can access the program at localhost:5000
  6. Now you will see the home page.

Home

Page - Training - Parameters

Training

  1. Enter the path where your dataset is located. For example, to select the flower_photos folder inside datasets, write it in the form element like this: datasets/flower_photos
  2. Split the dataset - specify what percentage of the data to hold out as a test set.
  3. Pre-trained Models - currently only MobileNetV2 is available, but future versions will let you select other pre-trained models for fine-tuning [not available yet].
  4. CPU / GPU - specify whether you want to train on the GPU or CPU (the first version will automatically run on the GPU).
  5. Number Of Classes - continuing the flower_photos example: there are 5 subfolders under the flower_photos folder, so the class count is 5. When you train your own dataset, create one subfolder per class.
  6. Batch Size - how many training samples are fed to the network at a time. If you have a 1080 Ti or better GPU, you can set it to 64 or 128. The larger the batch size, the less noisy the gradient the model learns from.
  7. Epoch - the number of times the full training set is shown to the network. So with 10 epochs, the training data is shown to the model 10 times.
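The split percentage, batch size, and epoch count interact in a simple way: the split decides how many samples go to training, and the number of batches per epoch is just that count divided by the batch size. A stdlib-only sketch of that bookkeeping (illustrative; DLTGUI's internal names may differ):

```python
import math
import random

def split_dataset(files, test_fraction, seed=42):
    """Shuffle file paths and hold out test_fraction of them for testing."""
    files = list(files)
    random.Random(seed).shuffle(files)  # fixed seed for a reproducible split
    n_test = int(len(files) * test_fraction)
    return files[n_test:], files[:n_test]   # (train, test)

# 100 hypothetical image files, 20% held out for testing.
train, test = split_dataset([f"img_{i}.jpg" for i in range(100)], 0.2)

batch_size = 32
steps_per_epoch = math.ceil(len(train) / batch_size)  # batches per epoch
```

With 80 training samples and a batch size of 32, each epoch consists of 3 batches; 10 epochs means the network sees those 80 samples 10 times over.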

Training and TensorBoard

When training starts, you can access TensorBoard without running any script in a terminal. Check localhost:6006
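Under the hood, launching TensorBoard alongside training can be done with subprocess (already in the prerequisites). A hedged sketch of that idea; the logs/ directory name is a placeholder, not necessarily the one DLTGUI actually uses:

```python
import subprocess

def tensorboard_command(logdir, port=6006):
    """Build the command line that launches TensorBoard on a log directory."""
    return ["tensorboard", "--logdir", logdir, "--port", str(port)]

cmd = tensorboard_command("logs/")
# In the app, this would run in the background next to the training process:
# proc = subprocess.Popen(cmd)   # then open localhost:6006 in the browser
```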

Training-Live

Prediction

Prediction

Result

Result

Contributing

Contributions with example scripts for other frameworks (PyTorch or Caffe2) and other pre-trained models are welcome!

Guidelines

Coming soon.

Contributors

To-Do List

  • Release 5 pre-trained models.
  • Choosing CPU or GPU before the training.
  • Choosing an activation function for single-class training (Sigmoid and ReLU).
  • Data Augmentation
  • Fine-Tuning
  • Heatmap on predicted images.
  • Object Detection - Mask RCNN.

References 📚