Human_Body_Segmentation

A deep learning project for semantic segmentation of the human body, used to change the image background.

This project predicts segmentation masks of the human body and uses them to change the background. Two models were trained: Unet with a MobileNetV2 backbone and DeepLabV3p with a MobileNetV2 backbone. The performance of both models on the validation dataset, after training for 45 epochs, is given below:

Model        Precision   Recall    F1-score   IoU
Unet         0.9195      0.8912    0.9044     0.8267
DeepLabV3p   0.9069      0.9131    0.9095     0.8348
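
For reference, these metrics can be computed from binary masks as in the minimal sketch below (the helper function and its NumPy-based interface are assumptions, not code from this repository):

```python
import numpy as np

def segmentation_metrics(pred_mask, true_mask):
    """Compute precision, recall, F1-score and IoU for binary masks.

    pred_mask, true_mask: arrays of the same shape, person pixels = 1.
    Hypothetical helper, not part of this repository.
    """
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)

    tp = np.logical_and(pred, true).sum()    # true positives
    fp = np.logical_and(pred, ~true).sum()   # false positives
    fn = np.logical_and(~pred, true).sum()   # false negatives

    eps = 1e-7                               # avoid division by zero
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)          # intersection over union
    return precision, recall, f1, iou
```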

The available background modes are listed below:

  • 0: Picture in BG
  • 1: Video in BG
  • 2: Blurred Picture in BG
  • 3: Blurred Video in BG
  • 4: B/W Picture in BG
  • 5: B/W Video in BG

Note: Pressing any of the keys 0-5 during prediction switches to the corresponding background mode (a sketch of this logic is shown below).
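
Changing the background amounts to compositing the person, selected by the predicted mask, over a transformed background. A minimal sketch of how the modes and key handling could look with OpenCV is given below; all names are illustrative assumptions, not the code in utils.py:

```python
import cv2
import numpy as np

def apply_background(frame, mask, background, mode):
    """Composite the person (given a binary mask) over a background.

    frame: BGR image containing the person
    mask: single-channel mask, person pixels = 1 (illustrative)
    background: BGR picture or video frame, same size as `frame`
    mode: 0-5, matching the background modes listed above
    """
    if mode in (2, 3):                      # blurred picture / video
        background = cv2.GaussianBlur(background, (21, 21), 0)
    elif mode in (4, 5):                    # B/W picture / video
        gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
        background = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)

    mask3 = np.repeat(mask[:, :, None], 3, axis=2).astype(np.uint8)
    return frame * mask3 + background * (1 - mask3)

# Example key handling inside a webcam loop (assumed, not from predict.py):
# key = cv2.waitKey(1) & 0xFF
# if key in map(ord, "012345"):
#     mode = int(chr(key))
```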

Predictions using Unet:

[Image grid: original image, predicted mask, and overlay for three Unet samples]

Predictions using DeepLabV3p:

[Image grid: original image, predicted mask, and overlay for three DeepLabV3p samples]

[Webcam predictions using Unet and DeepLabV3p, side by side]

Project Structure

  1. config.ini is the configuration file used to set parameters such as model_selection, prediction_type, input_file_path, BG_mode, and save_path (an example is shown after this list).
  2. predict.py contains the prediction code.
  3. utils.py contains all the helper functions for changing the background.
  4. Underwater.mp4 and bg.jpg are the default background video and image.
  5. The train folder contains the training Jupyter notebooks for Unet and DeepLabV3p.
  6. The Models folder contains the .py model definitions for Unet and DeepLabV3p along with their weights (.h5 files).
  7. The predictions folder contains predictions of both models on random online images as well as live webcam video.
  8. requirement.txt lists all the required dependencies.
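
For illustration, a config.ini could look like the following; only the key names come from the project description above, while the [DEFAULT] section name and all values are assumptions:

```ini
; Illustrative example: section name and values are assumptions,
; only the key names are taken from the project description above.
[DEFAULT]
model_selection = Unet
prediction_type = webcam
input_file_path = bg.jpg
BG_mode = 2
save_path = predictions/output.png
```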

To run the project, follow the steps below:

  1. Ensure that you are in the project home directory.
  2. Create an anaconda environment.
  3. Activate the environment.
  4. pip install -r requirement.txt
  5. Set the parameters in the config.ini file (these are read at startup, as sketched below).
  6. python init.py
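
Internally, these parameters would typically be read with Python's configparser; the following is a minimal sketch under that assumption, not the repository's actual init.py:

```python
import configparser

# Read the parameters from config.ini (assumed section/key layout;
# this is a sketch, not the repository's init.py).
config = configparser.ConfigParser()
config.read("config.ini")

params = config["DEFAULT"]
model_selection = params.get("model_selection")
prediction_type = params.get("prediction_type")
input_file_path = params.get("input_file_path")
bg_mode = params.getint("BG_mode")
save_path = params.get("save_path")

print(model_selection, prediction_type, input_file_path, bg_mode, save_path)
```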

Please feel free to connect with any suggestions or questions!

Credits

  1. Credit for the dataset used for training goes to https://www.kaggle.com/tapakah68/supervisely-filtered-segmentation-person-dataset
  2. The DeepLabV3p model was adapted from the https://github.com/bonlime/keras-deeplab-v3-plus/ repository.
  3. Credit for the images and videos used for prediction and backgrounds goes to:

For better predictions, a higher-quality image dataset is needed for training, along with more training epochs and experiments with different backbones.