PortraitNet

Code for the paper "PortraitNet: Real-time portrait segmentation network for mobile device" @ CAD&Graphics 2019.


Introduction

We propose a real-time portrait segmentation model, called PortraitNet, that runs effectively and efficiently on mobile devices. PortraitNet is based on a lightweight U-shaped architecture with two auxiliary losses at the training stage, while no additional cost is incurred at the testing stage for portrait inference.

Portrait segmentation applications on mobile devices.
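
As a rough illustration of the training objective, the sketch below combines a main mask loss with two auxiliary terms. The boundary and consistency losses shown here, the weight alpha, and the temperature T are illustrative assumptions written against a recent PyTorch (not the 0.3 release pinned below); see the paper for the actual definitions.

import torch.nn as nn
import torch.nn.functional as F

# Sketch of a total training loss with two auxiliary terms. The exact forms
# below are illustrative assumptions, not the paper's definitions; both
# auxiliary branches are used only during training, so inference cost is
# unchanged.
mask_criterion = nn.CrossEntropyLoss()

def total_loss(mask_logits, mask_logits_aug, boundary_logits,
               mask_gt, boundary_gt, alpha=0.3, T=1.0):
    loss_mask = mask_criterion(mask_logits, mask_gt)      # main mask loss
    loss_boundary = F.binary_cross_entropy_with_logits(   # aux 1: boundary head
        boundary_logits, boundary_gt)
    loss_consistency = F.kl_div(                          # aux 2: consistency
        F.log_softmax(mask_logits_aug / T, dim=1),        # between augmented
        F.softmax(mask_logits / T, dim=1).detach(),       # and original views
        reduction='batchmean')
    return loss_mask + loss_boundary + alpha * loss_consistency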


Experimental setup

Requirements

  • python 2.7
  • PyTorch 0.3.0.post4
  • Jupyter Notebook
  • pip install easydict matplotlib tqdm opencv-python scipy pyyaml numpy

Download datasets

  • EG1800: Since several image URLs in the original EG1800 dataset are no longer valid, we use 1447 images for training and 289 images for validation.

  • Supervise-Portrait: a portrait segmentation dataset collected from the public human segmentation dataset Supervise.ly using the same data processing pipeline as EG1800.


Training

Network Architecture

Overview of PortraitNet.

Training Steps

  • Download the datasets (EG1800 or Supervise-Portrait). To train on your own dataset, modify data/datasets.py and data/datasets_portraitseg.py.
  • Prepare the training/testing file lists, such as data/select_data/eg1800_train.txt and data/select_data/eg1800_test.txt (a sketch for generating these lists follows the steps below).
  • Select and modify the parameters in the config folder (see the config-loading sketch below).
  • Start training with a single GPU:
cd myTrain
python2.7 train.py
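
Step 2 above asks for train/test file lists. A minimal sketch for generating them is below; the assumed format (one image filename per line) and the 80/20 split are guesses, so check data/datasets.py for what the loaders actually expect.

import os
import random

def write_split_files(image_dir, train_txt, test_txt, test_ratio=0.2, seed=0):
    # Assumed format: one image filename per line; adjust to whatever
    # data/datasets.py actually parses.
    names = sorted(f for f in os.listdir(image_dir)
                   if f.lower().endswith(('.jpg', '.png')))
    random.Random(seed).shuffle(names)  # deterministic shuffle
    n_test = int(len(names) * test_ratio)
    with open(test_txt, 'w') as f:
        f.write('\n'.join(names[:n_test]) + '\n')
    with open(train_txt, 'w') as f:
        f.write('\n'.join(names[n_test:]) + '\n')

write_split_files('EG1800/Images',
                  'data/select_data/eg1800_train.txt',
                  'data/select_data/eg1800_test.txt')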
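
For step 3, the requirements (pyyaml and easydict) suggest the files under config are YAML loaded into an EasyDict. A hedged sketch, where the file name model_portrait.yaml and the key input_height are placeholders rather than the repository's real names:

import yaml
from easydict import EasyDict

with open('config/model_portrait.yaml') as f:  # placeholder file name
    cfg = EasyDict(yaml.safe_load(f))
print(cfg.input_height)  # placeholder key; EasyDict allows attribute access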

Testing

In the myTest folder:

  • You can use EvalModel.ipynb to evaluate on the test datasets.
  • You can use VideoTest.ipynb to test on a single image or video (a hedged inference sketch follows this list).
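
As a rough guide to what VideoTest.ipynb does on a single image, here is a hedged inference sketch. The checkpoint path, the 224x224 input size, the normalization, and the two-channel output layout are all assumptions written against a recent PyTorch; the notebook itself is the authoritative reference.

import cv2
import numpy as np
import torch
import torch.nn.functional as F

net = torch.load('portraitnet.pth')  # hypothetical checkpoint path
net.eval()

img = cv2.imread('portrait.jpg')     # BGR, any resolution
h, w = img.shape[:2]
inp = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0  # assumed size
inp = torch.from_numpy(inp.transpose(2, 0, 1)).unsqueeze(0)   # HWC -> NCHW

with torch.no_grad():
    prob = F.softmax(net(inp), dim=1)[0, 1]    # assumed 2-channel mask logits
mask = cv2.resize(prob.numpy(), (w, h)) > 0.5  # back to original resolution
cv2.imwrite('mask.png', (mask * 255).astype(np.uint8))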

Visualization

Use TensorBoard to visualize the training process:

cd path_to_save_model
tensorboard --logdir='./log'

Download models

From Dropbox:

From Baidu Cloud: