arXiv Preprint | Supplementary Video
WebCam Video Demo [Offline][Colab] | Custom Video Demo [Offline] | Image Demo [WebGUI][Colab]
- [Mar 12 2021] Support TorchScript version of MODNet (from the community).
- [Feb 19 2021] Support ONNX version of MODNet (from the community).
- [Jan 28 2021] Release the code of MODNet training iteration.
- [Dec 25 2020] Merry Christmas! 🎄 Release Custom Video Matting Demo [Offline] for user videos.
- [Dec 10 2020] Release WebCam Video Matting Demo [Offline][Colab] and Image Matting Demo [Colab].
- [Nov 24 2020] Release arXiv Preprint and Supplementary Video.
We provide two real-time portrait video matting demos based on WebCam. When using the demos, you can move the WebCam around at will.
If you are on an Ubuntu system, we recommend trying the offline demo for higher fps. Otherwise, you can try the online Colab demo.
We also provide an offline demo that allows you to process custom videos.
We provide an online Colab demo for portrait image matting.
It allows you to upload portrait images and predict/visualize/download the alpha mattes.
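Once you have downloaded a predicted alpha matte, you can use it to composite the portrait onto a new background via the standard matting equation I = αF + (1 − α)B. A minimal NumPy sketch (the toy arrays below stand in for real images loaded from disk):

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend foreground onto background using an alpha matte.

    foreground, background: float arrays in [0, 1], shape (H, W, 3)
    alpha: float array in [0, 1], shape (H, W, 1), broadcast over channels
    """
    return alpha * foreground + (1.0 - alpha) * background

# Toy example: a 2x2 image with a uniform 50% matte
fg = np.ones((2, 2, 3))          # white foreground
bg = np.zeros((2, 2, 3))         # black background
alpha = np.full((2, 2, 1), 0.5)  # 50% opacity everywhere
out = composite(fg, bg, alpha)   # every pixel becomes 0.5
```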
Here we share some cool applications/extensions of MODNet built by the community.
- WebGUI for Image Matting
  You can try this WebGUI (hosted on Gradio) for portrait matting from your browser without code!
- Colab Demo of Bokeh (Blur Background)
  You can try this Colab demo (built by @eyaler) to blur the background based on MODNet!
- ONNX Version of MODNet
  You can convert the pre-trained MODNet to an ONNX model by using this code (provided by @manthan3C273). You can also try this Colab demo for MODNet image matting (ONNX version).
- TorchScript Version of MODNet
  You can convert the pre-trained MODNet to a TorchScript model by using this code (provided by @yarkable).
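For reference, tracing with `torch.jit.trace` is the usual way to obtain a TorchScript model from a PyTorch one. The tiny network below is only a stand-in for the pre-trained MODNet (substitute the real model loaded from its checkpoint); the input size and the output file name are placeholders:

```python
import torch
import torch.nn as nn

# Stand-in for the pre-trained MODNet -- replace with the real model
# loaded from its checkpoint before exporting.
class TinyMattingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.conv(x))  # alpha matte in [0, 1]

model = TinyMattingNet().eval()
dummy = torch.rand(1, 3, 512, 512)  # dummy RGB input for tracing

with torch.no_grad():
    scripted = torch.jit.trace(model, dummy)
scripted.save("modnet_traced.pt")  # serialized TorchScript module
```

The traced module can be reloaded with `torch.jit.load` and run without the original Python class definition.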
We provide the code of MODNet training iteration, including:
- Supervised Training: Train MODNet on a labeled matting dataset
- SOC Adaptation: Adapt a trained MODNet to an unlabeled dataset
The comments of each function include examples of how to call it.
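The overall shape of a supervised iteration is a standard forward/backward/step loop. The sketch below uses a placeholder network and a single L1 matte loss; the actual training code (see the function comments) supervises MODNet's branches with additional loss terms, so this is an illustration of the pattern, not the real iteration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder network standing in for MODNet.
net = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

def training_iter(image, gt_matte):
    """One supervised step: predict a matte, compare to ground truth."""
    optimizer.zero_grad()
    pred_matte = net(image)
    loss = F.l1_loss(pred_matte, gt_matte)  # simplified single-term loss
    loss.backward()
    optimizer.step()
    return loss.item()

image = torch.rand(4, 3, 64, 64)     # batch of RGB crops
gt_matte = torch.rand(4, 1, 64, 64)  # ground-truth alpha in [0, 1]
loss = training_iter(image, gt_matte)
```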
- Release the code of One-Frame Delay
- Release PPM-100 validation benchmark (Delayed, But On The Way...)
NOTE: PPM-100 is a validation set. Our training set will not be published.
This project (code, pre-trained models, demos, etc.) is released under the Creative Commons Attribution NonCommercial ShareAlike 4.0 license.
NOTE: The license will be changed to allow commercial use after this work is accepted by a conference or a journal.
- We thank City University of Hong Kong and SenseTime for their support of this project.
- We thank the Gradio team, @eyaler, @manthan3C273, and @yarkable for their contributions to this repository or their cool applications based on MODNet.
If this work helps your research, please consider citing:
@article{MODNet,
  author  = {Zhanghan Ke and Kaican Li and Yurou Zhou and Qiuhua Wu and Xiangyu Mao and Qiong Yan and Rynson W.H. Lau},
  title   = {Is a Green Screen Really Necessary for Real-Time Portrait Matting?},
  journal = {arXiv},
  volume  = {abs/2011.11961},
  year    = {2020},
}
This project is currently maintained by Zhanghan Ke (@ZHKKKe).
If you have any questions, please feel free to contact kezhanghan@outlook.com.