arXiv Preprint | Supplementary Video
This is the official repository for our paper Is a Green Screen Really Necessary for Real-Time Portrait Matting?
MODNet is a trimap-free model for portrait matting in real time (on a single GPU).
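As a quick illustration of what a portrait matting model is for, here is a minimal sketch of the standard alpha-compositing step that a predicted matte enables (replacing the background without a green screen). The `composite` function and the toy arrays are illustrative assumptions, not part of the MODNet code; the blending equation itself, I = alpha * F + (1 - alpha) * B, is the standard matting formulation.

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend a foreground onto a new background using an alpha matte.

    foreground, background: float arrays of shape (H, W, 3) in [0, 1]
    alpha: float array of shape (H, W) in [0, 1], e.g. a matte predicted
           by a matting model such as MODNet (hypothetical usage)
    """
    alpha = alpha[..., None]  # add a channel axis so it broadcasts over RGB
    return alpha * foreground + (1.0 - alpha) * background

# Toy example: fully opaque, half-transparent, and fully transparent pixels.
fg = np.ones((2, 2, 3)) * 0.8            # "portrait" pixels
bg = np.zeros((2, 2, 3))                 # new background
matte = np.array([[1.0, 0.5],
                  [0.0, 1.0]])
out = composite(fg, bg, matte)
```

With a real model, `matte` would be the network's per-pixel alpha prediction for the input portrait; everything downstream is just this blend.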
Our amazing demo, code, pre-trained model, and validation benchmark are coming soon!
I have received some requests for access to our code. I am sorry that we need some time to get everything ready, since this repository is currently maintained by Zhanghan Ke alone. Our plans for the next few months are:
- We will publish an online image/video matting demo along with the pre-trained model within the next two weeks (approximately Dec. 7, 2020 to Dec. 18, 2020).
- We then plan to release the code for supervised training and unsupervised SOC in Jan. 2021.
- We finally plan to open-source the PPM-100 validation benchmark in Feb. 2021.
Thank you for your continued interest in this project.
- [Nov 24 2020] Release arXiv preprint and supplementary video.
If this work helps your research, please consider citing:
@article{MODNet,
  author  = {Zhanghan Ke and Kaican Li and Yurou Zhou and Qiuhua Wu and Xiangyu Mao and Qiong Yan and Rynson W.H. Lau},
  title   = {Is a Green Screen Really Necessary for Real-Time Portrait Matting?},
  journal = {arXiv},
  volume  = {abs/2011.11961},
  year    = {2020},
}