yucornetto/MGMatting

How to get the rough mask?

Closed this issue · 3 comments

Hello! Thank you for providing the training code. I have successfully trained the model, and I also emailed you some test images. Now I want to ask: how do you obtain the rough mask? Which saliency detection model do you use? In my opinion, the rough mask affects the prediction quality. Looking forward to your reply! Thank you!

Hi, in our paper, we used [1] to generate the segmentation masks, which are also included in our real-world portrait dataset. We have also tried DeepLabV3+ to predict the base masks, which also works well in most cases.
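For reference, here is a minimal sketch of producing a rough person mask with an off-the-shelf segmentation model. Note that torchvision ships DeepLabV3 (not V3+), and the weights, class index, and file names below are illustrative assumptions, not the exact setup from the paper:

```python
# Sketch: rough person mask from a pretrained DeepLabV3 model.
# Assumptions: torchvision's DeepLabV3 (not V3+), Pascal VOC classes,
# and an input image "portrait.jpg" -- all illustrative, not the paper's setup.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet101
from PIL import Image

model = deeplabv3_resnet101(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("portrait.jpg").convert("RGB")
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"][0]  # (21, H, W) logits

# Class 15 is "person" in the Pascal VOC label set used by these weights.
mask = (out.argmax(0) == 15).float()  # binary rough mask, (H, W)
Image.fromarray((mask.numpy() * 255).astype("uint8")).save("rough_mask.png")
```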

Please note that the current code does not include real-world augmentation, which is very important to achieve good performance on real-world images, as pointed out in the paper. We will update the code and release pretrained models in the next few days. Please stay tuned :)

[1] He Zhang, Jianming Zhang, Federico Perazzi, Zhe Lin, and Vishal M. Patel. Deep image compositing. In WACV, 2021.

Thank you for your reply; I still have a small question. I also tried to reproduce paper [1], Deep Image Compositing, but did not get good results, and I haven't found any open-source code from the original authors. Did you successfully reproduce that paper? Or could you give me some guidance?
Another question: is the foreground (fg) in your training data the processed data generated by the FBA paper? And is the image input to the network composited from fg, bg, and alpha?

Hi, the masks in the dataset are generated by an internal portrait segmentation algorithm; that paper is just one of the references.

For the foreground fg, we did not preprocess it with the FBA method. Preprocessing might lead to better results.
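The reply does not spell out the compositing step, but the standard formulation in matting datasets is I = alpha * F + (1 - alpha) * B. A minimal sketch, assuming fg/bg/alpha image files on disk (the file names and normalization below are illustrative, not code from this repo):

```python
# Sketch: standard alpha compositing to synthesize a training image
# from fg, bg, and alpha. File names are illustrative assumptions.
import cv2
import numpy as np

fg = cv2.imread("fg.png").astype(np.float32)           # foreground, (H, W, 3)
bg = cv2.imread("bg.png").astype(np.float32)           # background, (H, W, 3)
alpha = cv2.imread("alpha.png", cv2.IMREAD_GRAYSCALE)  # matte, (H, W)
alpha = alpha.astype(np.float32)[:, :, None] / 255.0   # -> (H, W, 1) in [0, 1]

bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))        # match fg resolution

# I = alpha * F + (1 - alpha) * B
composite = alpha * fg + (1.0 - alpha) * bg
cv2.imwrite("composite.png", composite.astype(np.uint8))
```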