yucornetto/MGMatting

foreground color supervision

uestc-buddy opened this issue · 4 comments

Thank you for your wonderful work. I'm confused about part of Section 3.2 of the paper, which mentions "to generate synthetic training data by blending a foreground image and a background image using a randomly selected alpha matte". Could you explain the specific process of foreground color data synthesis and how the foreground supervision is trained?

Thanks for your interest in our work. Composition-1k is a synthetic dataset that provides both foreground images and alpha mattes for training; these are composited onto different backgrounds to generate training samples.

Our RAB chooses a random foreground image for each alpha matte when synthesizing the training samples. This randomly picked foreground image can then be used as the ground truth for foreground supervision.
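The compositing described above is the standard matting equation I = αF + (1 − α)B. Below is a minimal sketch of this random-pairing synthesis, not the paper's actual RAB implementation; the array shapes and the small in-memory pools of foregrounds, alphas, and backgrounds are illustrative assumptions (in practice these would come from Composition-1k and a separate background dataset):

```python
import numpy as np

def composite(fg, bg, alpha):
    """Blend a foreground onto a background with an alpha matte.

    fg, bg: float arrays of shape (H, W, 3) with values in [0, 1]
    alpha:  float array of shape (H, W, 1) with values in [0, 1]
    """
    return alpha * fg + (1.0 - alpha) * bg

rng = np.random.default_rng(0)

# Hypothetical pools standing in for a foreground/alpha dataset
# and a separate background image set.
foregrounds = [rng.random((64, 64, 3)) for _ in range(5)]
alphas = [rng.random((64, 64, 1)) for _ in range(5)]
backgrounds = [rng.random((64, 64, 3)) for _ in range(5)]

# Random pairing: the foreground is picked independently of the
# alpha matte, so they are not pixel-aligned. The picked foreground
# then serves as the ground truth for foreground supervision.
alpha = alphas[rng.integers(len(alphas))]
fg_gt = foregrounds[rng.integers(len(foregrounds))]
bg = backgrounds[rng.integers(len(backgrounds))]

image = composite(fg_gt, bg, alpha)  # synthesized network input
```

Wherever alpha is 1 the composite equals the chosen foreground, and wherever it is 0 it equals the background, which is what makes the randomly picked foreground a valid supervision target.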

Thank you for your reply. If I understand correctly, the foreground images and alpha mattes are not paired (in terms of pixel position) but are combined at random. A background image is then randomly selected to produce the final composite image, and the foreground is predicted from that composite. Is that right?

Finally, for further research, when will your code and training dataset be available?

Yes, the foreground images and alpha mattes are not paired.

We plan to release the code/model/dataset upon acceptance of the paper. Before then, if you want to compare against the performance of MG Matting, feel free to send us an e-mail with your test samples.

Thank you for your reply. I am currently focusing on matting against arbitrary complex backgrounds, and I will send you some test samples when I need to compare against your models. Thanks again.

And good luck.