yucornetto/MGMatting

Poor quality on our own dataset

herbiezhao opened this issue · 4 comments

I'm not sure whether the problem is on my end. The results on the RealWorldPortrait-636 dataset are not good. I set image-dir: image and mask-dir: segmask. Are there other parameters that can be adjusted?

Our segmentation model is very good at separating foreground from background, but it does not handle edges well. After applying this matting model, the background discrimination actually gets worse. Does this mean that the mask-guided method also depends heavily on the training dataset? Matting datasets are difficult to obtain.

Hi, did you exclude the transparent objects from the DIM training set when you trained the model for the real-world benchmark? As mentioned in our README:

Please note that we exclude the transparent objects from DIM training set for a better generalization to real-world portrait cases. You can refer to /utils/copy_data.py for details about preparing the training set. Afterwards, you can start training using the following command:
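The actual filtering lives in utils/copy_data.py in the repo; as a rough illustration of the idea, the sketch below copies only solid-object samples and skips files whose names suggest transparent objects. The keyword list and directory layout here are hypothetical, not the repo's real ones.

```python
import os
import shutil

# Hypothetical keywords for transparent/translucent foregrounds to skip;
# the real exclusion list is defined in utils/copy_data.py.
TRANSPARENT_KEYWORDS = ["glass", "bottle", "water", "plastic-bag", "net"]

def is_transparent(filename):
    """Return True if the filename suggests a transparent object."""
    name = filename.lower()
    return any(key in name for key in TRANSPARENT_KEYWORDS)

def copy_solid_objects(src_dir, dst_dir):
    """Copy only solid-object training images, skipping transparent ones.

    Returns the sorted list of filenames that were kept.
    """
    os.makedirs(dst_dir, exist_ok=True)
    kept = []
    for fname in sorted(os.listdir(src_dir)):
        if is_transparent(fname):
            continue
        shutil.copy(os.path.join(src_dir, fname),
                    os.path.join(dst_dir, fname))
        kept.append(fname)
    return kept
```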

My personal experience is that, when targeting solid objects such as portraits, including those transparent objects can hurt the semantic learning of the model and result in noise in background areas. Simulating real-world noise is also necessary.
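One common way to simulate that noise (a sketch of the general idea, not MGMatting's actual augmentation code) is to degrade the ground-truth alpha into a coarse guidance mask by random dilation or erosion, mimicking a segmentation model that bleeds into the background or misses hair and edge regions. The function names and parameters below are illustrative assumptions.

```python
import numpy as np

def dilate(mask, k):
    """Binary dilation with a (2k+1)x(2k+1) square structuring element.

    Uses np.roll, so edges wrap around; fine for a sketch, but real
    pipelines would pad instead.
    """
    out = mask.copy()
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def erode(mask, k):
    """Binary erosion, expressed as the complement of dilating the complement."""
    return ~dilate(~mask, k)

def simulate_coarse_mask(alpha, rng, max_radius=3):
    """Degrade a ground-truth alpha matte into a coarse guidance mask.

    Randomly grows or shrinks the binarized matte to mimic the imperfect
    boundaries of a real segmentation model.
    """
    mask = alpha > 0.5
    k = rng.integers(1, max_radius + 1)
    if rng.random() < 0.5:
        return dilate(mask, k)   # mask bleeds into the background
    return erode(mask, k)        # mask misses hair/edge regions
```

During training, the model then sees (image, coarse mask) pairs rather than perfect masks, which better matches what it will receive at test time.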

I haven't trained the model yet; I used the provided pre-trained model. Do I need to retrain it to achieve better results?

That's interesting. Would you mind sharing one or two samples (both images and masks) with me at yucornetto@gmail.com and I can try to see what is wrong?

I just noticed that you seem to have mentioned both RealWorldPortrait-636 and your own dataset, right? Can you reproduce the results on RealWorldPortrait-636? What command did you use?

@yucornetto Have you solved your problem?