j96w/DenseFusion

Train SegNet with own dataset to get masks like in preprocessed linemod

Closed this issue · 1 comment

Hey there and thank you again for your work!
I have a question concerning the training of SegNet. How do I train it with my own dataset to get an output like in the preprocessed LineMOD dataset (only one object per picture)? Otherwise SegNet will just give me segmentation masks for all of my objects in the picture, right? Do I have to train SegNet separately for each product type, or how does it work? I hope this is not a too stupid question ^^
Thank you in advance!

j96w commented

Hi, you can regard the output channel of the target object as a binary mask to get the same masks as the segmentation prediction results in the preprocessed LineMOD dataset.
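
For what it's worth, a minimal sketch of that idea, assuming a trained multi-class SegNet called `segnet`, an input tensor `rgb`, and a target class index `target_obj_id` (these names are illustrative assumptions, not identifiers from the repo):

```python
import torch

# Assumption: `segnet` is a trained multi-class segmentation network and
# `rgb` is a (1, 3, H, W) image tensor already on the right device.
segnet.eval()
with torch.no_grad():
    logits = segnet(rgb)                      # (1, n_classes, H, W) per-pixel scores

# Take the per-pixel argmax over classes, then keep only the pixels assigned
# to the target object, which yields a single-object binary mask similar to
# the masks shipped with the preprocessed LineMOD data.
pred_labels = torch.argmax(logits, dim=1)      # (1, H, W) class index per pixel
binary_mask = (pred_labels == target_obj_id)   # boolean mask for the target object

# Convert to a 0/255 uint8 array so it can be saved like a mask PNG.
mask_img = binary_mask[0].cpu().numpy().astype("uint8") * 255
```

So you can train a single SegNet over all objects and still extract a per-object binary mask at inference time by selecting the channel (class) of the object you care about.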