cgtuebingen/Neural-PIL

How are the segmentation masks obtained?

Opened this issue · 4 comments

Hello,
thank you for the great work!!
I want to try Neural-PIL on my own dataset. Could you please let me know how the segmentation masks were generated?

Thank you for your time.

I am not associated with the authors, but I recently trained the model on a custom dataset. You can try U2Net (https://github.com/xuebinqin/U-2-Net); it has worked quite well for me.
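For reference, U2Net predicts a grayscale saliency map per image rather than a binary mask directly, so one extra step is thresholding that map. Below is a minimal sketch of that post-processing step; the file paths and the 0.5 threshold are illustrative assumptions, not part of Neural-PIL or U2Net itself:

```python
# Sketch: convert a U2Net-style grayscale saliency prediction into a
# binary foreground mask. Paths and threshold are hypothetical examples.
import numpy as np
from PIL import Image

def saliency_to_mask(saliency_png: str, mask_png: str, threshold: float = 0.5) -> None:
    """Threshold a grayscale saliency map (0-255 PNG) into a 0/255 binary mask."""
    # Load as single-channel float in [0, 1]
    sal = np.asarray(Image.open(saliency_png).convert("L"), dtype=np.float32) / 255.0
    # Pixels above the threshold become foreground (255), the rest background (0)
    mask = (sal > threshold).astype(np.uint8) * 255
    Image.fromarray(mask).save(mask_png)

# Example usage (paths are placeholders for your own data):
# saliency_to_mask("u2net_results/img_0001.png", "masks/img_0001.png")
```

Depending on your scene you may also want to clean the mask up (e.g. morphological opening/closing) before feeding it to the training pipeline.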

Have you figured out how to get the decompositions by running the code? I was only able to train.

Hi @ArenaGrenade, Thank you for the suggestion, U2Net looks great!
-> "Have you figured out how to get the decompositions by running the code? I was only able to train."
I just finished training the network on the example data (Gnome) and am about to check how to get the decompositions. I will come back to this discussion once I figure it out.

Any update on decompositions? I'm also working on the same problem. Is there code for decomposition of the original NeRD model?

-> "Have you figured out how to get the decompositions by running the code? I was only able to train."

It seems that the outputs include decompositions of the images in the training dataset, plus decompositions of a few automatically selected test views. But I don't know how to generate decompositions for a new image with the trained model. Did you find a solution in the end?