fidler-lab/polyrnn-pp-pytorch

Lovasz softmax instead of RL finetuning?

shivamsaboo17 opened this issue · 3 comments

Lovasz softmax, as described in this paper (https://arxiv.org/pdf/1705.08790.pdf), is a differentiable loss that can be used to optimize intersection over union directly. Did you try it instead of RL fine-tuning? What can we expect if we use it in place of the RL fine-tuning?
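For reference, here is a minimal sketch of the binary (hinge) variant of that loss, following the formulation in the paper; this is illustrative only, not code from this repository:

```python
import torch

def lovasz_grad(gt_sorted):
    # Gradient of the Lovasz extension of the Jaccard loss w.r.t.
    # sorted errors (see arXiv:1705.08790).
    p = len(gt_sorted)
    gts = gt_sorted.float().sum()
    intersection = gts - gt_sorted.float().cumsum(0)
    union = gts + (1.0 - gt_sorted.float()).cumsum(0)
    jaccard = 1.0 - intersection / union
    if p > 1:
        jaccard[1:p] = jaccard[1:p] - jaccard[0:-1]
    return jaccard

def lovasz_hinge(logits, labels):
    # logits: [P] raw per-pixel scores, labels: [P] binary {0, 1} ground truth.
    signs = 2.0 * labels.float() - 1.0
    errors = 1.0 - logits * signs                       # per-pixel hinge errors
    errors_sorted, perm = torch.sort(errors, descending=True)
    grad = lovasz_grad(labels[perm])
    return torch.dot(torch.relu(errors_sorted), grad)   # Lovasz extension of IoU

# usage: loss = lovasz_hinge(pred_logits.view(-1), gt_mask.view(-1))
```

The multi-class Lovasz softmax applies the same sorted-error construction per class to softmax probabilities.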

As far as I know, the problem of differentiably rendering the polygon into a mask still remains in this case: the Lovasz loss is defined on pixels, while the model here predicts polygon vertices, and standard rasterization is not differentiable.

In our Curve-GCN paper, we differentiably rendered the polygon into a mask and then used a pixel-space loss to train the model.
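As a toy illustration of that training pattern (the renderer below is a deliberately crude stand-in, not Curve-GCN's actual one; it soft-renders only the polygon's bounding box, purely to show gradients flowing from a pixel-space loss back to the vertices):

```python
import torch
import torch.nn.functional as F

def toy_soft_render(verts, H=64, W=64, sharpness=40.0):
    # Toy differentiable "renderer": soft mask of the polygon's
    # axis-aligned bounding box. NOT Curve-GCN's renderer; it only
    # demonstrates the render-then-pixel-loss training pattern.
    ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                            torch.linspace(0, 1, W), indexing="ij")
    x0, y0 = verts[:, 0].min(), verts[:, 1].min()
    x1, y1 = verts[:, 0].max(), verts[:, 1].max()
    soft_x = torch.sigmoid(sharpness * (xs - x0)) * torch.sigmoid(sharpness * (x1 - xs))
    soft_y = torch.sigmoid(sharpness * (ys - y0)) * torch.sigmoid(sharpness * (y1 - ys))
    return soft_x * soft_y                          # [H, W] values in (0, 1)

verts = torch.rand(8, 2, requires_grad=True)        # predicted polygon vertices
pred_mask = toy_soft_render(verts)
gt_mask = (torch.rand(64, 64) > 0.5).float()        # placeholder ground truth
loss = F.binary_cross_entropy(pred_mask, gt_mask)   # loss lives in pixel space
loss.backward()
print(verts.grad.abs().sum())                       # nonzero: vertices get gradients
```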

Thanks for replying. I was following the Curve-GCN paper and had a question about how you keep the rendering process differentiable. More precisely, I was not able to relate the code (the TriRender2d class) to the paper. Could you briefly describe what is actually happening in the code (how the triangles are rendered in PyTorch so that autograd can be used), or point me to a resource that would help me understand the process better? Thanks!
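For anyone landing on this thread later: below is a generic sketch, not the repository's TriRender2d, of one common way to rasterize a triangle softly in PyTorch so that autograd can propagate from the mask back to the vertices. The sigmoid relaxation, the sharpness constant, and the [0, 1] coordinate convention are assumptions for illustration:

```python
import torch

def soft_triangle_mask(verts, H=64, W=64, sharpness=50.0):
    """Soft-rasterize one triangle so gradients flow to its vertices.

    verts: [3, 2] tensor of (x, y) coordinates in [0, 1], CCW winding.
    Returns an [H, W] mask with values in (0, 1).
    """
    ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                            torch.linspace(0, 1, W), indexing="ij")
    q = torch.stack([xs, ys], dim=-1)                # [H, W, 2] pixel centers
    a, b, c = verts[0], verts[1], verts[2]

    def edge(p0, p1, pt):
        # Twice the signed area of (p0, p1, pt): positive iff pt lies
        # to the left of the directed edge p0 -> p1.
        return ((p1[0] - p0[0]) * (pt[..., 1] - p0[1])
                - (p1[1] - p0[1]) * (pt[..., 0] - p0[0]))

    area = edge(a, b, c)                             # twice the triangle area
    w0 = edge(b, c, q) / area                        # barycentric coordinates
    w1 = edge(c, a, q) / area
    w2 = edge(a, b, q) / area
    # A hard inside test would be (w0 >= 0) & (w1 >= 0) & (w2 >= 0), which
    # has no useful gradient; relaxing it with sigmoids keeps the mask
    # differentiable w.r.t. the vertex positions.
    return (torch.sigmoid(sharpness * w0) *
            torch.sigmoid(sharpness * w1) *
            torch.sigmoid(sharpness * w2))

tri = torch.tensor([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9]], requires_grad=True)
mask = soft_triangle_mask(tri)
mask.sum().backward()                                # d(soft area) / d(vertices)
print(tri.grad)                                      # nonzero gradients
```

A polygon can then be composed from such triangles (e.g. a fan from one vertex, with signed contributions so concave regions cancel); for the actual TriRender2d details, see the issue linked below.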

Closing here since the discussion is being continued in fidler-lab/curve-gcn#6.