nv-tlabs/nglod

Question about the loss for training the network

Closed this issue · 2 comments

Thanks for releasing your well-organized code.

When I read and ran the code for training a network according to your instructions, it seems that only the SDF values predicted by the deepest level are used for calculating the loss.

The screenshot below shows the relevant code from 'sdf-net/lib/models/OctreeSDF.py':

[screenshot of the loss computation in sdf-net/lib/models/OctreeSDF.py]
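Since the screenshot cannot be reproduced here, the pattern I am referring to looks roughly like this (an illustrative paraphrase, not the exact code from the repository):

```python
import torch

def deepest_level_l2(preds, gts):
    # Illustrative paraphrase of what I saw: only the prediction of the
    # deepest level (preds[-1]) contributes to the loss, while the
    # predictions of the shallower levels are ignored.
    return torch.mean((preds[-1] - gts) ** 2)
```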

In the paper, however, I noticed that Formula (4) takes the sum of the losses computed for the predictions from every level.
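For reference, my reading of Formula (4) is roughly the following (a sketch from memory, so the notation may differ slightly from the paper):

```latex
% Sketch of the multi-level loss as I understand Formula (4);
% the notation is my paraphrase, not copied verbatim from the paper.
J(\theta) = \mathbb{E}_{x}
  \sum_{L=1}^{L_{\max}}
  \left\| \hat{f}_{\theta}(x; L) - f(x) \right\|^{2}
```

where \hat{f}_{\theta}(x; L) is the SDF predicted at level of detail L and f(x) is the ground-truth SDF.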

Is there anything that I misunderstood?

Thanks for your interest in our code!

The _l2_loss employed here is just for logging purposes and not for training. To use the L2 loss in training, you'll have to pass it in as an argument: --loss l2_loss.
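For anyone who finds this later, here is a minimal sketch of what a per-level L2 loss summed over all LODs could look like, assuming the network returns a list with one SDF prediction per level (the names below are illustrative and not the actual nglod API):

```python
import torch

def multi_level_l2_loss(preds, gts):
    """Sum of per-level L2 losses, in the spirit of Eq. (4).

    preds: list of [N, 1] SDF predictions, one tensor per LOD
    gts:   [N, 1] tensor of ground-truth SDF values
    Illustrative sketch only; not the actual nglod implementation.
    """
    loss = 0.0
    for pred in preds:
        # Every level contributes its own squared error, instead of
        # supervising only the deepest level.
        loss = loss + torch.mean((pred - gts) ** 2)
    return loss

if __name__ == "__main__":
    # Dummy check with random data; in real training, preds would be the
    # network's per-level outputs and gts the sampled SDF values.
    preds = [torch.randn(8, 1, requires_grad=True) for _ in range(5)]
    gts = torch.randn(8, 1)
    loss = multi_level_l2_loss(preds, gts)
    loss.backward()
    print(loss.item())
```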

The API / logic around here is a bit confusing (and suboptimal in terms of training performance), though, so I'm planning to push some code soon to make this a bit more streamlined.

Thanks a lot for your patience. I had just figured this out myself, and then saw that you had already replied.