garvita-tiwari/neuralgif

About training loss in paper and implementation

SuwoongHeo opened this issue · 0 comments

Hi,

In the paper, you said that the pretraining is conducted with supervision on the blend weights only (wgt in the code below). However, there are several losses besides the skinning weight loss, named diff_can and spr_wgt.

loss_dict['wgt'] = weight_loss
loss_dict['diff_can'] = (diff_can +diff_can_bp)/2.0
loss_dict['spr_wgt'] = spr_wgt
#total_loss = weight_loss + self.loss_weight['diff_can']*loss_dict['diff_can']
total_loss = weight_loss + self.loss_weight['diff_can']*loss_dict['diff_can'] + self.loss_weight['diff_can']**loss_dict['spr_wgt']

The definition of diff_can is obvious, but spr_wgt, defined below, is hard for me to understand. Could you explain what this loss means, or point me to a reference for it?

spr_wgt = (weight_pred.abs() + 1e-12).pow(0.8).sum(1).mean() + \
          (weight_smpl.abs() + 1e-12).pow(0.8).sum(1).mean()
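My current guess is that this is a sparsity-inducing penalty: raising |w| to a power p < 1 penalizes weight vectors that spread mass over many bones more than vectors of the same total mass concentrated on a few bones. A minimal numpy sketch of that term (variable names are mine, not from the repo):

```python
import numpy as np

def sparsity_loss(w, p=0.8, eps=1e-12):
    # Mirrors (w.abs() + 1e-12).pow(0.8).sum(1).mean():
    # sum_j (|w_ij| + eps)^p over bones j, averaged over points i.
    # For p < 1 the penalty is concave in |w_j|, so spreading the same
    # mass over more entries increases it.
    return ((np.abs(w) + eps) ** p).sum(axis=1).mean()

# Two skinning-weight rows with the same L1 mass (each sums to 1):
sparse = np.array([[1.0, 0.0, 0.0, 0.0]])      # one bone dominates
dense  = np.array([[0.25, 0.25, 0.25, 0.25]])  # mass spread over 4 bones

print(sparsity_loss(sparse))  # ~1.0
print(sparsity_loss(dense))   # 4 * 0.25**0.8 ~ 1.32, i.e. higher penalty
```

So, if I read it right, minimizing spr_wgt pushes each point's skinning weights toward being supported by only a few bones. Is that the intended effect?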

Thanks in advance :)