fredzzhang/upt

Multiple loss training code

Closed this issue · 2 comments

Hi @fredzzhang,

I want to try training with multiple losses. I found the relevant code and added a loss; it runs without reporting any errors.

However, I would like to train with multiple losses properly and set a hyperparameter (weight) for each loss. How do I do that?

if self.training:
    interaction_loss = self.compute_interaction_loss(
        boxes, bh, bo, logits, prior, targets, pairwise_tokens_x_collated)
    interaction_x_loss = self.compute_interaction_x_loss(
        boxes, bh, bo, logits, prior, targets, pairwise_tokens_x_collated)
    loss_dict = dict(
        interaction_loss=interaction_loss,
        interaction_x_loss=interaction_x_loss
    )
    return loss_dict

def _on_each_iteration(self):

    loss_dict = self._state.net(
        *self._state.inputs, targets=self._state.targets)
    if loss_dict['interaction_loss'].isnan():
        raise ValueError(f"The HOI loss is NaN for rank {self._rank}")

    self._state.loss = sum(loss for loss in loss_dict.values())
    self._state.optimizer.zero_grad(set_to_none=True)
    self._state.loss.backward()
    if self.max_norm > 0:
        torch.nn.utils.clip_grad_norm_(self._state.net.parameters(), self.max_norm)
    self._state.optimizer.step()
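Since `_on_each_iteration` simply sums everything in the loss dict, I assume the weights could also be applied at the summation step, and the NaN check extended to every loss. Something like this sketch (the `loss_weights` dict and its values are just my guess, not existing code):

loss_dict = self._state.net(
    *self._state.inputs, targets=self._state.targets)

# Hypothetical per-loss weights; keys must match the keys returned by the model.
loss_weights = {"interaction_loss": 1.0, "interaction_x_loss": 0.5}

# Check every loss for NaN, not only the first one.
for name, value in loss_dict.items():
    if value.isnan():
        raise ValueError(f"{name} is NaN for rank {self._rank}")

# Weighted sum; any loss without an explicit weight defaults to 1.
self._state.loss = sum(
    loss_weights.get(name, 1.0) * value for name, value in loss_dict.items())

Is that the right place to do it, or should the weighting happen inside the model?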

yaoyaosanqi.

Hi @yaoyaosanqi,

It's fairly trivial. How about this:

    loss_dict = dict(
        interaction_loss=interaction_loss,
        interaction_x_loss=alpha * interaction_x_loss
    )

with alpha being the weight on the additional loss.
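
If you want alpha to be a configurable hyperparameter, one option is to expose it on the command line and pass it through to the model. This is just a sketch; the `--alpha-x` argument and the `alpha_x` name are hypothetical and not part of the existing code:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--alpha-x', default=0.5, type=float,
                    help='Weight on the additional interaction loss')
args = parser.parse_args()

# Pass args.alpha_x to the model, store it as self.alpha_x in __init__,
# then use it when building the loss dict in forward():
#     interaction_x_loss=self.alpha_x * interaction_x_loss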

Fred.

Thanks!