saist1993/DPNLP

How the encoder, target classifier and adv in the code reverse the gradient?

xyz321123 opened this issue · 1 comment

In the src/training_loops/simple_loop.py file, I understand that the first red box is the adversarial loss and the second red box is the total loss for the encoder and target classifier, but why is only the total loss backpropagated in the third red box? What about the adversarial loss?
[screenshot: highlighted lines in src/training_loops/simple_loop.py showing the adversarial loss, the total loss, and the backward call]

Hey! At line 74 we add the main loss and the aux loss (the adversarial loss). Thus, when backward is called at line 85, the gradients are propagated to both the adversarial branch and the classifier branch.
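
For illustration, here is a minimal sketch of this idea (not the exact code from simple_loop.py; the modules and variable names such as `encoder`, `task_head`, and `adv_head` are placeholders): summing the two losses builds a single computation graph, so one `backward()` call populates gradients in the encoder, the classifier, and the adversary at once.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for the encoder, task classifier, and adversary
# (hypothetical shapes; the real modules live in the repo's model file).
encoder = torch.nn.Linear(10, 8)
task_head = torch.nn.Linear(8, 2)
adv_head = torch.nn.Linear(8, 2)

x = torch.randn(4, 10)
y_task = torch.randint(0, 2, (4,))
y_private = torch.randint(0, 2, (4,))

z = encoder(x)
main_loss = F.cross_entropy(task_head(z), y_task)
aux_loss = F.cross_entropy(adv_head(z), y_private)

# Adding the losses (as at line 74) merges them into one graph...
total_loss = main_loss + aux_loss

# ...so a single backward call (as at line 85) fills .grad on the
# encoder, the task classifier, and the adversary all at once.
total_loss.backward()

print(encoder.weight.grad is not None)   # True
print(adv_head.weight.grad is not None)  # True
```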

As for how the gradient gets reversed: in the model file there is a gradient reversal function that reverses the gradient.
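
A gradient reversal layer is typically implemented as a custom `torch.autograd.Function` that acts as the identity in the forward pass and multiplies the incoming gradient by a negative scale in the backward pass. A minimal sketch of that pattern (the exact implementation in the repo's model file may differ, e.g. in how the scale `lambda_` is passed):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) the gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing from the adversary back into the encoder.
        return -ctx.lambda_ * grad_output, None

def grad_reverse(x, lambda_=1.0):
    return GradReverse.apply(x, lambda_)

# Usage: placed between the encoder output and the adversarial head, so the
# adversary minimises its own loss while the encoder receives the reversed
# gradient and is pushed to remove the private information.
# adv_logits = adv_head(grad_reverse(encoder_output, lambda_=1.0))
```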