Question about ReverseLayerF
songweige opened this issue · 7 comments
Greetings! Could you give me a quick explanation of why ReverseLayerF() makes the gradients of the parameters in the domain classifier negative? I'm confused because I thought it would instead affect the parameters before the domain classifier (in the feature layer). Could you correct me? Thanks in advance!
ReverseLayerF() actually doesn't affect the parameters in the domain classifier: in the forward pass it is the identity function, https://github.com/fungtion/DANN/blob/master/models/functions.py#L10
and in the backward pass it only negates the gradients flowing from the domain classifier back to the feature layer, i.e. it only affects the parameters before the domain classifier, as you thought.
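For reference, a minimal sketch of such a gradient reversal layer as a torch.autograd.Function (based on the pattern the linked file uses; names and the alpha scaling are assumptions here, not a verbatim copy of the repo):

```python
import torch
from torch.autograd import Function

class ReverseLayerF(Function):
    """Gradient reversal: identity forward, negated (scaled) gradient backward."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        # identity in the forward pass; view_as returns a distinct tensor object
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # negate and scale the gradient flowing back to the feature layer;
        # None is the "gradient" for the non-tensor alpha argument
        return grad_output.neg() * ctx.alpha, None

x = torch.ones(3, requires_grad=True)
y = ReverseLayerF.apply(x, 1.0)
y.sum().backward()
print(x.grad)  # each entry is -1 instead of +1
```

Note that only x.grad is reversed; the layer itself has no parameters, and the domain classifier sits after it, so the classifier's own gradients are untouched.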
Thx, it makes sense!
BTW, why did you return "x.view_as(x)"? Is it a convention or a double check?
It is necessary: if you return x itself, which is also the input of forward(), backward() will not be called.
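The point hinges on view_as returning a new tensor object that shares the input's storage, so forward() hands autograd an output distinct from its input. A quick check (plain PyTorch, outside any Function):

```python
import torch

x = torch.ones(3)
y = x.view_as(x)

# y is a view: a different Python/tensor object over the same memory
print(y is x)                         # False
print(y.data_ptr() == x.data_ptr())  # True
```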
I see. Thx!
Greetings! I have a question about ReverseLayerF.apply() in the model definition. There is no explicit definition of an apply() function, so what happens when ReverseLayerF.apply() executes?
Hi, do you understand how ReverseLayerF.apply() works?