BIT-DA/JADA

Seeking help


w9595 commented

Dear author,
Thank you for your excellent work, which has contributed to the development of domain adaptation. I have read your paper and code, and I have one question: I cannot figure out how the domain adversary and the class adversary mentioned in your paper, and the coordination of the two adversaries via gradient reversal, are reflected in the code. I would be grateful for your help with this. Thank you. My email is 2436934743@qq.com
Best wishes!

Hi @w9595

Thanks for your attention!

First of all, because one GRL sits between the domain discriminator and the feature generator, and the other GRL sits between the two classifiers and the feature generator, we only need to call backward once during training:

# one backward pass optimizes all three terms jointly:
# classification + domain adversarial - classifier inconsistency
total_loss = classifier_loss + loss_params['domain_off'] * transfer_loss \
             - loss_params['dis_off'] * inconsistency_loss
total_loss.backward()
optimizer.step()
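As a quick sanity check of that claim (a toy example, not repository code; it uses the `ReverseLayerF` defined in the snippets below), the GRL is an identity in the forward pass but hands a sign-flipped gradient to whatever precedes it, which is why a single backward suffices:

    import torch

    x = torch.ones(3, requires_grad=True)
    y = ReverseLayerF.apply(x, 1.0).sum()  # forward: identity, so y = 3.0
    y.backward()                           # backward: gradient multiplied by -alpha
    print(x.grad)                          # tensor([-1., -1., -1.]) instead of all ones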

Second, w.r.t. the implementation of the two GRLs, the following lines might be helpful!

  1. GRL

    from torch.autograd import Function

    class ReverseLayerF(Function):
        r"""Gradient Reverse Layer (Unsupervised Domain Adaptation by Backpropagation).

        During the forward pass, the GRL acts as an identity transform. During the
        backward pass, it takes the gradient from the subsequent layer, multiplies
        it by -alpha, and passes it to the preceding layer.

        Args:
            x (Tensor): the input tensor
            alpha (float): \alpha = \frac{2}{1 + \exp(-\gamma \cdot p)} - 1 (\gamma = 10)

        Returns:
            Tensor: the same tensor as x
        """

        @staticmethod
        def forward(ctx, x, alpha):
            ctx.alpha = alpha
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # flip the sign of the upstream gradient; alpha itself gets no gradient
            output = grad_output.neg() * ctx.alpha
            return output, None

  2. GRL applied between the domain discriminator and the feature generator

    x = ReverseLayerF.apply(x, alpha)    # features -> GRL -> domain discriminator

  3. GRL applied between the two classifiers and the feature generator (both applications are put together in the sketch after this list)

    xt = ReverseLayerF.apply(xt, alpha)  # target features -> GRL -> both classifiers
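To connect the pieces, here is a minimal sketch of how the two GRL applications could feed the loss terms in the training step above. It is illustrative rather than the repository's exact code: the names `G` (feature generator), `D` (domain discriminator), `F1`/`F2` (the two classifiers), and the concrete loss choices are assumptions.

    import numpy as np
    import torch
    import torch.nn.functional as F

    # assumed to exist: modules G, D, F1, F2; source batch (x_s, y_s); target
    # batch x_t; iteration counter i; max_iter; loss_params; optimizer

    # alpha schedule from the GRL docstring: ramps from 0 to 1 over training
    p = float(i) / max_iter                  # training progress in [0, 1]
    alpha = 2. / (1. + np.exp(-10 * p)) - 1  # \gamma = 10

    feat_s, feat_t = G(x_s), G(x_t)          # source / target features

    # supervised loss on labeled source samples
    classifier_loss = F.cross_entropy(F1(feat_s), y_s) \
                      + F.cross_entropy(F2(feat_s), y_s)

    # domain adversarial: GRL between generator and domain discriminator, so D
    # learns to tell the domains apart while G learns to confuse it
    # (D is assumed to output a probability, i.e. a sigmoid at its last layer)
    d_s = D(ReverseLayerF.apply(feat_s, alpha))
    d_t = D(ReverseLayerF.apply(feat_t, alpha))
    transfer_loss = F.binary_cross_entropy(d_s, torch.ones_like(d_s)) \
                    + F.binary_cross_entropy(d_t, torch.zeros_like(d_t))

    # class adversarial: GRL between generator and the two classifiers, so
    # (with the minus sign below) F1/F2 maximize their disagreement on target
    # samples while G minimizes it
    feat_t_rev = ReverseLayerF.apply(feat_t, alpha)
    p1 = F.softmax(F1(feat_t_rev), dim=1)
    p2 = F.softmax(F2(feat_t_rev), dim=1)
    inconsistency_loss = (p1 - p2).abs().mean()

    optimizer.zero_grad()
    total_loss = classifier_loss + loss_params['domain_off'] * transfer_loss \
                 - loss_params['dis_off'] * inconsistency_loss
    total_loss.backward()
    optimizer.step()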

I'm closing this issue. Please feel free to ping me if there are further questions.

w9595 commented

Dear author,
Thank you for your reply and kind explanation! I am trying hard to understand your paper and the corresponding code, but my coding skills are rather weak. I still have some questions about the loss functions for the domain adversary and the class adversary in your paper: how are JADA's overall objective, Eq. (8), and Eqs. (9)-(12) reflected in the code? Which parameters in these equations are tunable, and which ones contribute to the optimization of the model?
I would be grateful if you could take the time to answer these questions. Thank you.
Best wishes!