youngerous/ddgsd-pytorch

about network structure and loss function

Light-- opened this issue · 2 comments

Hello, thank you for sharing! @youngerous

I have some questions:

  1. In the paper (Figure 2), the MMD loss is applied before the FC (linear) layer, on the global feature output by the backbone; the FC layer is the one labeled "classifier" in the figure. However, in your code (L143 ~ L159), all losses are calculated on the output of the FC layer. Why? Does it make no difference?

  2. In the paper, MMD loss is used, but your implementation uses MSE loss. Are they the same thing? Have you compared them in experiments?

  3. Based on your implementation, the predictor in the figure is not a network module but only a softmax transformation. Is that right?

  4. In Table 3 of the paper, the CIFAR-100 top-1 errors of the ResNet-18 baseline and DDGSD are 23.45 and 21.47, but your shared reproduction results are 30.15 and 26.60.

  • In your opinion, what caused this large gap?
  • Have you ever reproduced results closer to the paper's with your implementation? If so, can you share them?

Sorry to bother you with so many questions. I look forward to your reply. Thank you!

Hi, thank you for the nice issue!
My comments are below:

  1. In the paper (Figure 2), the MMD loss is applied before the FC (linear) layer, on the global feature output by the backbone; the FC layer is the one labeled "classifier" in the figure. However, in your code (L143 ~ L159), all losses are calculated on the output of the FC layer. Why? Does it make no difference?

My implementation differs a little from the original. As you note, the MMD loss should be applied to the backbone output, but I applied it to the logits for convenience. You can easily change the return format of the ResNet forward function (e.g., return two outputs: backbone feature & logits) and calculate the MMD loss on the feature in the trainer.
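Something like the rough sketch below (not the actual repo code; the wrapper name is just for illustration, assuming a torchvision ResNet-18):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ResNetWithFeature(nn.Module):
    # Wraps a torchvision ResNet-18 so that forward() returns both the
    # global (pooled) feature and the logits.
    def __init__(self, num_classes=100):
        super().__init__()
        backbone = resnet18(num_classes=num_classes)
        # Everything up to and including global average pooling.
        self.feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
        self.fc = backbone.fc  # final linear classifier

    def forward(self, x):
        feat = self.feature_extractor(x).flatten(1)  # (B, 512) global feature
        logit = self.fc(feat)                        # (B, num_classes)
        return feat, logit
```

In the trainer you would then compute the MMD (or MSE) consistency loss on `feat` from the two distorted views, and keep the cross-entropy losses on `logit`.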

  2. In the paper, MMD loss is used, but your implementation uses MSE loss. Are they the same thing?

MSE and MMD are different, as written in the paper.
You can swap in MMD loss for a rigorous reproduction.
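For reference, a generic RBF-kernel MMD between the two batches of features could look like the sketch below. I don't know the exact kernel and bandwidth the authors used, so treat those choices as assumptions:

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Biased estimate of squared MMD between two feature batches (B, D)
    from the two distorted views, using an RBF kernel with bandwidth sigma."""
    def rbf_kernel(a, b):
        # Pairwise squared Euclidean distances, shape (B, B).
        dist_sq = torch.cdist(a, b, p=2).pow(2)
        return torch.exp(-dist_sq / (2 * sigma ** 2))

    k_xx = rbf_kernel(x, x).mean()
    k_yy = rbf_kernel(y, y).mean()
    k_xy = rbf_kernel(x, y).mean()
    return k_xx + k_yy - 2 * k_xy
```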

  3. Based on your implementation, the predictor in the figure is not a network module but only a softmax transformation. Is that right?

Yeah, that is my understanding.
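In code, that interpretation amounts to nothing more than the snippet below (dummy tensors, just to show that no extra parameters are involved):

```python
import torch
import torch.nn.functional as F

# Dummy logits for the two distorted views of the same batch (B=4, C=100).
logits_view1 = torch.randn(4, 100)
logits_view2 = torch.randn(4, 100)

# The "predictor" is just a softmax over the logits, not a learned module
# (this is my reading of the figure, not something the authors state).
prob_view1 = F.softmax(logits_view1, dim=1)
prob_view2 = F.softmax(logits_view2, dim=1)
```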

  4. In Table 3 of the paper, the CIFAR-100 top-1 errors of the ResNet-18 baseline and DDGSD are 23.45 and 21.47, but your shared reproduction results are 30.15 and 26.60.
  • In your opinion, what caused this large gap?
  • Have you ever reproduced results closer to the paper's with your implementation? If so, can you share them?

Unfortunately, I didn't reproduce the authors' results because I just focused on checking the effect of the concept, so there are some differences in the details, such as the following (there may be more):
① I didn't use Pre-Act ResNet, just plain ResNet. If you want to use Pre-Act ResNet, please refer to this code.
② Using MSE loss instead of MMD loss could make a difference.
③ I didn't implement the concat phase shown in the figure, because the authors didn't explain it explicitly.

It was also hard to know the authors' intent because they did not share their code.
Maybe I can try to reproduce the original performance later,
but I'm not sure I can do it in the near future because I'm working on other research now :(
(I ask for your understanding 🤣)

Have a nice day!


Thanks very much for your quick reply. You've done a great job already, and I totally understand your situation. I'm also exploring this paper's concept based on the paper and your implementation, but I didn't get any improvement on face recognition. I would be very grateful if you can further improve the code in the future, but it's fine if you don't. Thank you!

Have a nice day, too!