A question about this code
xiaoyangx0 opened this issue
I noticed that Opacus does not support BatchNorm2d, so we have to use convert_batchnorm_modules to convert BatchNorm2d modules to GroupNorm. But in that case we can no longer use the BatchNorm statistics to conduct the gradient attack. How can this be solved? Thanks for your reply.
Hi,
Thanks for the question!
Yes, DPSGD by nature does not comply with BatchNorm, and we do support converting BatchNorm layers to GroupNorm (the relevant lines are commented out by default, so you would need to un-comment them to enable the conversion):
GradAttack/gradattack/defenses/dpsgd.py, lines 140 to 142 (commit 4496f57)
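For intuition, here is a minimal sketch of what such a conversion typically does. This is an illustration, not the exact GradAttack helper; its real signature and group-count heuristic may differ:

```python
import torch.nn as nn

def _valid_num_groups(num_channels: int, max_groups: int = 32) -> int:
    # GroupNorm requires num_channels % num_groups == 0, so fall back
    # to the largest divisor not exceeding max_groups.
    g = min(max_groups, num_channels)
    while num_channels % g != 0:
        g -= 1
    return g

def convert_batchnorm_modules(model: nn.Module) -> nn.Module:
    """Recursively replace BatchNorm2d with GroupNorm. GroupNorm
    normalizes within each sample, so it keeps no cross-sample batch
    statistics and is therefore compatible with DPSGD/Opacus."""
    for name, child in model.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(model, name,
                    nn.GroupNorm(_valid_num_groups(child.num_features),
                                 child.num_features,
                                 affine=child.affine))
        else:
            convert_batchnorm_modules(child)
    return model
```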
To launch the gradient inversion attack with DPSGD as the defense, you may want to design your own regularization term for the GroupNorm statistics. The regularization term for the BatchNorm statistics may be a good reference:
GradAttack/gradattack/attacks/gradientinversion.py, lines 325 to 334 (commit 4496f57)
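As a rough sketch of what such a BatchNorm-statistics term looks like in gradient inversion attacks (the names `feature_maps` and `bn_layers` are hypothetical; the actual code at the lines above may differ):

```python
import torch

def bn_statistics_regularizer(feature_maps, bn_layers):
    """Penalize the gap between the per-channel statistics of the dummy
    batch's feature maps and the BatchNorm running statistics that the
    attacker is assumed to know. An analogous term for GroupNorm would
    match per-group, per-sample statistics instead."""
    reg = 0.0
    for feat, bn in zip(feature_maps, bn_layers):
        mean = feat.mean(dim=(0, 2, 3))                # dummy-batch channel means
        var = feat.var(dim=(0, 2, 3), unbiased=False)  # dummy-batch channel variances
        reg = reg + torch.norm(mean - bn.running_mean, 2)
        reg = reg + torch.norm(var - bn.running_var, 2)
    return reg
```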
Happy to answer further questions if any :)
Best,
Yangsibo
Thanks for your reply. I noticed that both the attack model and the target model are in evaluation mode:
    if eval_mode is True:
        self.eval()
    else:
        self.train()
But if we set them to training mode, the attack produces much worse results than those reported in the paper.
Evaluation mode means we are using the BatchNorm statistics accumulated during training, which is a strong assumption in realistic applications.
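For concreteness, here is a minimal standalone PyTorch illustration of the difference (not GradAttack code):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)
x = torch.randn(8, 3, 4, 4)

bn.train()
_ = bn(x)  # normalizes with the current batch's statistics (and updates the running ones)

bn.eval()
_ = bn(x)  # normalizes with the stored running_mean / running_var from training
```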
That is my question; thanks for your answer.
Hi,
Thanks for the follow-up question!
> But if we set them to training mode, the attack produces much worse results than those reported in the paper.
You are definitely right about this, and this is one of the main takeaways from Section 3 of our paper.
Also, please note that the main results (Table 2) we reported in the paper are for the strongest (and unrealistic) setting, where the attacker has access to:
- BatchNorm statistics of the private batch
- labels of the private batch.
We evaluated this scenario because it helps us understand the upper bound of realistic attack performance.
Best,
Yangsibo