PyTorch Lightning version 1.3.1 vs 1.5.1
saljanahi opened this issue · 5 comments
Hi,
The requirements file lists pytorch-lightning 1.5.1, but `pip install gradattack` installs pytorch-lightning 1.3.1, under which the deprecated manual_backward signature is still used. Can you please confirm which pytorch-lightning version to use?
Also, I've had trouble replicating the baseline you achieved for the no-defense setting: the reconstructions are very poor (even with a ResNet-18 pretrained for 200 epochs to 93% test accuracy, with BN_exact and all the other hyperparameters recommended in your paper). Could I be missing an argument or something else?
Hi,
Thanks for posting the issue! pytorch-lightning 1.5.1 should work.
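In case it helps, one way to get onto that version after installing the package (assuming `gradattack` is the PyPI name, as in the `pip install` command above):

```
pip install gradattack
pip install --upgrade pytorch-lightning==1.5.1
```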
In terms of the experiments that you have trouble reproducing, would you mind sharing the exact bash script you use to run the attack? This would help us identify the issue and help you resolve it. Thanks!
Hi :),
I have been using pytorch-lightning 1.3.1, because with pytorch-lightning 1.5.1 I keep getting an error from manual_backward:
Traceback (most recent call last):
File "attack_cifar10_gradinversion.py", line 213, in
run_attack(pipeline, attack_hparams)
File "attack_cifar10_gradinversion.py", line 199, in run_attack
attack_trainer.fit(attack)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 735, in fit
self._call_and_handle_interrupt(
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 682, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1193, in _run
self._dispatch()
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1272, in _dispatch
self.training_type_plugin.start_training(self)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
self._results = trainer.run_stage()
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1282, in run_stage
return self._run_train()
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1312, in _run_train
self.fit_loop.run()
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 234, in advance
self.epoch_loop.run(data_fetcher)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 195, in advance
batch_output = self.batch_loop.run(batch, batch_idx)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 90, in advance
outputs = self.manual_loop.run(split_batch, batch_idx)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/manual_loop.py", line 111, in advance
training_step_output = self.trainer.accelerator.training_step(step_kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 216, in training_step
return self.training_type_plugin.training_step(*step_kwargs.values())
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 213, in training_step
return self.model.training_step(*args, **kwargs)
File "/home/saljanahi/GradAttack/gradattack/attacks/gradientinversion.py", line 369, in training_step
reconstruction_loss = self.optimizer.step(closure=_closure)
File "/home/saljanahi/.local/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
return wrapped(*args, **kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/torch/optim/optimizer.py", line 89, in wrapper
return func(*args, **kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/torch/optim/adam.py", line 66, in step
loss = closure()
File "/home/saljanahi/GradAttack/gradattack/attacks/gradientinversion.py", line 361, in closure
self.manual_backward(reconstruction_loss, self.optimizer)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1425, in manual_backward
self.trainer.accelerator.backward(loss, None, None, *args, **kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 311, in backward
self.precision_plugin.backward(self.lightning_module, closure_loss, *args, **kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 91, in backward
model.backward(closure_loss, optimizer, *args, **kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1444, in backward
loss.backward(*args, **kwargs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/saljanahi/.local/lib/python3.8/site-packages/torch/autograd/init.py", line 140, in backward
grad_tensors = _tensor_or_tensors_to_tuple(grad_tensors, len(tensors))
File "/home/saljanahi/.local/lib/python3.8/site-packages/torch/autograd/init.py", line 65, in _tensor_or_tensors_to_tuple
return tuple(tensors)
TypeError: 'Adam' object is not iterable
For the no-defense experiment, I'm loading a checkpoint of a ResNet-18 model pretrained on CIFAR-10 with 94% test accuracy. My command and hyperparameters are below:
python3 attack_cifar10_gradinversion.py --batch_size 1 --BN_exact --tv 0.01 --gpuid 3 --bn_reg 0.001
{'reconstruct_labels': False, 'signed_image': False, 'mini': False, 'large': False, 'BN_exact': True, 'attacker_eval_mode': False, 'defender_eval_mode': False, 'total_variation': 0.01, 'epoch': 0, 'bn_reg': 0.001, 'attack_lr': 0.1}
Global seed set to 1234
I used the same 10,000 attack iterations. I varied bn_multiplier between 1 and 10, but it didn't make much of a difference. I'm trying to replicate the high reconstruction quality of the no-defense, single-image-batch, strongest-attack case from the original paper. Thank you for your support :)
Hi there,
I've been trying several variations with different pretrained and untrained models, using the hyperparameters for the strongest attack and no defense. I'm still unable to achieve the high reconstruction quality of the strongest-attack, no-defense setting reported in the paper (and in the original Inverting Gradients paper). Do you have any recommendations on what I might be doing wrong? Or could you provide an example file to replicate it?
Thanks,
Hi,
I resolved the compatibility issue with pytorch-lightning==1.5.1 by removing the optimizer argument from the manual_backward call at line 361 of gradientinversion.py.
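For anyone else hitting this, here is the change in context. This is a sketch: only the manual_backward call itself comes from the traceback, and the comments reflect my reading of why pytorch-lightning 1.5.x rejects the old call.

```python
# gradattack/attacks/gradientinversion.py, around line 361, inside the closure

# Before: works on pytorch-lightning 1.3.x, where passing the optimizer was
# merely deprecated. On 1.5.x the extra positional argument is forwarded all
# the way down to loss.backward() as its `gradient` argument, which is why
# the traceback ends in "TypeError: 'Adam' object is not iterable".
self.manual_backward(reconstruction_loss, self.optimizer)

# After: pytorch-lightning 1.5.x expects just the loss.
self.manual_backward(reconstruction_loss)
```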
However, I'm still unable to replicate the very good LPIPS of the baseline no-defense, strong-assumptions attack on ResNet-18 with CIFAR-10 reported in the original paper. Could you share the checkpoint you used for it? I shared the details of the hparams and model training above.
Hi Sulaiman,
Thanks for your question! I think the problem could be that you were evaluating at a relatively late epoch, where the model has almost converged and yields close-to-zero gradients.
FYI, our previous evaluation was conducted with models at earlier stages (e.g. after 1 or 2 epochs).
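If it helps, here's a minimal sketch of producing such an early-epoch checkpoint, using plain PyTorch/torchvision rather than GradAttack's own training script (the file names and hyperparameters are illustrative):

```python
import torch
import torchvision
from torchvision import transforms

# Train ResNet-18 on CIFAR-10 for just a couple of epochs and save the
# early, far-from-converged checkpoints for attack evaluation.
device = "cuda" if torch.cuda.is_available() else "cpu"
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = torchvision.models.resnet18(num_classes=10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(2):  # stop early: gradients here are still informative
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    torch.save(model.state_dict(), f"resnet18_cifar10_epoch{epoch + 1}.ckpt")
```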
Let me also try to fix the pytorch-lightning requirement problem and get back to you later :)
Best,
Yangsibo