SeungjunNah/DeepDeblur-PyTorch

loss

Closed this issue · 4 comments

Ahha, I have a question about the loss. I see the default loss is 1*L1, but when I use ADV I get an error. The dataset is GOPRO_Large. Can you help me? Thanks!

What was the exact command you tried and what kind of error message did you encounter?

  • Beware that you should always put a multiplier in the loss argument (e.g., 1*ADV, not a bare ADV).
  • You may need more GPU memory when using the adversarial loss; if it was an out-of-memory error, mixed-precision training is recommended. Currently, Apex is required for mixed-precision training. When PyTorch 1.6 is out, I will update the amp part to use the PyTorch-native torch.cuda.amp.
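The point about always including a multiplier can be illustrated with a sketch of how a loss argument like 1*L1+3*ADV might be parsed into (weight, type) pairs. This is a hypothetical parser for illustration, not the repository's actual code; a bare term without a multiplier fails, which may be what the original error was:

```python
def parse_loss(loss_string):
    """Parse a loss spec like '1*L1+3*ADV' into (weight, type) pairs.

    Each '+'-separated term must have the form 'weight*TYPE';
    a bare 'ADV' with no multiplier raises an error here.
    (Hypothetical sketch, not the repository's parser.)
    """
    terms = []
    for term in loss_string.split('+'):
        weight, _, loss_type = term.partition('*')
        if not loss_type:
            raise ValueError(f"missing multiplier in loss term: {term!r}")
        terms.append((float(weight), loss_type))
    return terms

print(parse_loss('1*L1+3*ADV'))  # [(1.0, 'L1'), (3.0, 'ADV')]
```

Under this scheme, --loss 1*L1+ADV would be rejected while --loss 1*L1+1*ADV parses cleanly.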

OK, so is the default lambda value 1e-4? Dataset: GOPRO_Large.

Do you mean \lambda in equation 6?
No, the loss term used in this repository is different, and the default lambda is 0.
The adversarial weight is up to your choice.
As shown in the usage examples, you can try several weights.

# adversarial training
python main.py --n_GPUs 1 --batch_size 8 --loss 1*L1+1*ADV
python main.py --n_GPUs 1 --batch_size 8 --loss 1*L1+3*ADV
python main.py --n_GPUs 1 --batch_size 8 --loss 1*L1+0.1*ADV
python launch.py --n_GPUs 2 main.py --batch_size 16 --loss 1*L1+3*ADV

If you need to reduce memory usage, set --amp true.

# adversarial training
python main.py --n_GPUs 1 --batch_size 8  --amp true --loss 1*L1+1*ADV
python main.py --n_GPUs 1 --batch_size 8  --amp true --loss 1*L1+3*ADV
python main.py --n_GPUs 1 --batch_size 8  --amp true --loss 1*L1+0.1*ADV
python launch.py --n_GPUs 2 main.py --batch_size 16 --amp true --loss 1*L1+3*ADV

OK, I got it. Really, thank you!