SeungjunNah/DeepDeblur-PyTorch

which one is the best loss func?


Hi. I'm a huge fan of yours.

Which loss function is the best among the examples below, and what's the difference between them?

e.g.)
python main.py --n_GPUs 1 --batch_size 8 --loss 1*L1+1*ADV
python main.py --n_GPUs 1 --batch_size 8 --loss 1*L1+3*ADV
python main.py --n_GPUs 1 --batch_size 8 --loss 1*L1+0.1*ADV

I used the REDS dataset for training.

thanks.

Hi @rascaliz,

Usually, the L1 loss on the training data drops to around 10, and it's up to you to balance the losses toward your target.
I'd guess 0.1 would be too small, and values around 1 ~ 3 should work.
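To make the weighting concrete, here is a minimal sketch of how a loss string like 1*L1+1*ADV reduces to a weighted sum, assuming a plain BCE-style generator term as a stand-in for the ADV loss (the function and argument names below are placeholders, not the repo's actual loss parser):

import torch
import torch.nn.functional as F

def total_loss(deblurred, sharp, disc_fake_logits, w_l1=1.0, w_adv=1.0):
    # reconstruction term: mean absolute error against the ground-truth sharp frame
    l1 = F.l1_loss(deblurred, sharp)
    # adversarial term: push the discriminator toward classifying the output as real
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return w_l1 * l1 + w_adv * adv

# with w_adv=0.1 the adversarial gradient is tiny next to an L1 term that sits around 10,
# which is why weights around 1 ~ 3 are suggested above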
Also, the default batch size of this code is 16, so memory may run out on a single GPU.
A possible workaround is to use NVIDIA Apex for mixed-precision training, as it reduces memory usage.
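For reference, here is a minimal sketch of what the Apex Amp workaround looks like in a bare training step, assuming a standard PyTorch loop; the repo's --amp true flag wires this up for you, and the model below is just a placeholder:

import torch
from apex import amp  # install Apex as described in the README

model = torch.nn.Conv2d(3, 3, 3, padding=1).cuda()   # placeholder for the deblurring network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# O1 keeps master weights in FP32 but runs most ops in FP16, cutting activation memory
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')

blur = torch.randn(16, 3, 256, 256).cuda()
sharp = torch.randn(16, 3, 256, 256).cuda()

loss = torch.nn.functional.l1_loss(model(blur), sharp)

optimizer.zero_grad()
with amp.scale_loss(loss, optimizer) as scaled_loss:  # loss scaling avoids FP16 underflow
    scaled_loss.backward()
optimizer.step()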
So in your case, the possible commands would be:

# single GPU with batch size 8
python main.py --n_GPUs 1 --batch_size 8 --loss 1*L1+1*ADV --dataset REDS --do_test false --endEpoch 200 --milestones 100 150 180

# single GPU with batch size 16 by using Amp (install Apex as written in README)
python main.py --n_GPUs 1 --batch_size 16 --loss 1*L1+1*ADV --dataset REDS --do_test false --endEpoch 200 --milestones 100 150 180 --amp true

# 2-GPU with batch size 16
python launch.py --n_GPUs 2 main.py --batch_size 16 --loss 1*L1+1*ADV --dataset REDS --do_test false --endEpoch 200 --milestones 100 150 180
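In case it helps, here is a minimal sketch of what each worker process ends up doing in the 2-GPU case, assuming launch.py spawns one process per GPU in the style of torch.distributed.launch and passes a --local_rank argument (the model and argument handling here are placeholders, not the repo's code):

import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)
args = parser.parse_args()

# one process per GPU; the launcher sets MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE
torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend='nccl', init_method='env://')

model = torch.nn.Conv2d(3, 3, 3, padding=1).cuda()   # placeholder for the deblurring network
model = DDP(model, device_ids=[args.local_rank])      # gradients are averaged across GPUs

# each process then trains on its own portion of the data, which is how the 2-GPU run
# fits batch size 16 where a single GPU runs out of memory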