wbhu/DnCNN-tensorflow

Issue about the Loss

quangnguyenbn99 opened this issue · 7 comments

Dear Mr/Mrs

I have a question about the training loss. It plateaus between 3.4 and 3.6 and never decreases toward 0, so I am wondering: am I running the code correctly?
Sorry if this is a trivial question, and thank you very much for your attention.

wbhu commented

Hi @NathanielNguyen11,

On my machine the loss also converges to about 3.4, while it converges to about 1.2 in the original MatConvNet implementation. I am trying to find out why, but progress may be slow since I have other work to do. If you find the reason, you are welcome to open a pull request.

Thanks.

I changed the activation function in each layer from ReLU to leaky ReLU, and the loss converges to about 1.2. Hope that helps~ @crisb-DUT
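
For reference, a minimal sketch of that change in TF1-style layers. The function name `conv_bn_block` and the leak slope `alpha` are illustrative assumptions, not the repo's actual code:

```python
import tensorflow as tf

def conv_bn_block(inputs, is_training, alpha=0.2):
    """One middle DnCNN block: conv + BN + activation.

    Swapping tf.nn.relu for tf.nn.leaky_relu is the change
    described above; alpha is an assumed leak slope.
    """
    x = tf.layers.conv2d(inputs, filters=64, kernel_size=3,
                         padding='same', use_bias=False)
    x = tf.layers.batch_normalization(x, training=is_training)
    # Originally: return tf.nn.relu(x)
    return tf.nn.leaky_relu(x, alpha=alpha)
```

Leaky ReLU keeps a small gradient for negative pre-activations, which may be why the loss can keep decreasing past the 3.4 plateau.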

Hi,
I am trying to train this model on my own noise data, but I am seeing the same issue. Even after running for 12 hours on a GPU, the final loss only reaches about 3 with the Adam optimizer and about 2.5 with the Adadelta optimizer. Could you say what the expected final loss is? It is mentioned above that changing ReLU to leaky ReLU brings the loss down to about 1.2, but is that sufficient to get performance comparable to the MATLAB implementation?
Regards,
Sumit Jha

wbhu commented

Thanks @edogawachia. I will give it a try when I have some free time.

@NathanielNguyen11 I noticed that in the paper the last conv layer does not use BN or ReLU. After changing the code accordingly, the training loss decreased to less than 1.2.
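
For concreteness, a sketch of the layer layout described in the paper: first layer conv + ReLU, middle layers conv + BN + ReLU, and a final plain conv with no BN and no activation. This uses TF1-style layers with illustrative names (and assumes grayscale, single-channel input), not the repo's exact code:

```python
import tensorflow as tf

def dncnn(inputs, is_training, depth=17):
    """Predict the residual noise and subtract it from the input.

    Layer 1:            conv + ReLU.
    Layers 2..depth-1:  conv + BN + ReLU.
    Layer depth:        plain conv -- no BN, no ReLU (the fix above).
    """
    x = tf.layers.conv2d(inputs, 64, 3, padding='same')
    x = tf.nn.relu(x)
    for _ in range(depth - 2):
        x = tf.layers.conv2d(x, 64, 3, padding='same', use_bias=False)
        x = tf.layers.batch_normalization(x, training=is_training)
        x = tf.nn.relu(x)
    # Last layer: a bare conv that outputs the predicted noise.
    noise = tf.layers.conv2d(x, 1, 3, padding='same')
    return inputs - noise  # denoised image
```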

wbhu commented

Thanks @lizhiyuanUSTC,

You are right. I have merged your pull request. Now I am training the new model.

wbhu commented

Hi all,

I have trained and tested the new model, and it achieves the same Gaussian denoising performance at noise level 25 on the BSD68 test set.

Thanks for all of your help.