tsingqguo/efficientderain

colorful stain in the predicted picture and question about loss

dongwhfdyer opened this issue · 2 comments

Colorful stains appear in the bright areas of the predicted picture

First, thank you very much for your code implementation and brilliant idea. I am using it in my project. However, I have encountered some problems. Any help is appreciated!!

As shown in the picture below, the left is the input, the middle is the prediction, and the right is the ground truth.
The model mysteriously conjures up bright colors in some places, especially in the white areas. Did this happen to you? How can I solve it? Or is it an inherent problem of the model?
Even after normalizing the input images, there was no improvement:

            transform_list += [transforms.Normalize((0.3908, 0.3859, 0.3637), (0.2434, 0.2473, 0.2440))]
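For completeness, predictions from a network trained on normalized inputs have to be denormalized before visualization, otherwise the colors look shifted. A minimal sketch of the inverse, reusing the mean/std from the snippet above:

    import torch

    # mean/std taken from the Normalize transform above
    MEAN = torch.tensor([0.3908, 0.3859, 0.3637]).view(3, 1, 1)
    STD = torch.tensor([0.2434, 0.2473, 0.2440]).view(3, 1, 1)

    def denormalize(img: torch.Tensor) -> torch.Tensor:
        """Invert transforms.Normalize and clamp back into [0, 1] for saving."""
        return (img * STD.to(img.device) + MEAN.to(img.device)).clamp(0.0, 1.0)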

[image: input (left), prediction (middle), ground truth (right)]
Here is the loss curve. The issue above appeared almost from the first epochs. I let it run for almost 1000 epochs, and the loss has not changed since around epoch 100. Why does the loss converge so quickly? Can the model keep learning once the loss stagnates? I don't understand. How many epochs did you train for? Are there any tricks for training the model?
[image: training loss curve]
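A flat loss curve does not always mean the model has stopped learning, so it might be worth tracking PSNR on a held-out pair alongside the loss. A small self-contained sketch, assuming both tensors are in [0, 1]:

    import torch
    import torch.nn.functional as F

    def psnr(pred: torch.Tensor, gt: torch.Tensor, max_val: float = 1.0) -> float:
        """Peak signal-to-noise ratio between two images in [0, max_val]."""
        mse = F.mse_loss(pred, gt)
        return (10.0 * torch.log10(max_val ** 2 / mse)).item()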
Here are my training parameters:

python ./train.py ^
--baseroot "./datasets/video_collection_25/" ^
--load_name "" ^
--multi_gpu "false" ^
--save_path "./models/models_video_coll_25_04072000" ^
--sample_path "./samples_kuhn/models_video_coll_25_04072000" ^
--save_mode "epoch" ^
--save_by_epoch 10 ^
--save_by_iter 100000 ^
--lr_g 0.0002 ^
--b1 0.5 ^
--b2 0.999 ^
--weight_decay 0.0 ^
--train_batch_size 16 ^
--epochs 2000 ^
--lr_decrease_epoch 500 ^
--num_workers 0 ^
--crop_size 128 ^
--no_gpu "false" ^
--rainaug "false" ^
--gpu_ids 0 ^
--no_flip

By the way, I added a visualizer module for your model using visdom. Would you like me to upload it?
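For reference in the meantime, the core of it is just a persistent visdom line plot, roughly like this (assumes a visdom server is running via `python -m visdom.server`; the helper name is illustrative):

    import numpy as np
    import visdom

    vis = visdom.Visdom()  # connects to the local visdom server

    def plot_loss(step: int, loss: float, win: str = "train_loss") -> None:
        """Append one (step, loss) point to a persistent line plot."""
        vis.line(X=np.array([step]), Y=np.array([loss]), win=win, update="append",
                 opts=dict(title="training loss", xlabel="iteration", ylabel="loss"))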

Now I have found that there is no activation function at the last layer of the model, which might let some outputs go out of range during inference. I simply added one tanh() layer, and it solved the problem:

    self.tanh = nn.Tanh()

But I don't know if there is a better option.
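To make it concrete, this is roughly the shape of the change; the block below is an illustrative final head, not the repo's actual layer names. Note that tanh bounds outputs to [-1, 1], so it only matches targets scaled to that range; for targets in [0, 1], nn.Sigmoid() (or a clamp at inference time) would be the analogue:

    import torch.nn as nn

    class BoundedHead(nn.Module):
        """Illustrative final block: a 3x3 conv followed by a bounding activation."""
        def __init__(self, in_ch: int, out_ch: int = 3):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            self.tanh = nn.Tanh()  # swap for nn.Sigmoid() if targets live in [0, 1]

        def forward(self, x):
            return self.tanh(self.conv(x))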

[image: prediction after adding tanh()]

Hi, we did not encounter similar issues during our experiments. Thanks for the question and the proposed solution. I will update the code.