When testing your pre-trained model, I found some problems
It's great work! When I tested your pre-trained model, I found some problems. I look forward to your answer, thank you!
- When I tested the picture "04.JPG" in the "test" folder, I found that the result was inconsistent with the results shown in your paper and on the GitHub page.
- When I checked your code again, the implementation seems inconsistent with the description in the paper. For example, "evaluate.py" contains an image fusion step, but this fusion is not mentioned in the paper.
The paper mentioned above is your published paper: https://dl.acm.org/citation.cfm?id=3350926
The reflectance restoration net can generate over-exposed results, so we fuse its outputs with the original inputs to alleviate this problem. In fact, this kind of fusion is common in many low-light image enhancement algorithms, which is why we did not mention it in the paper.
We have changed the fusion process in the released code, and it works well for many low-light images. You can also design your own fusion process to get better results. Thank you!
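For reference, one simple fusion scheme is an illumination-weighted blend between the restored result and the original input. The sketch below is illustrative only; the weighting scheme, function name, and arguments are assumptions on my part, not the exact code in evaluate.py:

```python
import numpy as np

def fuse_with_input(restored, original, illumination):
    """Blend the restored result with the original low-light input.

    restored, original: H x W x 3 float arrays in [0, 1].
    illumination: H x W float array in [0, 1] (predicted illumination map).
    Bright regions (high illumination) keep more of the original input,
    which suppresses over-exposure; dark regions keep the restored result.
    """
    w = illumination[..., np.newaxis]            # per-pixel blending weight
    fused = (1.0 - w) * restored + w * original  # weighted blend
    return np.clip(fused, 0.0, 1.0)
```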
Thanks for your reply. You have resolved my second question, but my test results are still inconsistent with the results in your paper (question 1). Can you explain? Thank you!
The pre-trained model has changed, so the PSNR and SSIM values also differ from those in our paper. On the LOL dataset, PSNR is lower while SSIM is higher than the values reported in the paper. You can adjust the illumination map to obtain a higher PSNR.
This change has only a slight effect on our results. Thank you!
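For example, one simple way to adjust the illumination is to apply a gamma-style exponent to the predicted illumination map before recombining it with the reflectance. The ratio parameter and function name below are illustrative assumptions, not the exact adjustment used in the released code:

```python
import numpy as np

def adjust_illumination(reflectance, illumination, ratio=0.8):
    """Recompose an output with a gamma-adjusted illumination map.

    reflectance: H x W x 3 float array in [0, 1].
    illumination: H x W float array in [0, 1].
    ratio: exponent on the illumination map; values < 1 brighten the
           result, values > 1 darken it. Sweeping this value per image
           (or per dataset) changes the PSNR against the reference.
    """
    adjusted = np.power(illumination, ratio)[..., np.newaxis]
    return np.clip(reflectance * adjusted, 0.0, 1.0)
```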
OK, thanks for your reply. However, I think this may cause misunderstandings when others test, compare against, or cite the paper.