Enhanced images with light sources have artifacts
beyzacevik opened this issue · 3 comments
Hello,
Thank you for your contribution. I am conducting experimental research on image enhancement and have run inference on a small dataset. I noticed that the light source in one of the results has a 'saturation artifact'. Do you know what causes this artifact and how to eliminate it? Maybe clipping the output would work. What do you think?
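To be concrete about what I mean by clipping: a minimal sketch (assuming the network outputs float images nominally in [0, 1], which is my guess and not confirmed from the repo) would just clamp out-of-range values after inference:

```python
import numpy as np

def clip_output(enhanced: np.ndarray) -> np.ndarray:
    """Clamp an enhanced image to the valid [0, 1] range.

    This removes out-of-range values but does not restore any detail
    that was lost in the saturated region.
    """
    return np.clip(enhanced, 0.0, 1.0)

# Hypothetical model output with values outside [0, 1] near a light source
out = np.array([[0.2, 1.4],
                [-0.1, 0.9]])
print(clip_output(out))  # values 1.4 and -0.1 become 1.0 and 0.0
```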
Hello! When a region is fully saturated (such as the light source in your example, or the failure case in our paper), our model may produce unexpected output when suppressing overexposure. This is because LCDPNet essentially adjusts the exposure and color of the original pixels (via an illumination map) rather than performing inpainting.
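To illustrate why pixel-wise adjustment cannot fix this, here is a small demonstration (the radiance values and the 0.5 gain are made up for illustration): once distinct scene radiances all clip to 1.0 in the capture, any per-pixel multiplicative adjustment scales them identically, so the lost structure cannot come back without content synthesis such as inpainting.

```python
import numpy as np

# Hypothetical scene radiances for two small patches
true_a = np.array([0.3, 0.2, 0.1])  # stays within range after exposure
true_b = np.array([2.0, 1.5, 1.2])  # blows out after exposure

# Simulated overexposed capture: 2x exposure, then sensor clipping
captured_a = np.clip(true_a * 2.0, 0.0, 1.0)  # [0.6, 0.4, 0.2] - detail kept
captured_b = np.clip(true_b * 2.0, 0.0, 1.0)  # [1.0, 1.0, 1.0] - detail gone

# A per-pixel exposure adjustment (one illumination-map value per pixel;
# here a uniform 0.5 gain for simplicity) recovers patch A but maps the
# saturated patch B to a flat gray - the distinct values are unrecoverable.
gain = 0.5
print(captured_a * gain)  # structure preserved
print(captured_b * gain)  # uniform value, no structure
```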
A possible solution is to extend the training dataset. For generalization, we built our current dataset from Adobe5k and RAISE to cover diverse scenes, but there may not be enough images containing light sources. Adding more input-GT pairs that include light sources may help.
Can you give me some guidance on how your code works, so that I can reproduce it?