Why are the features extracted by VGG not used as loss terms
Closed this issue · 1 comments
shi991027 commented
Thank you very much for your work. However, in the training code for the Light-Effects Suppression section (ENHANCENET.py), I found that VGG is not used to extract features from the original and generated images as a loss term, even though the paper mentions this. Why is that?
jinyeying commented
The VGG feature loss is implemented separately; refer to PerceptualLossVgg16ExDark(nn.Module) in demo.py.