Issues with pre-trained model parameters
Opened this issue · 1 comment
We used the ldr2hdr.npz parameters from the Drive link for the pre-trained model. The problem we are facing is that the image we use as input comes out as the output without any changes; the model does not seem to be doing any computation on the image at all.
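In case it helps others debugging this, here is a minimal diagnostic sketch (my own, not from the repo): assuming ldr2hdr.npz is a standard NumPy .npz archive of weight arrays, listing its contents can confirm whether the downloaded checkpoint actually holds trained parameters before suspecting the inference code. The file path is an assumption.

```python
# Hypothetical sanity check: inspect the downloaded checkpoint contents.
import numpy as np

params = np.load("ldr2hdr.npz")  # path is an assumption
for name in params.files:
    arr = params[name]
    print(f"{name}: shape={arr.shape}, dtype={arr.dtype}, "
          f"mean={arr.mean():.4f}, std={arr.std():.4f}")

# If the archive has no entries, or every array has std == 0, the download
# may be corrupted, or the weights are not actually being restored into
# the network before inference.
```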
Also, in the folder structure you specified, "input --- input ldr images" and "samples --- ldr results", I see the opposite: the LDR jpg images (sky blown out) are in the samples folder, whereas the HDR jpg images (clouds visible) are in the input folder. Am I wrong?
I would also like to know more about the training images you used to build your pre-trained model. If I could have that dataset, it would be highly appreciated. I am working on a similar project and am using this paper as a baseline; my project is limited to the work done in "HDR image reconstruction from a single exposure using deep CNNs".
I encountered the same problem: after running ldr2hdr, the image does not change. Did you manage to find a solution?
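One thing worth checking (a hedged sketch of my own, not from the repo): a predicted HDR image can look identical to the LDR input when both are viewed as 8-bit JPEGs, because the extra dynamic range is clipped on display. Comparing the raw pixel values tells you whether the network is really a no-op. The file names and formats below are assumptions.

```python
# Hypothetical check: compare the numeric range of the input LDR and the
# reconstructed HDR instead of judging by the JPEG preview.
import cv2
import numpy as np

ldr = cv2.imread("input/example.jpg").astype(np.float32) / 255.0
hdr = cv2.imread("samples/example.hdr", cv2.IMREAD_UNCHANGED).astype(np.float32)

print("LDR range:", ldr.min(), ldr.max())   # always within [0, 1]
print("HDR range:", hdr.min(), hdr.max())   # should exceed 1 in blown-out areas
print("Mean absolute difference:", np.abs(hdr - ldr).mean())

# If the HDR maximum never exceeds 1 and the difference is ~0, the output
# really is unchanged; if it goes well above 1, the reconstruction worked
# and only the tonemapped preview is misleading.
```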