Inference results conflict?
innat opened this issue · 3 comments
Interesting work. Thanks for sharing.
However, I have tried to run inference on some samples with the provided pretrained weights, but for some reason I couldn't get the expected results. I tried with both the executable files and the source files. For example, while using the source files, I checked both weights (DF2K.pth and DPED.pth). Please see below:
[Images: input LR sample, output using DF2K.pth, output using DPED.pth]
Now, given the visual results demonstrated here, I expected the same or at least comparable results. Any catch?
Apart from this, here is another issue: at test time, isn't it possible to process multiple low-resolution images at once? For example, as demonstrated here, if I place (say) 5 images in `dataroot_LR` (the test images dir), it throws a CUDA out-of-memory `RuntimeError`.
You got an OOM error, as the error message suggests... what's your image resolution and GPU memory? You can try inputting five 100x100-resolution images and it should be fine.
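A rough sketch of one way to keep memory usage down, assuming a generic PyTorch SR setup (`model`, `load_img`, and `save_img` are placeholders, not this repo's API): run the test images one at a time under `no_grad` and free cached memory between images, so peak GPU usage stays close to that of a single image.

```python
import glob
import torch

def test_one_by_one(model, lr_dir, load_img, save_img):
    """Run inference on each LR image separately to avoid batching-related OOM."""
    model.eval()
    for path in sorted(glob.glob(f"{lr_dir}/*.png")):
        lr = load_img(path).cuda()          # 1xCxHxW float tensor (placeholder loader)
        with torch.no_grad():               # no autograd buffers during testing
            sr = model(lr)
        save_img(sr.cpu(), path)            # placeholder writer
        del lr, sr
        torch.cuda.empty_cache()            # release cached blocks before the next image
```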
I got errors when trying to run inference on more than one image. Let's forget that for now. I'm getting the above result from a single image; any catch?
Your LR input image already has some blocking artifacts and is probably different from the distribution of the LR training data. Can you try downsampling the image by a factor of 2 or 4 and running it again? Those blocking artifacts might be suppressed by downsampling.
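A minimal sketch of the suggestion above (not part of this repo; the file paths are placeholders): downsample the LR input by 2x or 4x with bicubic interpolation before running the test script, which can suppress JPEG-style blocking artifacts.

```python
import cv2

def downsample(in_path, out_path, factor=2):
    """Shrink an LR image by an integer factor using bicubic interpolation."""
    img = cv2.imread(in_path)               # BGR, uint8
    h, w = img.shape[:2]
    small = cv2.resize(img, (w // factor, h // factor),
                       interpolation=cv2.INTER_CUBIC)
    cv2.imwrite(out_path, small)

# e.g. downsample("dataroot_LR/sample.png", "dataroot_LR/sample_x2.png", factor=2)
```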