Train Data Size & Inference Data Size
JeremyAlain opened this issue · 2 comments
JeremyAlain commented
Hi,
first of all, great work!
Could you explain what image sizes you used for training your unpaired GAN and for testing it? Even if you train on images of size 512x512, how are you able to produce the images presented in your paper, which are clearly not square?
Here are a few points:
- In your paper under the section "4. Generator" you write: "The size of input images is fixed at 512×512."
- But then in the Results section you write: "Although the model was trained on 512×512 inputs, we have extended it so that it can handle arbitrary resolutions".
What does that mean? You train on inputs of size 512×512, but then at inference you allow input of any size? How does that work? I cannot find that part in your code.
- In your GitHub description you write: "I directly used Lightroom to decode the images to TIF format and used Lightroom to resize the long side of the images to 512 resolution"
But that is not a 512×512 image then; only its longer side is set to 512, right?
These three points are confusing to me. Could you elaborate on them?
nothinglo commented
- You can pad your input images to fit 512×512 and crop the padding from your output images (see the sketch after this list).
- This question has been asked by other people; please take a look at the other issues. (The implementation is in our website demo; I will release that part in the near future.)
- Yes.
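For example, here is a minimal sketch of the pad-and-crop idea from the first point, using Pillow and NumPy. The `enhance` function is a hypothetical stand-in for the trained 512×512 generator, not code from this repository:

```python
import numpy as np
from PIL import Image

def enhance(batch):
    """Hypothetical stand-in for the trained 512x512 generator."""
    return batch  # identity placeholder; swap in the real model call

def enhance_arbitrary(path, size=512):
    img = Image.open(path).convert("RGB")

    # Resize so the long side equals 512, mirroring the Lightroom preprocessing.
    scale = size / max(img.size)
    w, h = round(img.width * scale), round(img.height * scale)
    img = img.resize((w, h), Image.LANCZOS)

    # Zero-pad the short side up to a square 512x512 input.
    arr = np.asarray(img, dtype=np.float32) / 255.0
    padded = np.zeros((size, size, 3), dtype=np.float32)
    padded[:h, :w] = arr

    out = enhance(padded)

    # Crop the padding away so the output matches the resized input.
    out = (out[:h, :w] * 255.0).clip(0, 255).astype(np.uint8)
    return Image.fromarray(out)
```

Note that zero padding can introduce artifacts near the borders, and the arbitrary-resolution handling in the demo may differ; treat this as a workaround rather than the method from the paper.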
JeremyAlain commented
thanks a lot :)