Question about training
vectorzwt opened this issue · 5 comments
Hello Benjamin!
Thanks for your awesome work! I have read the training tutorials you provide, but all of them involve label maps. I just want to do super-resolution, and I do not have labels: I only have LR images and their corresponding HR images. Can you give me some training advice? Thank you!
I am also trying something similar and am wondering if there is any idea on how to proceed!
I also have a question on how to generate the following files needed for training: generation_classes.npy, generation_labels.npy, prior_means_t1_hr.npy, prior_means_t1_lr.npy, prior_means_t2.npy, prior_stds_t1_hr.npy, prior_stds_t1_lr.npy, prior_stds_t2.npy
Hi all!
First of all, thanks for your interest in our work.
@vectorzwt @xdavidg I think there's a slight misunderstanding here. SynthSR is not a typical super-resolution framework, where you train a model to regress HR images from LR images. If you do this, the model is going to be very good at doing super-resolution on images similar to your training data, but it will not generalise to out-of-distribution data (the domain gap problem). This is why we propose to generate the training data (LR-HR pairs) ourselves, using label maps as inputs to a parametric generative model (i.e. not learning-based). Crucially, we randomise the parameters of this generative model, so that we obtain images of random contrast and resolution. By training the network on these images of randomised appearance, we force it to learn domain-agnostic features, such that it can super-resolve images of any contrast and resolution.
All of this to say that you need label maps to train SynthSR, because using real images is only going to make the network work well on similar images.
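To make the idea concrete, here is a minimal toy sketch of what "randomised generative model from a label map" means. This is NOT SynthSR's actual code (the real model works in 3D with many more randomised parameters, bias fields, resampling, etc.); it only illustrates the principle: each call samples a new random contrast, then degrades the synthetic HR image to produce a paired LR input.

```python
import numpy as np

def synth_pair(labels, factor=4, rng=None):
    """Toy randomised generative model (illustration only, not SynthSR).

    labels: 2-D integer label map whose dimensions are divisible by `factor`.
    Returns a (lr, hr) image pair with a freshly randomised contrast.
    """
    rng = rng or np.random.default_rng()
    # sample a random mean intensity per label -> new contrast every call
    means = {lab: rng.uniform(0.0, 255.0) for lab in np.unique(labels)}
    hr = np.zeros(labels.shape, dtype=float)
    for lab, m in means.items():
        mask = labels == lab
        hr[mask] = rng.normal(m, 5.0, mask.sum())  # per-label Gaussian intensities
    # crude low-resolution simulation: block-average down, nearest-neighbour up
    h, w = hr.shape
    lr_small = hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    lr = np.repeat(np.repeat(lr_small, factor, axis=0), factor, axis=1)
    return lr, hr
```

Training on many such pairs, with the contrast and resolution resampled for every example, is what forces the network to ignore the specific appearance of any one domain.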
@aremirata this is all explained in the tutorials https://github.com/BBillot/SynthSR/tree/main/scripts/tutorials
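As a rough illustration of what files like prior_means_*.npy and prior_stds_*.npy contain (the tutorials linked above are the authoritative reference; the function below is a hypothetical sketch, and the exact file layout SynthSR expects may differ), per-label intensity statistics can be estimated from a co-registered image and label map like this:

```python
import numpy as np

def estimate_label_priors(image, labels):
    """Hypothetical sketch: per-label intensity mean/std from one image.

    image:  intensity image (any shape)
    labels: integer label map of the same shape
    Returns (label_values, means, stds), one entry per unique label.
    """
    label_values = np.unique(labels)
    means = np.array([image[labels == lab].mean() for lab in label_values])
    stds = np.array([image[labels == lab].std() for lab in label_values])
    return label_values, means, stds

# e.g. np.save('prior_means_t1_hr.npy', means)  # one file per contrast/resolution
```

In practice such statistics would be pooled over a whole training set rather than a single image; see the tutorials for how the files are actually built and used.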
Hope this helps
Benjamin
Got it! Thank you very much. I will close this issue.