badripatro/SpectFormers

weight = torch.view_as_complex(self.complex_weight)


Hello, thank you for your masterpiece; I am deeply inspired by your paper. I have a question. Suppose every image in the training set is resized to 3×224×224 for training. The learned weight in `weight = torch.view_as_complex(self.complex_weight)` is then a weight matrix that only matches 3×224×224 inputs. At test time, however, some domains require 3×512×512 images, and the weight learned at 3×224×224 does not apply to them. Using reshape() or resize() to shrink the 3×512×512 images down to 3×224×224 would hurt test accuracy in the compressed-sensing setting. Do you have any good suggestions for accurate testing without changing the image size at test time?
Thank you very much for your answers!
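While waiting for the authors, one workaround used by spectral-filter models in the GFNet family is to keep the images at their native resolution and instead interpolate the learned filter to the new frequency-grid size. The sketch below is an assumption about how this could be done here, not the authors' method: it assumes `self.complex_weight` stores real/imaginary parts in a tensor of shape `(H, W, C, 2)` (the grid sizes `14×8` and `32×17` below are illustrative), and uses bicubic interpolation via `torch.nn.functional.interpolate`.

```python
import torch
import torch.nn.functional as F

def resize_complex_weight(complex_weight: torch.Tensor,
                          new_h: int, new_w: int) -> torch.Tensor:
    """Interpolate a (H, W, C, 2) real/imag filter to (new_h, new_w, C, 2).

    Sketch only: assumes the last dimension holds the real and imaginary
    parts, as required by torch.view_as_complex.
    """
    h, w, c, _ = complex_weight.shape
    # (H, W, C, 2) -> (1, C*2, H, W) so F.interpolate resizes the spatial grid
    weight = complex_weight.permute(2, 3, 0, 1).reshape(1, c * 2, h, w)
    weight = F.interpolate(weight, size=(new_h, new_w),
                           mode="bicubic", align_corners=False)
    # back to (new_h, new_w, C, 2)
    return weight.reshape(c, 2, new_h, new_w).permute(2, 3, 0, 1)

# Hypothetical usage: a filter trained on a 14x8 frequency grid (224x224
# inputs) is resized to the 32x17 grid produced by 512x512 inputs.
w224 = torch.randn(14, 8, 64, 2)
w512 = resize_complex_weight(w224, 32, 17)
weight = torch.view_as_complex(w512.contiguous())  # shape (32, 17, 64)
```

This keeps the 3×512×512 test images untouched; only the filter is resampled. Whether bicubic resampling of a frequency-domain filter is accurate enough for your compressed-sensing use case would need to be validated empirically, so the authors' guidance is still the authoritative answer.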