norm [-1, 1] for net='vgg'?
DISAPPEARED13 opened this issue · 5 comments
I found that you mention we need to normalize the RGB image range to [-1, 1], but the example I saw only used net='alex'. I want to know whether this holds for net='vgg', too.
I'm asking because in the comments at https://discuss.pytorch.org/t/how-to-preprocess-input-for-pre-trained-networks/683/2, @smth said the input should be normalized to [0, 1], and now I am confused...
I also noticed that you load your own pretrained network by passing "net" and "version". Are you just using the architecture defined by torchvision.models.vgg16 and then training it on ImageNet with inputs normalized to [-1, 1]?
Sorry to bother you, and thanks a lot! :)
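For reference, this is how I'm currently calling it (a minimal sketch, assuming the pip-packaged `lpips` API; the random tensors just stand in for my images):

```python
import torch
import lpips  # assuming the pip-packaged API of this repo

# Build the metric with the VGG backbone
loss_fn = lpips.LPIPS(net='vgg')

# Random stand-ins for two RGB images, originally in [0, 1]
img0 = torch.rand(1, 3, 64, 64)
img1 = torch.rand(1, 3, 64, 64)

# Map [0, 1] -> [-1, +1] before the forward pass
d = loss_fn(2 * img0 - 1, 2 * img1 - 1)
print(d.item())
```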
I saw the related question in #83, where @richzhang said the mean and std are for [-1, 1]. If our data go from grayscale to RGB by repeating the single slice to 3 channels, should I just remove the ScalingLayer in LPIPS.forward?
Thanks!
I think you can scale the data to [-1, +1], repeat it to 3 channels, and keep the scaling layer.
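Something like this (a minimal sketch, again assuming the pip-packaged `lpips` API; the random tensors stand in for real slices):

```python
import torch
import lpips

loss_fn = lpips.LPIPS(net='vgg')  # keep the built-in scaling layer as-is

# Grayscale slices, shape (N, 1, H, W), already scaled to [-1, +1]
gray0 = torch.rand(1, 1, 128, 128) * 2 - 1
gray1 = torch.rand(1, 1, 128, 128) * 2 - 1

# Repeat the single channel to 3 so the pretrained RGB backbone applies unchanged
d = loss_fn(gray0.repeat(1, 3, 1, 1), gray1.repeat(1, 3, 1, 1))
```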
Thanks for replying! Is it proper to use those std and mean values to scale my own grayscale medical images, or should I change the parameters in the layer?
I think just keep the parameters, but there's no 100% correct solution. This network was pretrained on ImageNet classification of natural images, so there's no real guarantee it works well for medical images.
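For context, "keep the parameters" refers to the per-channel shift and scale buffers inside the scaling layer. A rough re-implementation sketch of what that layer computes (the values below are what I believe the repo uses, expressed for [-1, 1] input; treat them as illustrative rather than canonical):

```python
import torch
import torch.nn as nn

class ScalingLayer(nn.Module):
    """Sketch of the repo's ScalingLayer: maps a [-1, +1] input to the
    ImageNet-style per-channel statistics the pretrained backbones expect."""
    def __init__(self):
        super().__init__()
        # Per-channel shift (mean) and scale (std), expressed for [-1, 1] input
        self.register_buffer('shift', torch.tensor([-.030, -.088, -.188])[None, :, None, None])
        self.register_buffer('scale', torch.tensor([.458, .448, .450])[None, :, None, None])

    def forward(self, inp):
        return (inp - self.shift) / self.scale
```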
Thanks a lot. I think that may be a reasonable option, given the different distributions of natural images and medical images.