IceClear/CLIP-IQA

Code without OpenMMLab integration?

justlike-prog opened this issue · 2 comments

@IceClear Any chance of getting the code without the OpenMMLab dependency? It would make experimenting easier. Thanks for the awesome work, by the way.

Or rather, it would be nice to know which PyTorch transforms are needed to reproduce the results for a given image. The OpenMMLab transforms are quite cryptic and differ from the torchvision ones. Were the paper's results based on the mmcv transforms? Right now I am doing the following, but my results are skewed a bit:

```python
from PIL import Image
from torchvision import transforms

# ImageNet normalization statistics
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.open("./test.jpeg").convert("RGB")
img = transform(img)
img = img.unsqueeze(0)  # add batch dimension -> (1, 3, H, W)
```

Hi, you may refer to IQA-pytorch, which also supports CLIP-IQA :)
It should be easy to use.