SSL92/hyperIQA

about data process

DIY-Z opened this issue · 0 comments

DIY-Z commented

hyperIQA/data_loader.py

Lines 31 to 35 in 685d4af

```python
elif dataset == 'koniq-10k':
    if istrain:
        transforms = torchvision.transforms.Compose([
            torchvision.transforms.RandomHorizontalFlip(),
            torchvision.transforms.Resize((512, 384)),
```

```python
sample.append((os.path.join(root, '1024x768', imgname[item]), mos_all[item]))
```

From the two code snippets above, it is evident that the images are loaded from the '1024x768' folder, i.e. with a width of 1024 and a height of 768. However, since Resize((512, 384)) interprets its argument as (height, width), it rescales the images to a height of 512 and a width of 384, noticeably changing the aspect ratio: the original height:width ratio of 768:1024 becomes 512:384. I'm curious whether the same processing was applied in the experimental setup of the paper.

Plus: according to the PyTorch documentation for Resize, the 'size' parameter refers to (height, width), not (width, height).