ankanbhunia/PIDM

About the training images

Closed this issue · 7 comments

Thanks for your great work!

I noticed that the training dataset you provide is 256x256, but the original images are 256x176. I would like to know how you converted the 256x176 images into 256x256.

Thanks again!

We resize the images. Specifically, we use Image.BICUBIC (antialiasing) as the interpolation method in the resize function.
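
For reference, a minimal sketch of that kind of resize with PIL (the file names here are placeholders, not paths from the repo):

```python
from PIL import Image

# Minimal sketch: stretch a 176x256 (W x H) image to 256x256 using
# bicubic interpolation, as described above. File names are placeholders.
img = Image.open("02_1_front.jpg")               # original 176x256 image
img_256 = img.resize((256, 256), Image.BICUBIC)  # resize takes (width, height)
img_256.save("02_1_front_256x256.jpg")
```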

Thank you for your reply.
My wording may have been unclear.

I noticed that the high-resolution image is different from the 256 image: it looks like white areas were added on both sides, and in some images the left and right blank areas are not the same size. I would like to know how you did this.

The example is as follows: (WOMEN/Blouses_Shirts/id_00000001/02_1_front.jpg)

[images: the high-resolution version and the 256x256 version of 02_1_front]

Thanks again!

I found trans_keypoins in data/fashion_data.py:
keypoints[:,0] = (keypoints[:,0]-40)
Does this mean that if I want to train at 512x352, I can use the same pose.txt as for 256, just without the keypoints[:,0]-40 shift?
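
For context, a purely illustrative sketch of that coordinate shift (this is not the actual body of trans_keypoins; the -1 missing-keypoint marker is an assumption):

```python
import numpy as np

def shift_keypoints_x(keypoints, offset=40):
    """Shift keypoint x-coordinates by `offset`, leaving keypoints marked as
    missing (assumed to be -1) untouched. Illustrative only; the real
    trans_keypoins in data/fashion_data.py may do more than this."""
    keypoints = np.asarray(keypoints, dtype=float).copy()
    missing = keypoints == -1                   # assumed missing-value marker
    keypoints[:, 0] = keypoints[:, 0] - offset  # the -40 shift from the snippet
    keypoints[missing] = -1                     # restore missing markers
    return keypoints
```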

> I noticed that the high-resolution image is different from the 256 image: it looks like white areas were added on both sides, and in some images the left and right blank areas are not the same size. I would like to know how you did this.

Have you used the prepare_data.py file to create the lmdb dataset? Could you also let me know how you obtained the first image (the low-res one)? It would help me debug, as I think there should not be any white space on both sides.

> I found trans_keypoins in data/fashion_data.py: keypoints[:,0] = (keypoints[:,0]-40) Does this mean that if I want to train at 512x352, I can use the same pose.txt as for 256, just without the keypoints[:,0]-40 shift?

The dataloader code would remain the same. You just need to create a new lmdb using prepare_data.py. You can set your required image size in the --sizes argument while running this file.
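
For illustration, a generic sketch of what building a resized-image lmdb involves (this is not the repo's prepare_data.py; the key layout, folder names, and output path below are assumptions, so use the actual script with its --sizes argument instead):

```python
import io
from pathlib import Path

import lmdb
from PIL import Image

# Generic sketch only: resize every image and store its JPEG bytes in an lmdb.
# The key scheme and paths are assumptions, not taken from prepare_data.py.
size = (352, 512)  # (width, height) for 512x352 training
env = lmdb.open("fashion_512x352.lmdb", map_size=1 << 40)

with env.begin(write=True) as txn:
    paths = sorted(Path("img_highres").rglob("*.jpg"))
    for i, path in enumerate(paths):
        img = Image.open(path).convert("RGB").resize(size, Image.BICUBIC)
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=95)
        txn.put(f"{i:07d}".encode(), buf.getvalue())
    txn.put(b"length", str(len(paths)).encode())

env.close()
```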

Thanks, I created a new lmdb using prepare_data.py and I am now training successfully. By the way, how many epochs did you train for at 512x352?

200 epochs | 300k iterations, with a batch size of 8 on 8 GPUs.

THANKS!