hukkelas/DeepPrivacy

output image resolution

jim-bo opened this issue · 1 comment

Hello,
I was wondering how the resolution of the output images is chosen. Is this a parameter of the model that could be trained to output higher-resolution images?

Alternatively, do you think it is reasonable to cut out the face from a higher-resolution image, input this into your program, then re-integrate the result into the original full-resolution image? If so, how much context surrounding the face would be good to include in these subsets?

My goal is to benchmark how this tool works on high-resolution images.

Thanks,
James

Hi, sorry for the late answer!

> I was wondering how the resolution of the output images is chosen. Is this a parameter of the model that could be trained to output higher-resolution images?

This is a parameter chosen before training. Currently it only supports 128x128 faces, but we are looking into training larger models.

> Alternatively, do you think it is reasonable to cut out the face from a higher-resolution image, input this into your program, then re-integrate the result into the original full-resolution image? If so, how much context surrounding the face would be good to include in these subsets?

Yes, we are cutting out faces from higher-resolution images. Our strategy is to crop so that approximately 60% of the extracted image is background (or 40%, I don't remember exactly). How we go from the original face-detection bounding box to the extracted area is shown in the code here.
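
As a rough illustration of that expand-crop-anonymize-paste flow, here is a minimal sketch. The helpers `expand_bbox` and `anonymize_region`, the `face_fraction=0.4` value, and the `model` callable are my assumptions for illustration and not the repository's actual API; see the linked source for the exact logic.

```python
import cv2
import numpy as np

def expand_bbox(x0, y0, x1, y1, im_w, im_h, face_fraction=0.4):
    """Expand a face bounding box so the face covers roughly
    `face_fraction` of the crop area (i.e. ~60% background).
    Hypothetical helper, not DeepPrivacy's actual implementation."""
    w, h = x1 - x0, y1 - y0
    # Scaling each side by sqrt(1 / face_fraction) scales the area
    # by 1 / face_fraction, so the face fills face_fraction of it.
    scale = (1.0 / face_fraction) ** 0.5
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    new_w, new_h = w * scale, h * scale
    # Clamp to the image bounds; near an edge the crop shrinks and
    # the face occupies a larger fraction than the target.
    nx0 = int(max(cx - new_w / 2, 0))
    ny0 = int(max(cy - new_h / 2, 0))
    nx1 = int(min(cx + new_w / 2, im_w))
    ny1 = int(min(cy + new_h / 2, im_h))
    return nx0, ny0, nx1, ny1

def anonymize_region(image, bbox, model, out_size=128):
    """Crop the expanded region, run it through an anonymizer that
    works on out_size x out_size faces, and paste the result back
    into the full-resolution image."""
    x0, y0, x1, y1 = expand_bbox(*bbox, image.shape[1], image.shape[0])
    crop = image[y0:y1, x0:x1]
    small = cv2.resize(crop, (out_size, out_size))
    anonymized = model(small)  # placeholder for the generator call
    # Upscale back to the crop's native size before re-integration.
    restored = cv2.resize(anonymized, (x1 - x0, y1 - y0))
    image[y0:y1, x0:x1] = restored
    return image
```

One consequence of this approach, relevant to benchmarking on high-resolution images: only the 128x128 crop is synthesized, so the pasted-back face is upsampled and may look softer than the surrounding full-resolution background.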