AliaksandrSiarohin/video-preprocessing

Why not just crop the faces with their meta data from VoxCeleb since we already have the face bboxes?

Cold-Winter opened this issue · 3 comments

Thank you for the elegant implementation. It helps a lot!

I am wondering why you need to re-detect faces in the VoxCeleb dataset, since the dataset already ships face bounding-box metadata. Are you trying to crop tighter face bboxes instead of using the provided ones? What happens if we train the first-order model on faces cropped with the provided boxes?

Any update on this?

Same question

Same question. Besides, the provided bounding box doesn't seem to be square.
For example, one bounding box spans (648, 48) to (1018, 553), i.e., it is 370 × 505 pixels. However, this code directly resizes that rectangular crop to a square, as done here.
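One common fix (a sketch, not the repo's code) is to grow the shorter side of the rectangular box until it is square before resizing, so the face isn't stretched. The frame size `1280 × 720` below is an assumption for illustration; `make_square` is a hypothetical helper, not part of this repository:

```python
def make_square(bbox, frame_w, frame_h):
    """Grow the shorter side of a (left, top, right, bottom) bbox so that
    width == height, keeping the square centered on the original box and
    clamped inside the frame."""
    left, top, right, bottom = bbox
    w, h = right - left, bottom - top
    side = max(w, h)
    # Center the square on the original box center, then clamp to the frame.
    cx, cy = (left + right) / 2, (top + bottom) / 2
    new_left = int(max(0, min(cx - side / 2, frame_w - side)))
    new_top = int(max(0, min(cy - side / 2, frame_h - side)))
    return new_left, new_top, new_left + side, new_top + side

# The 370 x 505 box from the comment above:
square = make_square((648, 48, 1018, 553), frame_w=1280, frame_h=720)
print(square)  # -> (580, 48, 1085, 553), a 505 x 505 square
```

Resizing this square crop to the training resolution then preserves the face's aspect ratio.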
