Details about Masks
Hey, great work!
Could you please elaborate on the face masks? I read the paper (not thoroughly, to be honest) and went through the code, but I couldn't find any clear example or information about them.
What model do you use to generate face masks?
There are face, hair, and mouth masks in the segmentation encoder module, but a clear example would be awesome.
```python
import numpy as np

# `parse` is a single-channel face parsing map; these hold the per-region masks
face_map = np.zeros([parse.shape[0], parse.shape[1]])
mouth_map = np.zeros([parse.shape[0], parse.shape[1]])
hair_map = np.zeros([parse.shape[0], parse.shape[1]])
```
Well, we use the ground truth face segmentation labels provided by CelebAMask-HQ.
One example segmentation is shown here, with the assigned labels:
| Label list | | |
| --- | --- | --- |
| 0: 'background' | 1: 'skin' | 2: 'l_brow' |
| 3: 'r_brow' | 4: 'l_eye' | 5: 'r_eye' |
| 6: 'eye_g' | 7: 'l_ear' | 8: 'r_ear' |
| 9: 'ear_r' | 10: 'nose' | 11: 'mouth' |
| 12: 'u_lip' | 13: 'l_lip' | 14: 'neck' |
| 15: 'neck_l' | 16: 'cloth' | 17: 'hair' |
| 18: 'hat' | | |
Alternatively, one may use a pre-trained face parsing network to generate masks for private data.
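In case a concrete example helps, here is a minimal sketch of how such a single-channel label map (CelebAMask-HQ ground truth or the output of a parsing network) can be turned into the face/mouth/hair maps quoted above. The helper name `build_region_maps` and the particular set of ids grouped into the face region are assumptions, not the repo's exact code:

```python
import numpy as np

def build_region_maps(parse):
    """Turn a single-channel label map (ids as in the table above) into
    binary face / mouth / hair maps.
    The choice of which ids count as 'face' is an assumption; adjust as needed."""
    face_ids = [1, 2, 3, 4, 5, 6, 10, 12, 13]  # skin, brows, eyes, eye_g, nose, lips
    mouth_id, hair_id = 11, 17

    face_map = np.zeros(parse.shape[:2], dtype=np.uint8)
    mouth_map = np.zeros(parse.shape[:2], dtype=np.uint8)
    hair_map = np.zeros(parse.shape[:2], dtype=np.uint8)

    face_map[np.isin(parse, face_ids)] = 255
    mouth_map[parse == mouth_id] = 255
    hair_map[parse == hair_id] = 255
    return face_map, mouth_map, hair_map
```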
Hi! May I ask about CelebAMask-HQ? There seem to be no ground-truth segmentation labels for the entire image in the downloaded dataset, only separate masks for the different regions (lip, neck, etc.) of the same image. At inference, a single mask label is needed, right? Would you mind explaining this more precisely?
You're right, CelebAMask-HQ only has separate masks for the different regions (lip, neck, etc.).
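For what it's worth, here is a rough sketch of merging those separate region masks into a single label map. The file layout assumed here (CelebAMask-HQ-mask-anno with names like 00000_skin.png) is my assumption about how the dataset is usually organized; adjust paths and names to the actual download:

```python
import os
import cv2
import numpy as np

# Region names in the same order as the label table above (label = index + 1).
REGIONS = ['skin', 'l_brow', 'r_brow', 'l_eye', 'r_eye', 'eye_g', 'l_ear',
           'r_ear', 'ear_r', 'nose', 'mouth', 'u_lip', 'l_lip', 'neck',
           'neck_l', 'cloth', 'hair', 'hat']

def merge_region_masks(anno_dir, image_index, size=512):
    """Merge per-region CelebAMask-HQ masks into one single-channel label map."""
    label_map = np.zeros((size, size), dtype=np.uint8)  # 0 = background
    for label, name in enumerate(REGIONS, start=1):
        path = os.path.join(anno_dir, f'{image_index:05d}_{name}.png')
        if not os.path.exists(path):
            continue  # not every region exists for every image
        mask = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        label_map[mask > 0] = label  # later regions overwrite earlier ones
    return label_map
```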
At inference, we use encode_segmentation_rgb to generate an RGB mask map, of which only face_map is used.
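If it helps, a sketch of what that step presumably looks like (not the exact implementation): the three binary maps are stacked into an H×W×3 mask, and only the face channel is kept. `build_region_maps` refers to the sketch earlier in this thread:

```python
import numpy as np

def encode_segmentation_rgb_sketch(parse):
    """Sketch only: stack face / mouth / hair maps into an RGB-style mask.
    build_region_maps is the helper sketched earlier in this thread."""
    face_map, mouth_map, hair_map = build_region_maps(parse)
    mask_rgb = np.stack([face_map, mouth_map, hair_map], axis=2)  # H x W x 3
    face_only = mask_rgb[..., 0]  # only the face channel is used at inference
    return mask_rgb, face_only
```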