How to get the partial UV texture map?
Hi! Thanks for releasing the code.
I have a few questions on how to get the partial UV texture map.
In Section 3.1 of the paper, the IUV map of an input image is predicted using the ResNet-101 based variant of DensePose, and "For easier mapping, the 24 part-specific UV maps are combined to form a single UV Texture map Ts in the format provided in the SURREAL dataset [53] through a pre-computed lookup table." I am trying to train the proposed model on my own dataset and have already obtained the IUV maps, but I do not know how to implement this mapping operation to get the partial UV texture map described in the paper. Could you please provide some demo code, or point to another GitHub repo, that shows how to get the partial UV texture map?
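To make the question concrete, here is a rough sketch of what I imagine the mapping operation looks like; the lookup table `lut` (its name, shape, and format) is purely my guess:

```python
import numpy as np

def iuv_to_surreal_texture(image, iuv, lut, tex_size=512):
    """Scatter image pixels into a single SURREAL-style partial UV texture map.

    image: (H, W, 3) uint8 RGB input image
    iuv:   (H, W, 3) DensePose output; channel 0 = part index I in [0, 24],
           channels 1-2 = part-local U, V quantized to [0, 255]
    lut:   hypothetical pre-computed table of shape (25, 256, 256, 2) mapping
           (part, u, v) -> integer (x, y) atlas coordinates in [0, tex_size)
    """
    texture = np.zeros((tex_size, tex_size, 3), dtype=np.uint8)
    part, u, v = iuv[..., 0], iuv[..., 1], iuv[..., 2]
    fg = part > 0                              # skip background pixels (I == 0)
    xy = lut[part[fg], u[fg], v[fg]]           # (N, 2) target atlas coordinates
    texture[xy[:, 1], xy[:, 0]] = image[fg]    # scatter visible pixels only
    return texture
```

Is this roughly the right idea, and if so, how is the lookup table itself computed?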
Thank you! :)
Hi, I also have some questions about the pre-processing of the dataset.

- Why, and how, are the clothing (apparel) images used? I guess they are used for virtual try-on, but I am still confused about the `_map_source_apparel_on_target` function in `feature_render.py`. If the apparel images are RGB clothing images from the dataset, how can the source texture be rendered with a clothing image that has no UV coordinates? My guess is that the apparel images are also pre-processed and mapped into UV texture maps, is that right?
- According to the code of `_map_source_apparel_on_target` (`feature_render.py`, line 123), `background_mask` is extracted from the `I` component of the IUV map where `I == 0`, and `apparel_mask` is extracted from the `I` component for the part indices 2, 15, 16, 17, 18, 19, 20, and 22 (by the way, what are the semantic meanings of those body parts?). `identity_mask` is obtained by applying `torch.logical_not()` to `apparel_mask` (i.e., `identity_mask = 1 - apparel_mask`). Then `identity_masked = target_image * identity_mask * background_mask` and `apparel_masked = mapped_source_feature * apparel_mask * background_mask`, and the function returns `mapped_apparel_on_target = apparel_masked + identity_masked`. However, `background_mask` is actually a foreground mask (because of the `torch.logical_not()`), and `apparel_mask` masks out the foreground regions that do not belong to the clothing-covered body parts. So it seems the first term selects the foreground body regions without clothing, the second term selects the body parts covered by clothing, and the two are merged together. Is that correct? Could you please clarify these operations, and explain how the input apparel (clothing) image can be merged directly into the source feature maps? I have written out my reading of the masking logic as code below.
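To check my understanding, here is the logic as I read it (a minimal sketch; variable names follow `feature_render.py`, but the tensor shapes are my assumption):

```python
import torch

# Part indices treated as clothing in _map_source_apparel_on_target.
APPAREL_PARTS = [2, 15, 16, 17, 18, 19, 20, 22]

def map_source_apparel_on_target(mapped_source_feature, target_image, target_iuv):
    """My reading of _map_source_apparel_on_target (feature_render.py, line 123).

    mapped_source_feature, target_image: (B, 3, H, W) float tensors
    target_iuv: (B, 3, H, W), where channel 0 holds the part index I.
    """
    part = target_iuv[:, 0:1]  # (B, 1, H, W)
    # Despite the name, this is a *foreground* mask: 1 wherever I != 0.
    background_mask = torch.logical_not(part == 0).float()
    # 1 on the body parts that the apparel should cover. Each pixel has
    # exactly one part index, so the sum of indicators stays in {0, 1}.
    apparel_mask = sum((part == p).float() for p in APPAREL_PARTS)
    identity_mask = 1 - apparel_mask  # body regions kept from the target image
    identity_masked = target_image * identity_mask * background_mask
    apparel_masked = mapped_source_feature * apparel_mask * background_mask
    return apparel_masked + identity_masked
```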
Thank you! :)
Hi Andrew,
Please take a look at the links below for a solution to your texture-map problem. The first is a set of notebooks showing different applications of DensePose; in the second you will see how to get the texture map, and someone has also shared a library for it. You can read that library's source to understand how the whole thing works.
https://github.com/facebookresearch/DensePose/tree/master/notebooks
facebookresearch/DensePose#116
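In case the links move, the core idea in those notebooks boils down to something like this (a minimal sketch rather than their exact code; the 200 px cell size is just a common choice):

```python
import numpy as np

def partial_texture_from_iuv(image, iuv, cell=200):
    """Build a 24-part partial texture atlas (a 6 x 4 grid of cells) from an
    RGB image and its DensePose IUV map.

    image: (H, W, 3) uint8 RGB
    iuv:   (H, W, 3); channel 0 = part index I (0 = background),
           channels 1-2 = U, V in [0, 255]
    """
    atlas = np.zeros((6 * cell, 4 * cell, 3), dtype=np.uint8)
    I, U, V = iuv[..., 0], iuv[..., 1], iuv[..., 2]
    for part in range(1, 25):
        mask = I == part
        if not mask.any():
            continue  # this body part is not visible in the image
        # Part-local UV in [0, 255] -> pixel coordinates inside one cell.
        u = (U[mask].astype(np.float32) / 255.0 * (cell - 1)).astype(int)
        v = (V[mask].astype(np.float32) / 255.0 * (cell - 1)).astype(int)
        row, col = divmod(part - 1, 4)  # 24 cells laid out as 6 rows x 4 columns
        atlas[row * cell + v, col * cell + u] = image[mask]
    return atlas
```

Only the pixels actually visible in the input image get filled in, which is exactly why the result is a *partial* texture map.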
Hi @rubelchowdhury20, thank you for your reply! I have obtained the partial UV texture map by following the notebooks above, thanks a lot! However, I notice that the sizes of the input image and the IUV map can influence the quality of the partial UV texture map. What sizes of input images and corresponding UV texture maps did you use in your experimental setting? And does the quality of the partial UV texture map affect the final performance?
Thank you! :)