2d Semantic segmentation labels
weixuansun opened this issue · 4 comments
I am using the 2D semantic maps in the dataset. The paper mentions that the 2D semantic labels are projected from the 3D semantic point cloud, but I am not sure how this projection was implemented in detail. For example, in the image below, the white bookcase outside the door is not included in the label, which looks like an annotation error. Was the point cloud outside the room excluded when generating the 2D semantic labels?
The 2D semantic labels were created in Blender by rendering directly from the semantic mesh.
So long as the objects are unoccluded, they should be visible.
How were these black + white images created? Also, could you provide the name of the corresponding RGB and semantic images?
Thank you for your quick reply. I found I had made a mistake when trying to create a binary mask for a single class in an image. I will close the issue.
By the way, what is an efficient way to generate a mask for one class in a semantic map, like semantic-pretty? Should we loop over the colors in semantic_label.json to find the matching color?
the above image's name is camera_08aa47684e3948558b6d23cdc7ec31b3_office_21_frame_3_domain_rgb.png
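For what it's worth, here is a minimal sketch of extracting a per-class binary mask without looping over every color. It assumes the convention used in the 2D-3D-S assets, where each pixel's RGB color encodes an integer index into the label list of semantic_labels.json as `R*256*256 + G*256 + B`, and that label names start with the class name (e.g. `bookcase_1_office_21_1`); the helper names `color_to_index` and `class_mask` are my own, and you should verify the encoding against your copy of the dataset:

```python
import numpy as np

def color_to_index(img):
    """Map an HxWx3 uint8 semantic image to HxW integer label indices.

    Assumes the 2D-3D-S color encoding: index = R*256*256 + G*256 + B.
    """
    img = img.astype(np.int64)
    return img[..., 0] * 256 * 256 + img[..., 1] * 256 + img[..., 2]

def class_mask(img, labels, class_name):
    """Binary mask of all pixels whose label name starts with class_name.

    `labels` is the list loaded from semantic_labels.json.
    """
    idx = color_to_index(img)
    # Per-label boolean: does this label belong to the requested class?
    wanted = np.array([name.split("_")[0] == class_name for name in labels])
    # Guard against encoded indices outside the label list.
    valid = idx < len(wanted)
    mask = np.zeros(idx.shape, dtype=bool)
    mask[valid] = wanted[idx[valid]]
    return mask

# Tiny synthetic example: a 2x2 image using hypothetical label indices 0 and 1.
labels = ["wall_1_office_21_1", "bookcase_1_office_21_1"]
img = np.array([[[0, 0, 0], [0, 0, 1]],
                [[0, 0, 1], [0, 0, 0]]], dtype=np.uint8)
print(class_mask(img, labels, "bookcase"))
```

Because the index lookup is vectorized with NumPy, this avoids a Python-level loop over pixels or colors; for a real image you would load it with e.g. `np.asarray(PIL.Image.open(path))` and the label list with `json.load`.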
I also want to know how to find the matching color. Have you solved this? @weixuansun
Hi, you can refer to issue #6.