czq142857/BSP-NET-original

Experiment on Semantic Segmentation (ShapeNetPart)

SilvioGiancola opened this issue · 3 comments

Hi @czq142857 ,
Thank you for sharing your great work!
I was looking into the semantic segmentation experiments from your paper. Have you released the code to replicate those results? I would like to try it out but could not find it. Also, do you learn to classify convexes into parts, or were the convexes manually assigned to the most representative part? Any detail would be appreciated.
Thanks,
Best,

Hi Silvio,

The segmentation experiments were done using the same code provided in this repo. The only difference was that for the segmentation experiments we trained an individual AE model for each category, rather than a single model for all categories.

Note that the network does not output semantically segmented shapes; it only outputs shapes that are co-segmented by the different convexes. The semantic grouping was done as a post-processing step, either manually (Fig. 6 in our paper) or with a script that uses some ground-truth segmented shapes as references (Fig. 7 in our paper).
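For illustration, the automatic grouping can be done by majority voting: each convex gets the part label that most of its points carry in the ground-truth reference shapes. A minimal sketch of that idea (hypothetical code, not our actual script; `convex_ids` and `gt_labels` are assumed inputs):

```python
import numpy as np

def group_convexes_by_majority_vote(convex_ids, gt_labels, num_parts):
    """Map each convex to a semantic part by majority vote.

    convex_ids : (N,) int array, index of the convex covering each point
                 of the reference shapes
    gt_labels  : (N,) int array, ground-truth part label of each point
                 (assumed 0-indexed here)
    num_parts  : number of semantic parts in the category
    """
    mapping = {}
    for c in np.unique(convex_ids):
        # Count how often each ground-truth label appears inside convex c
        votes = np.bincount(gt_labels[convex_ids == c], minlength=num_parts)
        mapping[int(c)] = int(np.argmax(votes))
    return mapping  # convex index -> semantic part label
```

Because the same convexes tend to cover corresponding regions across shapes of a category, the resulting mapping can then be applied to every shape in that category.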

Please check our paper for details; the "Segmentation" part is in the top-left corner of page 7.
https://arxiv.org/abs/1911.06971

Best,
Zhiqin

Hi Zhiqin,

Thank you for your answer; it clarified the doubt I had when reading the Segmentation section of your paper.

I noticed that the dataset you provide in this repo only contains reconstruction information; I could not find the part label for each point/voxel. Do you happen to have a similar dataset containing the part labels? Or could you maybe share the code you used to create your h5 files from the ShapeNetPart dataset? Are the per-point part labels from ShapeNetPart (https://www.shapenet.org/download/shapenetsem) aligned with your voxel data?

Any insight to link the part labels with the data you provide would be appreciated.

Best,
Silvio

Hi Silvio,

Please find the segmentations here in the form of point clouds and per-point labels:
https://cs.stanford.edu/~ericyi/project_page/part_annotation/
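A minimal loading sketch, assuming the usual layout of that archive (one `.pts` file of xyz coordinates per shape, with a matching `.seg` file holding one integer label per point):

```python
import numpy as np

def load_shape(pts_path, seg_path):
    # Each line of the .pts file is "x y z"; each line of the .seg file
    # is a single integer part label for the corresponding point.
    points = np.loadtxt(pts_path, dtype=np.float32)  # (N, 3)
    labels = np.loadtxt(seg_path, dtype=np.int64)    # (N,)
    assert points.shape[0] == labels.shape[0]
    return points, labels
```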

The data should be aligned. You can visualize the point clouds and the voxels to be sure.
The dataset in my repo was aligned by normalizing each shape so that the diagonal of the shape's bounding box has unit length.
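A minimal sketch of that normalization (hypothetical code, not taken from this repo):

```python
import numpy as np

def normalize_to_unit_diagonal(points):
    # Center the shape on its bounding box, then scale so that the
    # bounding-box diagonal has length 1.
    vmin, vmax = points.min(axis=0), points.max(axis=0)
    center = (vmin + vmax) / 2.0
    diagonal = np.linalg.norm(vmax - vmin)
    return (points - center) / diagonal
```

Applying the same transform to the ShapeNetPart point clouds should bring them into the same frame as the voxel data, which you can verify visually as described above.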

I did not create hdf5 files from the point cloud segmentation data, since the data was only used for evaluation, not training. However, you may find some useful information on working with this data in this repo:
https://github.com/czq142857/BAE-NET

Best,
Zhiqin