Question about Bayesian update.
canglangzhige opened this issue · 2 comments
In the paper it is mentioned that "we use a Bayesian update to update the label probabilities at each voxel". But in the experimental part, I saw that the point cloud is generated directly from the segmented RGB image and the depth image and then passed to voxblox for semantic reconstruction, and I did not see the Bayesian update being used. My understanding is that the experimental part simply replaces ordinary RGB images with semantic images (roughly as sketched below) and does not perform any semantic fusion step. Is there an experiment covering this part?
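For reference, this is a minimal sketch of what I understand the input to voxblox to be: the segmented RGB image and the depth image back-projected into a semantically colored point cloud. The function name and the camera intrinsics (fx, fy, cx, cy) are placeholders of mine, not values from the demo.

```python
# Minimal sketch: back-project a depth image with per-pixel semantic colors
# into a point cloud (the kind of input voxblox consumes).
import numpy as np

def semantic_point_cloud(depth, semantic_rgb, fx, fy, cx, cy):
    """depth: HxW in meters; semantic_rgb: HxWx3 label colors."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0.0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)   # N x 3 positions in the camera frame
    colors = semantic_rgb[valid]           # N x 3 semantic label colors
    return points, colors
```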
I have a few other questions:
- "accuracy" and "completeness" are used to assess the quality of the 3D meshes. What software are you using? Can you provide the relevant code?
- Ground truth is used to evaluate the mesh in Section III-C, but I do not find a GT point cloud in the kimera_semantics_demo.bag.
- "mIoU" and "Acc" are used to analyze the semantic performance. But these two metrics are generally used to evaluate two-dimensional semantic segmentation. How do you use it to evaluate three-dimensional semantic information?
The Bayesian update is used for the 3D fusion, I think. The semantic point cloud is generated from the semantic and RGB images; the framework then raycasts it into the volume and updates the label probabilities in each voxel with a Bayesian update.
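To make that concrete, here is a minimal sketch of the idea (not the actual Kimera-Semantics C++ integrator): each voxel stores a probability vector over the semantic labels, and every raycast observation multiplies it by a simple symmetric likelihood and renormalizes. The label count and match probability below are placeholder values.

```python
# Minimal sketch of a per-voxel Bayesian label update.
import numpy as np

NUM_LABELS = 21     # assumption: number of semantic classes
MATCH_PROB = 0.8    # assumption: P(observed label | true label matches)

def init_voxel_probs():
    # Uniform prior over labels for a freshly allocated voxel.
    return np.full(NUM_LABELS, 1.0 / NUM_LABELS)

def bayesian_label_update(voxel_probs, observed_label):
    """Posterior proportional to likelihood * prior, with a symmetric noise model."""
    likelihood = np.full(NUM_LABELS, (1.0 - MATCH_PROB) / (NUM_LABELS - 1))
    likelihood[observed_label] = MATCH_PROB
    posterior = likelihood * voxel_probs
    return posterior / posterior.sum()
```

In practice this kind of update is often done in log space to avoid underflow, and the label shown for a voxel is the argmax of its stored probability vector.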