martinruenz/maskfusion

How to obtain the 3D semantic map, if offline is ok?

hemath1001 opened this issue · 0 comments

Hi,
Could you give me some more details on how to obtain and save the 3D global semantic map shown in the video (I am not concerned about whether this happens online or offline)? I haven't found a corresponding parameter, so is it done some other way?
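To make the request concrete, here is roughly what I am hoping for, expressed as a sketch. Every name in it is hypothetical; I could not find a real parameter or call for this:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the export I am looking for -- these types and
// functions are my invention, not the actual MaskFusion API.
struct ObjectModel {
    int id;
    std::string label;  // semantic class assigned by Mask-RCNN
    void savePly(const std::string& path) const {
        // would write the object's surfels/mesh to disk
    }
};

// Dump every object model of the global map to its own labelled file.
void exportSemanticMap(const std::vector<ObjectModel>& models) {
    for (const ObjectModel& m : models)
        m.savePly("object_" + std::to_string(m.id) + "_" + m.label + ".ply");
}
```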

Besides, I am not quite clear on a few points. Here is what I think; please correct me if I'm wrong:
After the 3D global semantic map has been reconstructed and a new frame comes in, Mask-RCNN is applied to it and, with the help of the depth map, the segmentation of this frame is obtained. Each of the objects recognized in this frame then tries to find its corresponding 3D model in the 3D global map individually (I'm not sure about this process), and then "If the association is successful the 3D model is updated, otherwise yet another 3D model might be created", as you answered in another issue.
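To check my understanding, here is a minimal sketch of how I imagine that association step, one frame at a time. None of this is your actual code: the types, the overlap test, and the threshold are all my assumptions.

```cpp
#include <vector>

// Hypothetical types, just to make my mental model concrete --
// not the real MaskFusion data structures.
struct Mask2D {};            // one binary object mask (Mask-RCNN + depth)
struct Model3D { int id; };  // one object model in the global map

// Assumed matching score: how well the new 2D mask overlaps the mask
// obtained by projecting an existing 3D model into the current frame.
float overlap(const Mask2D&, const Model3D&) {
    return 0.0f;  // stub; a real version would compute an IoU-like score
}

// My guess at the association logic: each detected mask looks for the
// best-matching existing model; on success that model is updated (fused),
// otherwise a new model might be created.
void associate(const std::vector<Mask2D>& masks, std::vector<Model3D>& models) {
    const float kMinOverlap = 0.3f;  // threshold is purely my assumption
    for (const Mask2D& mask : masks) {
        Model3D* best = nullptr;
        float bestScore = kMinOverlap;
        for (Model3D& model : models) {
            const float score = overlap(mask, model);
            if (score > bestScore) { bestScore = score; best = &model; }
        }
        if (best) {
            // association successful -> fuse this observation into *best
        } else {
            models.push_back(Model3D{static_cast<int>(models.size())});
        }
    }
}
```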
So, when two adjacent frames come in, the segmentation information flows as follows:
segmentation in frame 1 → corresponding 3D model in the reconstructed map (if one exists)
segmentation in frame 2 → corresponding 3D model in the reconstructed map (if one exists)
And if the corresponding 3D model is the same, the object in the two adjacent frames is tracked and mapped. Is that how it works? It seems to require heavy computation, and I'm not quite sure about this. Would traditional feature points help with tracking and mapping?
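Put differently, my picture is that tracking simply falls out of the per-frame association, as in this self-contained toy version (again entirely my own sketch, with the matching stubbed out; only the control flow is the point):

```cpp
#include <vector>

// Toy two-frame flow. A mask carries a precomputed "best model" guess so
// that the association itself can be stubbed; -1 means no match exists yet.
struct Mask { int bestModelGuess; };
struct Model { int id; int observations = 0; };

Model* associate(const Mask& mask, std::vector<Model>& models) {
    for (Model& m : models)
        if (m.id == mask.bestModelGuess) return &m;
    return nullptr;
}

void processFrame(const std::vector<Mask>& masks, std::vector<Model>& map) {
    for (const Mask& mask : masks) {
        if (Model* m = associate(mask, map))
            ++m->observations;  // matched -> the existing model is updated
        else
            map.push_back(Model{static_cast<int>(map.size())});  // new model
    }
}

int main() {
    std::vector<Model> map;      // the reconstructed global map
    processFrame({{-1}}, map);   // frame 1: no match -> model 0 is created
    processFrame({{0}}, map);    // frame 2: matches model 0 -> object tracked
}
```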

Thanks a lot :-D
Have a wonderful day~