training on own data
Ademord opened this issue · 0 comments
Hello, I just found out about your repo and I am interested in using it on my own data. I had some questions, but I solved those on my own. I was wondering if you could help me use RoutedFusion in a real-time setup, where I feed camera frames into an accumulated scan? Something in this direction, maybe:
```python
import torch
from torch_geometric.data import Data

# extract_pcd, transform, and model are my own helpers / a loaded model
pos = torch.from_numpy(extract_pcd(depth_image)[0])
data = Data(pos=pos, batch=torch.zeros(len(pos), dtype=torch.long))  # batch indices must be long
data = transform(data)
model.set_input(data, device="cuda")
output = model.forward()
```
For some context, I am trying to find a way to motivate an RL agent (in Unity) to discover a point cloud, i.e. scan it. So I need a way to store the point cloud, accumulate/aggregate it as new scans arrive, and then reward the agent in proportion to how many "new points" arrive. This last part is where I need to figure out which method to use to store a point cloud and augment it with new scans.
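For the "reward new points" part, here is a minimal sketch of one way to do it (my own suggestion, not something from this repo): bin incoming points into a voxel-hash set and use the count of newly occupied voxels as the reward. This also keeps memory bounded, since densely rescanned regions add nothing.

```python
import numpy as np

class PointCloudMemory:
    """Accumulate scans in a voxel-hash set; reward = newly occupied voxels."""

    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        self.occupied = set()

    def add_scan(self, points):
        # points: (N, 3) array from the latest depth frame, in world coordinates
        keys = np.floor(points / self.voxel_size).astype(np.int64)
        new = 0
        for key in map(tuple, keys):
            if key not in self.occupied:
                self.occupied.add(key)
                new += 1
        return new  # reward signal: number of newly discovered voxels
```

Scanning the same region twice then yields zero reward, which is exactly the exploration incentive: `add_scan` on a repeated frame returns 0.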
I am still pretty confused by all the methods that exist. I found tsdf-fusion but have yet to try it in my real-time setup. Still left to try on my list:
- tsdf-fusion
- RoutedFusion
- panoptic segmentation (would reward based on yes/no detections rather than point cloud discovery)
- PyTorch3D-based point cloud registration
- P4Transformer
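Of the options above, plain TSDF fusion is probably the simplest to prototype in a real-time loop. A minimal sketch of one integration step in pure NumPy, assuming a fixed dense grid, known camera intrinsics `K`, and a known camera-to-world `pose` (none of this is the API of tsdf-fusion or RoutedFusion, just the underlying idea):

```python
import numpy as np

def tsdf_integrate(tsdf, weight, depth, K, pose, voxel_size, origin, trunc):
    """Fuse one depth frame into a dense TSDF grid via a weighted running average."""
    dims = tsdf.shape
    ii, jj, kk = np.meshgrid(*(np.arange(d) for d in dims), indexing="ij")
    # Voxel centers in world coordinates
    pts = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size + origin
    # World -> camera coordinates
    cam = (np.linalg.inv(pose) @ np.c_[pts, np.ones(len(pts))].T).T[:, :3]
    z = cam[:, 2]
    z_safe = np.where(z > 0, z, 1.0)  # avoid dividing by zero behind the camera
    uv = (K @ cam.T).T
    u = np.round(uv[:, 0] / z_safe).astype(int)
    v = np.round(uv[:, 1] / z_safe).astype(int)
    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros(len(pts))
    d[valid] = depth[v[valid], u[valid]]
    sdf = d - z                        # signed distance along the viewing ray
    valid &= (d > 0) & (sdf > -trunc)  # skip voxels far behind the observed surface
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
    ft, fw = tsdf.reshape(-1), weight.reshape(-1)  # flat views into the grids
    idx = np.where(valid)[0]
    ft[idx] = (ft[idx] * fw[idx] + tsdf_new[idx]) / (fw[idx] + 1.0)
    fw[idx] += 1.0
```

Calling this once per incoming frame gives the accumulation; a mesh can then be extracted from the zero crossing, e.g. with `skimage.measure.marching_cubes`.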
I tried OpenSfM and OpenMVG, but they didn't produce results, and as far as I understand they are offline methods anyway.