Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information
ECCV 2024
Given a Structure-from-Motion model, we aim to learn the camera viewpoint that maximizes the accuracy of visual localization.
Our method first samples camera locations and orientations, computes the best-visibility orientation for each location,
and then learns active viewpoints with a Multi-Layer Perceptron encoder. The illustration above shows our full pipeline,
which predicts active viewpoints for visual localization and is embedded in a planning framework.
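Since the code is not yet released, the sketch below is only a rough, hypothetical illustration of the pipeline described above: sample candidate poses, label each location with its best-visibility orientation, and fit an MLP to predict that orientation. All function names, network sizes, and the visibility heuristic are our own assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the described pipeline (not the released code).
import numpy as np
import torch
import torch.nn as nn

def sample_poses(bounds, n_locations, n_orientations):
    """Sample camera locations inside the scene bounds and uniformly
    spaced yaw orientations shared by all locations."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    locations = lo + np.random.rand(n_locations, 3) * (hi - lo)
    yaws = np.linspace(0.0, 2 * np.pi, n_orientations, endpoint=False)
    return locations, yaws

def visibility_score(location, yaw, points, fov=np.deg2rad(60)):
    """Count SfM points inside the camera's horizontal field of view.
    A stand-in for a full visibility check (occlusion, image bounds)."""
    view_dir = np.array([np.cos(yaw), np.sin(yaw), 0.0])
    rays = points - location
    rays /= np.linalg.norm(rays, axis=1, keepdims=True) + 1e-9
    return int(np.sum(rays @ view_dir > np.cos(fov / 2)))

def best_orientation(location, yaws, points):
    """Self-supervised label: the yaw with the highest visibility score."""
    scores = [visibility_score(location, y, points) for y in yaws]
    return yaws[int(np.argmax(scores))]

class ViewpointMLP(nn.Module):
    """MLP mapping a 3D location to a viewing direction (sin/cos of yaw)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )
    def forward(self, x):
        return self.net(x)

# Training loop on visibility-based labels; no manual annotation needed.
points = np.random.rand(5000, 3) * 10          # placeholder SfM point cloud
locs, yaws = sample_poses(([0, 0, 0], [10, 10, 2]), 256, 36)
labels = torch.tensor([best_orientation(l, yaws, points) for l in locs],
                      dtype=torch.float32)

model = ViewpointMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.tensor(locs, dtype=torch.float32)
y = torch.stack([torch.sin(labels), torch.cos(labels)], dim=1)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

At inference, such a model would take any candidate location from a planner and return a predicted best viewing direction, which is the "active viewpoint" the planning framework consumes.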
We will release the code soon; we need time to clean it up and make it understandable! For now, you can check our preprint (accepted at ECCV 2024).