We present an algorithm to recover the 3D trajectory of a camera using feature-based sparse SLAM with a single camera (also called MonoSLAM).
1. Feature extraction We use the feature detectors available in the OpenCV library (goodFeaturesToTrack and the ORB feature detector).
2. Feature Matching We compute pairwise distances between all descriptors using a brute-force approach. For each point we find its 2 nearest neighbours (kNN matching) and eliminate ambiguous points/pairs with Lowe's ratio test. In the next step, we estimate the essential matrix using the 8-point algorithm. For that we convert the image coordinates to camera coordinates and check for a minimum of 8 feature points. We eliminate outliers using thresholding and RANSAC.
3. Pose estimation We estimate the pose of the current frame from the essential matrix. See reference [2].
4. World coordinates We recover the world coordinates of the matched feature points by triangulation, taking the two projection matrices as input.
5.1 Display2D We use pygame and OpenGL to display the feature points and their correspondences across frames.
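A minimal sketch of the 2D overlay, assuming each current feature point is drawn as a dot with a line back to its position in the previous frame. The function name and colors are illustrative, not the project's actual code.

```python
import pygame

def draw_matches(surface, pts_prev, pts_curr):
    """Draw feature correspondences on a pygame surface: a red line from
    each point's previous position to its current one, and a green dot
    at the current position. (Hypothetical helper.)"""
    for (x1, y1), (x2, y2) in zip(pts_prev, pts_curr):
        pygame.draw.line(surface, (255, 0, 0),
                         (int(x1), int(y1)), (int(x2), int(y2)), 1)
        pygame.draw.circle(surface, (0, 255, 0), (int(x2), int(y2)), 3)
```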
5.2 Display3D We use pangolin to display a 3D map of the camera poses and world coordinates of the feature points.
Driving stock video
- opencv
- openGL
- pygame
- pangolin
- an alternative link for pangolin, which has fewer modules
- multiprocessing
- matplotlib
- numpy
- skimage
The code is written in Python 3.
See the respective links in the code dependencies section.
python3 main.py
-
A nice GitHub repository implementing monocular SLAM, with a YouTube video by the same author explaining the code.
-
A paper describing how to compute the rotation matrix and translation vector from the essential matrix.
-
A nice paper by David G. Lowe introducing the ratio test.