A work-in-progress implementation of visual odometry in numpy, currently aided by some OpenCV functions. The plan is to remove these and implement them purely in numpy; a rough numpy sketch of two of the simpler items appears after the list.
- goodFeaturesToTrack
- ransac
- FundamentalMatrixTransform
- Brute-Force Matcher (BFMatcher)
- Compute ORB feature descriptors
- triangulate
- extract_pose (needs refactoring for resolving ambiguities in the recovered pose)
- fundamentalToEssential
- make_homogeneous
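As a rough illustration of the kind of pure-numpy replacements planned, here is a minimal sketch of make_homogeneous and fundamentalToEssential. The signatures and variable names are illustrative assumptions rather than the repo's actual API; K is assumed to be the camera intrinsic matrix.

```python
import numpy as np

def make_homogeneous(points):
    # Nx2 pixel coordinates -> Nx3 homogeneous coordinates (append a column of ones).
    return np.hstack([points, np.ones((points.shape[0], 1))])

def fundamental_to_essential(F, K):
    # E = K^T F K, then project E back onto the essential-matrix manifold
    # by forcing its singular values to (1, 1, 0).
    E = K.T @ F @ K
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```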
conda env create -f environment.yml
conda activate pyvo
My suggestion is to open src/visual_odometry.ipynb in VS Code. Any web browser running Jupyter Notebook will also work, but I find that VS Code makes the nicest notebook environment.
Press 'q' to end the demo early, before the full sequence finishes.
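The quit key is typically handled with OpenCV's waitKey polling; a minimal sketch of that pattern (the notebook's actual loop may differ) looks like this:

```python
import cv2

while True:
    # ... process the next frame and cv2.imshow() the annotated image ...
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break  # 'q' pressed: stop the demo early
cv2.destroyAllWindows()
```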
The red and green points represent the detected features in the current and previous frames, respectively. A blue line is drawn between each matched pair of points.
Red is the ground truth and green is the predicted pose.
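A rough sketch of how such a feature overlay can be drawn with OpenCV (the function name and argument layout are illustrative assumptions, not the notebook's exact code):

```python
import cv2

def draw_tracks(frame, prev_pts, curr_pts):
    # prev_pts / curr_pts: matched Nx2 pixel coordinates from consecutive frames.
    for (px, py), (cx, cy) in zip(prev_pts, curr_pts):
        cv2.circle(frame, (int(cx), int(cy)), 3, (0, 0, 255), -1)  # current frame: red (BGR)
        cv2.circle(frame, (int(px), int(py)), 3, (0, 255, 0), -1)  # previous frame: green
        cv2.line(frame, (int(px), int(py)), (int(cx), int(cy)), (255, 0, 0), 1)  # blue link
    return frame
```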
Open the point cloud in the Open3D viewer app: http://www.open3d.org/download/
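If you prefer to view it from Python instead, a minimal sketch using the open3d package (the file path is a placeholder; substitute whatever path the notebook writes the cloud to):

```python
import open3d as o3d

# Placeholder path -- use the point cloud file produced by the notebook.
pcd = o3d.io.read_point_cloud("pointcloud.ply")
o3d.visualization.draw_geometries([pcd])
```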