
Monocular visual odometry

A work-in-progress implementation of monocular visual odometry in NumPy. It is currently aided by some OpenCV functions; the plan is to remove these and implement everything in pure NumPy.

Functions to implement in native NumPy

  • goodFeaturesToTrack
  • ransac
  • FundamentalMatrixTransform
  • Brute-Force Matcher (BFMatcher)
  • Compute ORB feature descriptors
  • triangulate
  • extract_pose (needs refactoring to resolve pose ambiguities)
  • fundamentalToEssential (see the NumPy sketch after this list)
  • make_homogeneous (see the NumPy sketch after this list)
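
As a starting point, here is a minimal NumPy-only sketch of two of the simpler items, fundamentalToEssential and make_homogeneous. It assumes F is a 3x3 fundamental matrix and K the 3x3 camera intrinsics; the names and signatures are illustrative and may differ from the notebook.

import numpy as np

def make_homogeneous(points):
    # Append a column of ones: an (N, 2) array of pixel coordinates
    # becomes (N, 3), and (N, 3) world points become (N, 4).
    return np.hstack([points, np.ones((len(points), 1))])

def fundamental_to_essential(F, K):
    # E = K^T F K, then project onto the space of valid essential
    # matrices by forcing the singular values to (1, 1, 0).
    E = K.T @ F @ K
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt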

Setup

conda env create -f environment.yml
conda activate pyvo

My suggestion is to open src/visual_odometry.ipynb in VS Code. Any web browser with Jupyter Notebook will also work, but I find that VS Code makes the nicest notebook environment.

Demo

Press 'q' to end the demo before the full sequence has finished.
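
Under the hood this early exit is the usual OpenCV key-polling pattern. The loop below is only a sketch of that pattern, not the notebook's exact code; the zero-filled frames stand in for whatever annotated frames the demo actually displays.

import cv2
import numpy as np

# Stand-in frames; in the notebook these would be the annotated video frames.
frames = (np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(300))

for frame in frames:
    cv2.imshow("demo", frame)
    # Poll the keyboard for 1 ms each frame; pressing 'q' ends the demo early.
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cv2.destroyAllWindows()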

Feature matches

The red and green points mark the detected features in the current and previous frames, and a blue line is drawn between each matched pair.
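
A drawing routine roughly like the one below produces that overlay; it is a sketch rather than the notebook's exact code, the point-list names are assumptions, and the colours are in OpenCV's BGR order.

import cv2

def draw_matches(frame, pts_curr, pts_prev):
    # pts_curr / pts_prev: matched (x, y) feature locations in the current
    # and previous frame (hypothetical names).
    for (x1, y1), (x2, y2) in zip(pts_curr, pts_prev):
        p1, p2 = (int(x1), int(y1)), (int(x2), int(y2))
        cv2.circle(frame, p1, 3, (0, 0, 255), -1)  # current-frame feature: red
        cv2.circle(frame, p2, 3, (0, 255, 0), -1)  # previous-frame feature: green
        cv2.line(frame, p1, p2, (255, 0, 0), 1)    # match line: blue
    return frame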

Integrated pose plot

Red is the ground-truth trajectory and green is the predicted pose.
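
A plotting helper along these lines produces that kind of comparison; it is a sketch assuming (N, 3) arrays of camera positions and a top-down x-z view, which may differ from the notebook.

import matplotlib.pyplot as plt

def plot_trajectories(gt_xyz, pred_xyz):
    # gt_xyz / pred_xyz: (N, 3) arrays of camera positions (assumed names).
    # Plot a top-down x-z view so the travelled path is visible.
    plt.plot(gt_xyz[:, 0], gt_xyz[:, 2], "r-", label="ground truth")
    plt.plot(pred_xyz[:, 0], pred_xyz[:, 2], "g-", label="predicted")
    plt.axis("equal")
    plt.legend()
    plt.show()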

Point cloud

Open the point cloud in the Open3D viewer app: http://www.open3d.org/download/
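
If you want to export your own triangulated points in a format the viewer can open, something like the following works; the file name and the random stand-in points are placeholders.

import numpy as np
import open3d as o3d

# Stand-in for the triangulated 3D points produced by the notebook.
points = np.random.rand(1000, 3)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
o3d.io.write_point_cloud("pointcloud.ply", pcd)  # open this file in the Open3D app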