
Obect_Tracking_System_Based_on_Lucas_Kanade

This project contains an implementation of video object tracking with corner detection, k-means key point clustering, and the Iterative Lucas-Kanade (ILK) algorithm with a pyramid hierarchy.

Features

This project aims at tracking salient feature points appearing in the first frame throughout the whole video, with their trajectories plotted. The program runs in an online fashion: the rendered video plays in real time in a new window that opens when the program starts running. Several additional features are implemented to enhance its functionality, listed as follows:

  1. Self-implemented Harris corner detection algorithm for feature identification.
  2. Self-implemented Lucas-Kanade algorithm for optic flow calculation.
  3. Self-defined feature cleaner (see the sketch after this list) including:
  • An adaptive threshold mechanism.
  • A minimal-distance discriminator to reject excessively dense features.
  • K-means clustering to optimize the selection of feature points.
  4. Image warping and the iterative LK algorithm for higher accuracy.
  5. Pyramid Lucas-Kanade implementation for robustness to large movements.
  6. Real-time video capturing and tracking.
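A minimal sketch of the last two feature-cleaner steps is given below, assuming features are provided as an (N, 2) array of (row, col) coordinates. The function names (`reject_dense_features`, `cluster_features`), the distance threshold, and the use of `sklearn.cluster.KMeans` are illustrative assumptions, not the project's exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumption: the project may use its own k-means

def reject_dense_features(points, min_dist=10.0):
    """Greedily keep points that are at least `min_dist` pixels apart (illustrative)."""
    kept = []
    for p in points:
        if all(np.linalg.norm(p - q) >= min_dist for q in kept):
            kept.append(p)
    return np.array(kept)

def cluster_features(points, k=20):
    """Replace a dense cloud of key points with k cluster centres (illustrative)."""
    k = min(k, len(points))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)
    return km.cluster_centers_

# Example: thin out detected corners before tracking
# corners = harris_corners(first_frame)        # hypothetical detector, see Methods
# corners = reject_dense_features(corners)     # drop excessively dense features
# corners = cluster_features(corners, k=20)    # keep one representative per cluster
```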

Methods

Harris Corner Detection
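As a reference for the self-implemented detector, the standard Harris response can be sketched with NumPy/SciPy as below. The window size, Harris constant `k`, and threshold are assumptions and may differ from the project's code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_corners(gray, k=0.04, sigma=1.5, thresh_ratio=0.01):
    """Standard Harris corner response (illustrative, not the project's exact code)."""
    gray = gray.astype(np.float64)
    # Image gradients
    Ix = sobel(gray, axis=1)
    Iy = sobel(gray, axis=0)
    # Structure-tensor entries, smoothed over a Gaussian window
    Ixx = gaussian_filter(Ix * Ix, sigma)
    Iyy = gaussian_filter(Iy * Iy, sigma)
    Ixy = gaussian_filter(Ix * Iy, sigma)
    # Harris response R = det(M) - k * trace(M)^2
    det = Ixx * Iyy - Ixy ** 2
    trace = Ixx + Iyy
    R = det - k * trace ** 2
    # Simple threshold relative to the strongest response
    ys, xs = np.nonzero(R > thresh_ratio * R.max())
    return np.stack([ys, xs], axis=1)  # (N, 2) array of (row, col) corners
```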

Lucas-Kanade Optic Flow
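For a single feature, Lucas-Kanade solves a 2x2 least-squares system built from the spatial and temporal gradients inside a small window. The sketch below illustrates this; the window half-width and border handling are placeholders rather than the project's exact choices.

```python
import numpy as np

def lk_flow_at_point(I_old, I_new, y, x, half_win=7):
    """Single-point Lucas-Kanade: solve (A^T A) v = A^T b over a window (illustrative)."""
    Iy_grad, Ix_grad = np.gradient(I_old.astype(np.float64))
    It = I_new.astype(np.float64) - I_old.astype(np.float64)

    y0, y1 = y - half_win, y + half_win + 1
    x0, x1 = x - half_win, x + half_win + 1
    Ix_w = Ix_grad[y0:y1, x0:x1].ravel()
    Iy_w = Iy_grad[y0:y1, x0:x1].ravel()
    It_w = It[y0:y1, x0:x1].ravel()

    A = np.stack([Ix_w, Iy_w], axis=1)   # (n, 2) spatial gradients
    b = -It_w                            # temporal gradient
    # Least-squares solution; degenerate windows fall back to the minimum-norm flow
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vx, vy) displacement of the feature between the two frames
```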

Iterative Lucas-Kanade
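Iterative LK repeats the single-step solve, warping the new frame back by the accumulated displacement before each iteration so that only the residual motion is estimated. A minimal sketch is shown below; it reuses the hypothetical `lk_flow_at_point` helper from the previous section, and the use of `scipy.ndimage.shift` for warping and the convergence tolerance are assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def iterative_lk(I_old, I_new, y, x, n_iter=5, half_win=7):
    """Iterative LK at one feature: warp, re-estimate, accumulate (illustrative)."""
    d = np.zeros(2)  # accumulated (vx, vy) displacement
    for _ in range(n_iter):
        # Warp the new frame back by the current estimate so that only the
        # residual motion remains (order-1 interpolation for speed).
        warped = nd_shift(I_new.astype(np.float64), shift=(-d[1], -d[0]), order=1)
        dv = lk_flow_at_point(I_old, warped, y, x, half_win=half_win)
        d += dv
        if np.linalg.norm(dv) < 1e-2:  # assumed convergence tolerance
            break
    return d
```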

Pyramid ILK

  1. Build image pyramids for both the old frame and the new frame through down-sampling.
  2. Down-sample the input features.
  3. Starting from the smallest level, use ILK to compute the optic flow at that level.
  4. Up-sample the features and the motion maps from the current level (Equation 7 in the report).
  5. Use this as the initial input of the ILK at the subsequent level.
  6. Repeat steps 3 to 5 until the last level of the pyramid is reached.
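The coarse-to-fine loop above can be sketched as follows. The pyramid is built with `cv2.pyrDown`; the number of levels, the clipping near borders, the per-feature warping with `scipy.ndimage.shift`, and the reuse of the hypothetical `iterative_lk` helper from the previous section are assumptions made for illustration.

```python
import numpy as np
import cv2
from scipy.ndimage import shift as nd_shift

def pyramid_ilk(I_old, I_new, features, n_levels=3, n_iter=5, half_win=7):
    """Coarse-to-fine ILK for one frame pair (illustrative).
    `features` is an (N, 2) array of (x, y) positions in the old frame."""
    # Step 1: build image pyramids by repeated down-sampling
    pyr_old, pyr_new = [I_old], [I_new]
    for _ in range(n_levels - 1):
        pyr_old.append(cv2.pyrDown(pyr_old[-1]))
        pyr_new.append(cv2.pyrDown(pyr_new[-1]))

    flows = np.zeros_like(features, dtype=np.float64)
    # Steps 3-6: start at the smallest level and refine towards full resolution
    for level in range(n_levels - 1, -1, -1):
        scale = 2.0 ** level
        h, w = pyr_old[level].shape[:2]
        for i, (x, y) in enumerate(features):
            # Step 2: down-sample the feature position to this level (clipped to the window)
            xl = int(np.clip(round(x / scale), half_win, w - half_win - 1))
            yl = int(np.clip(round(y / scale), half_win, h - half_win - 1))
            # Warp the new frame by the flow propagated from coarser levels,
            # then estimate only the residual motion with iterative LK (step 5).
            warped = nd_shift(pyr_new[level].astype(np.float64),
                              shift=(-flows[i, 1], -flows[i, 0]), order=1)
            flows[i] += iterative_lk(pyr_old[level], warped, yl, xl,
                                     n_iter=n_iter, half_win=half_win)
        if level > 0:
            flows *= 2.0  # Step 4: up-sample the motion to the next (finer) level
    return features + flows  # new (x, y) positions of the tracked features
```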

Results

Usage

For any version of the algorithm, the input video should be in the same folder as the source code. Please note that videos should be in MP4 or H.264 format for the best performance. Then change the line cap = cv2.VideoCapture('demo7.mp4') to cap = cv2.VideoCapture('your-input-video.mp4') and run the code.
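For reference, the real-time capture and display follows the standard OpenCV loop sketched below; `track_features` is a hypothetical stand-in for the project's per-frame tracking step, and 'your-input-video.mp4' is a placeholder path.

```python
import cv2

cap = cv2.VideoCapture('your-input-video.mp4')  # placeholder: video sits next to the source code
ret, old_frame = cap.read()

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # track_features(old_frame, frame) is a hypothetical stand-in for the
    # Harris + pyramid-ILK tracking step described above; it would return
    # the frame with the feature trajectories drawn on it.
    # frame = track_features(old_frame, frame)
    cv2.imshow('tracking', frame)  # new window showing the rendered video in real time
    old_frame = frame
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```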

For the full report, please visit here.