34759-Perception-Exercises

Exercises for the course Perception for Autonomous Systems, with a final project on detection and tracking for autonomous driving.

This is an implementation of the weekly exercises for the DTU course Perception for Autonomous Systems. The scripts can be used during the exam.

The course covered the following topics:

  • Describe the steps that lead to 3D reconstruction using multiple views (see the stereo sketch after this list).
  • Define commonly used image feature extraction and matching techniques.
  • Discuss characteristics of various ranging sensors and techniques.
  • Apply software tools to process 3D point clouds (see the point-cloud sketch after this list).
  • Combine visual and 3D sensory input with state estimation techniques.
  • Describe the differences between classical and learning-based object/scene classification techniques.
  • Describe the different steps in visual odometry and explain the operation of the related algorithms.
  • Combine the taught material to propose and describe possible implementations of further perception applications.
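As a taste of the first two topics, the sketch below extracts ORB features from a rectified stereo pair, matches them, and triangulates the matches into 3D points with OpenCV. The image paths, intrinsics K, and baseline b are placeholder assumptions (KITTI-like values), not values taken from the exercises.

```python
import cv2
import numpy as np

# Load a rectified stereo pair (placeholder paths).
img_left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img_right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Feature extraction: ORB keypoints with binary descriptors.
orb = cv2.ORB_create(nfeatures=1000)
kp_l, des_l = orb.detectAndCompute(img_left, None)
kp_r, des_r = orb.detectAndCompute(img_right, None)

# Feature matching: brute-force Hamming distance with cross-check.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)

# Triangulation: projection matrices for an assumed calibrated pair
# with the right camera shifted by baseline b along the x-axis.
K = np.array([[718.0, 0.0, 607.0],   # assumed KITTI-like intrinsics
              [0.0, 718.0, 185.0],
              [0.0, 0.0, 1.0]])
b = 0.54                             # assumed baseline in metres
P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_r = K @ np.hstack([np.eye(3), np.array([[-b], [0.0], [0.0]])])

pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches]).T  # 2xN
pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches]).T
pts_4d = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)
pts_3d = (pts_4d[:3] / pts_4d[3]).T  # homogeneous -> Euclidean, Nx3
print(f"Triangulated {len(pts_3d)} 3D points")
```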
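For the point-cloud topic, a minimal Open3D pipeline in the spirit of the exercises could look as follows; the file name and all thresholds are assumed placeholders rather than the exercises' settings.

```python
import open3d as o3d

# Load a point cloud (placeholder path; PCD and PLY both work).
pcd = o3d.io.read_point_cloud("cloud.pcd")

# Downsample onto a voxel grid to speed up later steps.
pcd = pcd.voxel_down_sample(voxel_size=0.05)

# Drop sparse outliers based on distance to their neighbours.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Segment the dominant plane (e.g. the ground) with RANSAC.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.02,
                                         ransac_n=3,
                                         num_iterations=1000)
ground = pcd.select_by_index(inliers)
objects = pcd.select_by_index(inliers, invert=True)
print(f"Plane model {plane_model}, {len(inliers)} ground points")
```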

At the end of the course, a project to detect and track pedestrians, cars, and cyclists in a video sequence was implemented. The project emulates three dynamic driving scenarios in which an automated system must perform high-precision detection, tracking, classification, and prediction of different categories of objects, supported by trained machine learning models. The task is to perform 3D detection and tracking of pedestrians, cyclists, and cars in the images captured by a stereo camera mounted on the vehicle (even when occlusions occur) and to classify the detected objects into their corresponding categories on the provided dataset. The outcome of the project can be seen here.
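To illustrate the tracking side of the project, here is a minimal sketch of a constant-velocity Kalman filter for one object's 3D centroid; during occlusion it simply coasts on the motion model by predicting without updating. The class name, time step, and noise magnitudes are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

class KalmanTracker3D:
    """Constant-velocity Kalman filter over the state [x, y, z, vx, vy, vz]."""

    def __init__(self, first_detection, dt=0.1):
        self.x = np.hstack([first_detection, np.zeros(3)])  # initial state
        self.P = np.eye(6)                                  # state covariance
        self.F = np.eye(6)                                  # constant-velocity model
        self.F[:3, 3:] = np.eye(3) * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # measure position only
        self.Q = np.eye(6) * 0.01                           # process noise (assumed)
        self.R = np.eye(3) * 0.1                            # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R   # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

# A detection of None stands in for an occluded frame: the track keeps
# predicting and resumes updating when the object reappears.
track = KalmanTracker3D(np.array([2.0, 0.5, 10.0]))
for detection in [np.array([2.1, 0.5, 9.8]), None, np.array([2.3, 0.5, 9.4])]:
    position = track.predict()
    if detection is not None:
        track.update(detection)
```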