
Sensor Fusion Self-Driving Car Course

For more details, please check my blog: https://blog.csdn.net/williamhyin/article/details/105159842 .

Welcome to the Sensor Fusion course for self-driving cars.

In this course we will be talking about sensor fusion, which is the process of taking data from multiple sensors and combining it to give us a better understanding of the world around us. We will mostly be focusing on two sensors: lidar and radar. By the end we will be fusing the data from these two sensors to track multiple cars on the road, estimating their positions and speeds.

Lidar sensing gives us high-resolution data by sending out thousands of laser signals. These lasers bounce off objects and return to the sensor, where we can determine how far away objects are by timing how long it takes for the signal to return. We can also tell a little about the object that was hit by measuring the intensity of the returned signal. Each laser ray is in the infrared spectrum and is sent out at many different angles, usually over a 360-degree range. While lidar sensors give us very accurate 3D models of the world around us, they are currently very expensive, upwards of $60,000 for a standard unit.
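To make the time-of-flight idea concrete, here is a toy C++ snippet (not part of this repository) that converts a hypothetical round-trip time into a range; the 200 ns value is made up for illustration.

#include <iostream>

int main()
{
    // Range = (speed of light * round-trip time) / 2, since the pulse travels out and back.
    const double speedOfLight = 299792458.0;  // m/s
    const double roundTripTime = 2.0e-7;      // 200 ns, a hypothetical lidar return
    const double range = speedOfLight * roundTripTime / 2.0;
    std::cout << "Range: " << range << " m" << std::endl;  // roughly 30 m
    return 0;
}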

Radar data is typically very sparse and limited in range, but it can directly tell us how fast an object is moving in a certain direction. This ability makes radar a very practical sensor for tasks like cruise control, where it's important to know how fast the car in front of you is traveling. Radar sensors are also very affordable and are common in newer cars.
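As a rough illustration of how radar measures speed directly, the radial velocity follows from the Doppler shift of the returned signal: v = (Doppler frequency * wavelength) / 2. The toy snippet below (not part of this repository) uses made-up numbers for a 77 GHz automotive radar.

#include <iostream>

int main()
{
    const double speedOfLight = 299792458.0;  // m/s
    const double carrierFrequency = 77.0e9;   // 77 GHz automotive radar (assumed)
    const double dopplerShift = 5.0e3;        // 5 kHz, a hypothetical measurement
    const double wavelength = speedOfLight / carrierFrequency;
    const double radialVelocity = dopplerShift * wavelength / 2.0;  // v = f_d * lambda / 2
    std::cout << "Radial velocity: " << radialVelocity << " m/s" << std::endl;  // about 9.7 m/s
    return 0;
}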

Sensor fusion: by combining lidar's high-resolution imaging with radar's ability to measure the velocity of objects, we can get a better understanding of the surrounding environment than we could using either sensor alone.

Demo

Folder structure

  • README.md

  • media - demo image for readme file

  • ./src/

    • environment.cpp - main function

    • processPointClouds.h&cpp - point-cloud processing functions, including filtering, segmentation, clustering, and bounding-box drawing (see the pipeline sketch after this list)

    • ransac3d.h&cpp - RANSAC-based segmentation of the road plane and obstacles (see the RANSAC sketch after this list)

    • cluster3d.h&cpp, kdtree3d.h - KD-tree-based clustering of obstacles

    • /quiz/ - quiz functions for testing RANSAC and clustering

    • /render/ - rendering functions for point clouds

    • /Sensors/ - real-world point-cloud data
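The overall obstacle-detection flow in processPointClouds can be pictured with PCL's built-in algorithms. The sketch below is only an approximation for orientation, not the repository's actual code (which uses the custom RANSAC and KD-tree implementations listed above); the function name, leaf size, distance threshold, and cluster limits are placeholder assumptions.

#include <vector>
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/search/kdtree.h>
#include <pcl/common/common.h>

using PointT = pcl::PointXYZ;

void detectObstacles(pcl::PointCloud<PointT>::Ptr cloud)
{
    // 1. Downsample the cloud with a voxel grid filter.
    pcl::PointCloud<PointT>::Ptr filtered(new pcl::PointCloud<PointT>);
    pcl::VoxelGrid<PointT> vg;
    vg.setInputCloud(cloud);
    vg.setLeafSize(0.2f, 0.2f, 0.2f);
    vg.filter(*filtered);

    // 2. Segment the road plane with RANSAC.
    pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    pcl::SACSegmentation<PointT> seg;
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setMaxIterations(100);
    seg.setDistanceThreshold(0.2);
    seg.setInputCloud(filtered);
    seg.segment(*inliers, *coefficients);

    // 3. Remove the plane inliers, keeping only obstacle points.
    pcl::PointCloud<PointT>::Ptr obstacles(new pcl::PointCloud<PointT>);
    pcl::ExtractIndices<PointT> extract;
    extract.setInputCloud(filtered);
    extract.setIndices(inliers);
    extract.setNegative(true);
    extract.filter(*obstacles);

    // 4. Cluster the obstacle points with KD-tree based Euclidean clustering.
    pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
    tree->setInputCloud(obstacles);
    std::vector<pcl::PointIndices> clusterIndices;
    pcl::EuclideanClusterExtraction<PointT> ec;
    ec.setClusterTolerance(0.5);
    ec.setMinClusterSize(10);
    ec.setMaxClusterSize(5000);
    ec.setSearchMethod(tree);
    ec.setInputCloud(obstacles);
    ec.extract(clusterIndices);

    // 5. Compute an axis-aligned bounding box for each cluster.
    for (const auto& indices : clusterIndices)
    {
        pcl::PointCloud<PointT>::Ptr cluster(new pcl::PointCloud<PointT>);
        for (int i : indices.indices)
            cluster->points.push_back(obstacles->points[i]);
        PointT minPt, maxPt;
        pcl::getMinMax3D(*cluster, minPt, maxPt);
        // minPt / maxPt are the corners of the box drawn around the obstacle.
    }
}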
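For the custom RANSAC in ransac3d, the core idea is: repeatedly sample three points, fit a plane through them, and keep the plane with the most inliers within a distance tolerance. Here is a minimal self-contained sketch of that idea using a hypothetical Point struct; the repository's version works on PCL point clouds instead.

#include <cmath>
#include <cstdlib>
#include <unordered_set>
#include <vector>

struct Point { float x, y, z; };

std::unordered_set<int> ransacPlane(const std::vector<Point>& points,
                                    int maxIterations, float distanceTol)
{
    std::unordered_set<int> bestInliers;
    for (int it = 0; it < maxIterations; ++it)
    {
        // Pick three distinct points to define a candidate plane.
        std::unordered_set<int> sample;
        while (sample.size() < 3)
            sample.insert(rand() % points.size());
        auto iter = sample.begin();
        const Point& p1 = points[*iter++];
        const Point& p2 = points[*iter++];
        const Point& p3 = points[*iter];

        // Plane normal is the cross product (p2 - p1) x (p3 - p1); plane: Ax + By + Cz + D = 0.
        float A = (p2.y - p1.y) * (p3.z - p1.z) - (p2.z - p1.z) * (p3.y - p1.y);
        float B = (p2.z - p1.z) * (p3.x - p1.x) - (p2.x - p1.x) * (p3.z - p1.z);
        float C = (p2.x - p1.x) * (p3.y - p1.y) - (p2.y - p1.y) * (p3.x - p1.x);
        float D = -(A * p1.x + B * p1.y + C * p1.z);
        float norm = std::sqrt(A * A + B * B + C * C);
        if (norm == 0.0f) continue;  // degenerate sample (collinear points)

        // Count the points within the distance tolerance of the candidate plane.
        std::unordered_set<int> inliers;
        for (std::size_t i = 0; i < points.size(); ++i)
        {
            float dist = std::fabs(A * points[i].x + B * points[i].y + C * points[i].z + D) / norm;
            if (dist <= distanceTol)
                inliers.insert(static_cast<int>(i));
        }

        if (inliers.size() > bestInliers.size())
            bestInliers = inliers;
    }
    return bestInliers;  // indices of the dominant plane (the road); the remaining points are obstacles
}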

Installation

Ubuntu

$> sudo apt install libpcl-dev
$> cd ~
$> git clone https://github.com/udacity/SFND_Lidar_Obstacle_Detection.git
$> cd SFND_Lidar_Obstacle_Detection
$> mkdir build && cd build
$> cmake ..
$> make
$> ./environment

Windows

http://www.pointclouds.org/downloads/windows.html

MAC

Install via Homebrew

  1. install homebrew
  2. update homebrew
    $> brew update
  3. add homebrew science tap
    $> brew tap brewsci/science
  4. view pcl install options
    $> brew options pcl
  5. install PCL
    $> brew install pcl

Prebuilt Binaries via Universal Installer

http://www.pointclouds.org/downloads/macosx.html
NOTE: very old version

Build from Source

PCL Source Github

PCL Mac Compilation Docs