SLAM-2D-LIDAR-SCAN

FastSLAM with 2D laser scan matching, implemented in Python



Demo

Click below to play the demo video:

[Demo video]

Usage

Run occupancy grid mapping on ground truth data:

python Utils/OccupancyGrid.py 

Run the scan matching algorithm alone on raw data:

python Utils/ScanMatcher_OGBased.py 

Run the FastSLAM algorithm on raw data:

python Algorithm/FastSlam.py 

Dataset

The dataset was collected by Dirk Hähnel [1]. The robot platform is equipped with a 180° FOV 2D lidar and wheel odometry. There are 910 readings in total; each reading contains the robot's x, y and θ (orientation) from odometry plus 180 range-bearing measurements spanning from -90° to +90°. Both ground truth and raw data are provided with matching timestamps for each data point. Fig. 1 below plots the ground truth and the raw data respectively. The big red dot is where the robot started, and each little dot marks a location where a data point was collected. The color of the dots changes from red to purple as the robot moves along its trajectory. The robot circled around for four laps and ended where the big purple dot is.


Fig.1 Ground Truth (Left) And Raw Odometry Data (Right)
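
To make the data format concrete, the following is a minimal sketch (not one of the repository's scripts) of projecting a single reading into world coordinates; the max_range cutoff used to drop "no return" beams is an assumed placeholder.

```python
import numpy as np

def scan_to_points(pose, ranges, max_range=80.0):
    """Project one 180-beam scan into world-frame (x, y) points.

    pose:   (x, y, theta) from odometry, theta in radians.
    ranges: 180 range readings spanning -90 deg to +90 deg
            relative to the robot heading.
    max_range is an assumed cutoff for 'no return' readings.
    """
    x, y, theta = pose
    bearings = np.deg2rad(np.linspace(-90.0, 90.0, len(ranges)))
    r = np.asarray(ranges)
    valid = r < max_range                      # drop misses
    angles = theta + bearings[valid]
    px = x + r[valid] * np.cos(angles)
    py = y + r[valid] * np.sin(angles)
    return np.column_stack((px, py))
```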

Problem Statement

Each sensor carries a certain amount of uncertainty. For the 2D lidar, each reading contains both an angular and a distance measurement: if we receive a point measurement at 5° and 3.2 m, the actual position could be at 5.1° and 3.3 m. Robot motion carries uncertainty as well, affecting both orientation and distance traveled: if the odometry registers that the robot moved 1 m straight forward, it might have moved only 0.95 m forward and 0.05 m to the left. This error is cumulative; it grows as the robot moves.
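
To make the motion uncertainty concrete, here is a minimal sketch of sampling from a Gaussian odometry noise model; the noise magnitudes are illustrative assumptions, not values calibrated for this dataset.

```python
import numpy as np

def sample_motion(pose, u, sigma=(0.05, 0.05, 0.02)):
    """Sample a successor pose from a Gaussian odometry noise model.

    pose:  current (x, y, theta), theta in radians.
    u:     commanded motion (dx, dy, dtheta) in the robot frame.
    sigma: assumed noise standard deviations -- illustrative values,
           not calibrated for this dataset.
    """
    x, y, theta = pose
    dx, dy, dtheta = np.asarray(u) + np.random.normal(0.0, sigma)
    # Rotate the noisy body-frame translation into the world frame.
    x += dx * np.cos(theta) - dy * np.sin(theta)
    y += dx * np.sin(theta) + dy * np.cos(theta)
    return np.array([x, y, theta + dtheta])
```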

Based on the above reasoning, we can model this problem as a special type of Partially Observable Markov Decision Process (POMDP). At time t, let $x_t$ be the actual pose of the robot, $z_t$ its sensor reading, and $u_t$ the motion command, and let $m$ denote the actual world map. The SLAM problem can then be represented as the POMDP shown in Fig. 2.


Fig.2 POMDP Representation of SLAM

With $u_{1:t}$ and $z_{1:t}$ observed at each time step, our goal is to derive the probability distribution $p(x_{0:t}, m \mid z_{1:t}, u_{1:t})$, which can be decomposed by Rao-Blackwellized factorization into

$$p(x_{0:t}, m \mid z_{1:t}, u_{1:t}) = p(x_{0:t} \mid z_{1:t}, u_{1:t})\, p(m \mid x_{0:t}, z_{1:t}).$$

The first term is the robot's trajectory, and the second term is simply the mapping problem given a known robot trajectory. The key to SLAM is solving the first term.

Occupancy Grid

The map of the environment can be represented as an occupancy grid. Each cell represents a 2D location (x, y) and holds a tuple (a + 1, b + 1) that keeps track of the laser scanning results, where a is the number of times the cell has been detected as occupied and b the number of times it has been detected as empty. The tuple represents a Dirichlet distribution with (1, 1) as its initial value, meaning the cell is equally likely to be empty or occupied when we have no prior information. As the robot moves around, every time a cell falls within the laser scanning radius its tuple (a, b) is updated accordingly. As illustrated in Fig. 3 [3], each ray covers a fan-shaped area between adjacent rays, and every cell within the scan area is assigned to its closest ray. If the cell-to-robot distance is less than (detected range − α/2), the cell is regarded as empty; if the distance is within (detected range ± α/2), it is regarded as occupied. Otherwise, its tuple is not updated.


Fig. 3 Occupancy Grid Ray Tracing Model

The occupancy grid is updated every time new data is collected. Each cell of the final map has value (a + 1)/(a + b + 2), and we set 0.5 as the threshold: a cell with value above 0.5 is regarded as occupied, and as empty otherwise.
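
A minimal sketch of this counting scheme is shown below; the class and method names are hypothetical and do not mirror the interface of Utils/OccupancyGrid.py.

```python
import numpy as np

class CountingGrid:
    """Occupancy grid where each cell stores (a, b) hit/miss counts;
    together with the (1, 1) prior these define the Dirichlet posterior
    described above. A hypothetical sketch, not the interface of
    Utils/OccupancyGrid.py."""

    def __init__(self, height, width):
        self.hits = np.zeros((height, width))    # a: detected occupied
        self.misses = np.zeros((height, width))  # b: detected empty

    def update_cell(self, i, j, dist, detected_range, alpha):
        """dist is the cell-to-robot distance along the closest ray;
        alpha is the thickness of the 'occupied' band."""
        if dist < detected_range - alpha / 2:
            self.misses[i, j] += 1               # ray passed through
        elif abs(dist - detected_range) <= alpha / 2:
            self.hits[i, j] += 1                 # ray terminated here
        # otherwise: cell lies beyond the return, no update

    def probability(self):
        # Posterior mean (a + 1) / (a + b + 2); 0.5 with no information.
        return (self.hits + 1.0) / (self.hits + self.misses + 2.0)

    def binary_map(self):
        return self.probability() > 0.5          # True = occupied
```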

Scan Matching

As Fig. 1 shows, the raw odometry data drifts and cannot be used directly. However, the laser scan usually gives a much more accurate reading, with accuracy within 5 cm, so it is more reliable to obtain the robot's pose from the laser readings.

We applied the multi-resolution strategy of [2]: first downsample the search space to a 10× coarser grid and find the best match there, which speeds up the search by roughly 100× (10× per dimension), and then run a refined search over a much smaller window around the optimum found at the coarse scale.

Fig. 4 illustrates the idea of the multi-resolution search. The left image shows the optimal search result on the coarse grid, where the yellow cells are the scan results of the existing map and the red dots are the new laser reading. The coarse search locates the scan as well as it can at that resolution; the more refined search shown on the right then fine-tunes the final location to precisely align the two readings.


Fig.4 Scan Matching of Coarse Grid (Left) and Fine Grid (Right)
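
The following sketch illustrates the coarse-to-fine idea for the translation search only; the function names are assumptions, the coarse map is assumed to be a 10× downsampled version of the occupancy map, and rotation search is omitted for brevity.

```python
import numpy as np

def match_score(prob_map, points, resolution):
    """Sum the map occupancy values at the cells hit by the scan points."""
    idx = np.floor(points / resolution).astype(int)
    h, w = prob_map.shape
    ok = (idx[:, 0] >= 0) & (idx[:, 0] < w) & (idx[:, 1] >= 0) & (idx[:, 1] < h)
    return prob_map[idx[ok, 1], idx[ok, 0]].sum()

def coarse_to_fine_match(prob_map, coarse_map, points,
                         search=2.0, coarse_res=0.2, fine_res=0.02):
    """Translation-only coarse-to-fine search for the best alignment.

    coarse_map is assumed to be prob_map downsampled 10x; rotation
    search is omitted to keep the sketch short.
    """
    # Pass 1: full search window on the 10x coarser grid (~100x faster).
    best, best_dxy = -np.inf, (0.0, 0.0)
    offsets = np.arange(-search, search + coarse_res, coarse_res)
    for dx in offsets:
        for dy in offsets:
            s = match_score(coarse_map, points + (dx, dy), coarse_res)
            if s > best:
                best, best_dxy = s, (dx, dy)
    # Pass 2: refine around the coarse optimum at full resolution.
    cx, cy = best_dxy
    best = -np.inf
    fine = np.arange(-coarse_res, coarse_res + fine_res, fine_res)
    for dx in fine:
        for dy in fine:
            s = match_score(prob_map, points + (cx + dx, cy + dy), fine_res)
            if s > best:
                best, best_dxy = s, (cx + dx, cy + dy)
    return best_dxy
```

As [2] proposes, the coarse map should be built with a max filter over the 10×10 blocks, so the coarse pass never underestimates the best fine-resolution score and the optimum cannot be pruned away.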

FastSLAM [4]

We approximate the robot's pose distribution by particles. In other words, at each time step the robot's pose is not a single value but a belief, and that belief is updated at each time step by a particle filter. As shown above, the SLAM problem can be decomposed as

$$p(x_{0:t}, m \mid z_{1:t}, u_{1:t}) = p(x_{0:t} \mid z_{1:t}, u_{1:t})\, p(m \mid x_{0:t}, z_{1:t}) \tag{1}$$
where the first term is localization, and once we have the trajectory from localization, the second term is a simple mapping problem. The key of SLAM is therefore to solve for the robot's trajectory $x_{0:t}$ given the observations $z_{1:t}$ and motion commands $u_{1:t}$. Applying Bayes' rule, and writing $m_{t-1}$ for the map built from the first $t-1$ scans, the first term can be broken down recursively as

$$p(x_{0:t} \mid z_{1:t}, u_{1:t}) \propto p(z_t \mid x_t, m_{t-1})\, p(x_t \mid x_{t-1}, u_t)\, p(x_{0:t-1} \mid z_{1:t-1}, u_{1:t-1}) \tag{2}$$
It is hard to sample directly from (2); instead we use importance sampling. Our proposal distribution is

$$\pi(x_t \mid x_{t-1}, z_t, u_t) = \frac{p(z_t \mid x_t, m_{t-1})\, p(x_t \mid x_{t-1}, u_t)}{\int p(z_t \mid x', m_{t-1})\, p(x' \mid x_{t-1}, u_t)\, dx'} \tag{3}$$
which is a combination of both scan matching (the observation likelihood) and the motion model, and the denominator is just a normalizer. As we use the scan matching method of [2], the integral in the denominator is simply the sum of the match scores over the entire search area. The weight of the importance sample is therefore

$$w_t = w_{t-1} \int p(z_t \mid x', m_{t-1})\, p(x' \mid x_{t-1}, u_t)\, dx' \tag{4}$$
where $w_{t-1}$ is the particle's weight at the previous time step. Given the proposal distribution and the corresponding weights, we can perform particle filtering to update the robot's pose belief with each new motion command and lidar observation. Fig. 5 shows the result we obtained using 15 particles. The algorithm successfully produces a globally consistent map and gives a correct loop-closure result when the robot revisits a previously visited area.


Fig.5 SLAM Result Using FastSLAM
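
For concreteness, the sketch below shows what one filter update following equations (3) and (4) might look like. It reuses sample_motion from the earlier sketch; scan_match_scores and update_grid are hypothetical stand-ins for the scan-matching search over the window around the predicted pose and the occupancy-grid update, not the repository's actual API.

```python
import numpy as np

def fastslam_step(particles, weights, u, z):
    """One particle-filter update following equations (3) and (4).

    particles: list of (pose, grid) hypotheses with per-particle maps.
    sample_motion, scan_match_scores and update_grid are hypothetical
    helpers standing in for the motion model, the scan-matching search,
    and the occupancy-grid update.
    """
    new_particles, new_weights = [], []
    for (pose, grid), w in zip(particles, weights):
        pred = sample_motion(pose, u)                     # motion model
        poses, scores = scan_match_scores(grid, pred, z)  # search window
        eta = scores.sum()                                # proposal normalizer
        # Sample the new pose from the discretized proposal (3).
        k = np.random.choice(len(poses), p=scores / eta)
        new_pose = poses[k]
        new_particles.append((new_pose, update_grid(grid, new_pose, z)))
        new_weights.append(w * eta)                       # weight update (4)
    new_weights = np.asarray(new_weights)
    new_weights /= new_weights.sum()
    # Resample when the effective sample size drops below half.
    n = len(new_particles)
    if 1.0 / np.sum(new_weights ** 2) < n / 2:
        idx = np.random.choice(n, n, p=new_weights)
        new_particles = [new_particles[i] for i in idx]
        new_weights = np.full(n, 1.0 / n)
    return new_particles, new_weights
```

Resampling only when the effective sample size drops guards against particle depletion, which matters here because each particle carries its own full map.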

References

[1] http://ais.informatik.uni-freiburg.de/slamevaluation/datasets.php
[2] E. B. Olson, "Real-Time Correlative Scan Matching," 2009 IEEE International Conference on Robotics and Automation.
[3] https://www.coursera.org/lecture/motion-planning-self-driving-cars/lesson-2-populating-occupancy-grids-from-lidar-scan-data-part-2-VcH67
[4] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit, "FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem," Proc. AAAI National Conference on Artificial Intelligence, 2002.