*Read the complete article on SLAM* (still a work in progress)

SLAM (Simultaneous Localization and Mapping) is a term that groups all algorithms aiming at map creation and localization with noisy sensors and effectors (in real life, moving and sensing *are* very noisy for a robot). All SLAM mapping algorithms follow the same pattern:

1. move to a point of interest, to further explore the environment
2. use sensors (either during the move, or once arrived at the destination) to capture features of the environment
3. extract features (called *landmarks*) from the sensor information
4. project onto the map the move that has (theoretically) been made
5. match the landmarks from previous runs against the landmarks extracted in step 3; this gives hints about the errors made during the move or during the sensor scan
6. update the position and the map (possibly adding the new landmarks found in step 3), along with the probabilities associated with the position and the landmarks
7. repeat all the steps
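The loop above can be illustrated on a toy 1-D problem. This is only a sketch, not HBot's code: the robot moves along a line with noisy odometry, and a single landmark at a known position lets it correct its position estimate after each move (the blend factor 0.5 is an arbitrary stand-in for a proper uncertainty-weighted update).

```python
import random

random.seed(0)

true_pos = 0.0      # where the robot actually is
est_pos = 0.0       # where the robot believes it is
landmark = 10.0     # a landmark stored in the "map"

for _ in range(5):
    # steps 1-2: move and sense (both noisy)
    command = 2.0
    true_pos += command + random.gauss(0, 0.2)            # real, noisy move
    measured_range = landmark - true_pos + random.gauss(0, 0.1)

    # step 4: project the commanded move on the map
    est_pos += command

    # steps 5-6: compare predicted and measured landmark range,
    # and blend the difference back into the position estimate
    predicted_range = landmark - est_pos
    est_pos += 0.5 * (predicted_range - measured_range)
```

After a few iterations the estimate tracks the true position despite the odometry noise, which is the whole point of the matching/update steps.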

In simple words, SLAM assumes that the current position and map are probably good enough to correct new moves, given sensor information. The map is built as landmarks are added to it, and the position is kept up to date throughout the process. For the “localization only” problem, the same algorithm can be used (without the map update). This latter problem is known as “kidnapping”, i.e. the robot is taken to an unknown location of a known map, and must find where it is.

SLAM algorithms are mostly based on EKF (extended Kalman filters) or PF (particle filters). HBot's SLAM uses a particle filter. You can see on the right side an example of mapping from a particle-filter SLAM.
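The core trick of a particle filter is resampling: particles (position hypotheses) are weighted by how well they explain the measurements, then redrawn so that likely hypotheses survive and unlikely ones die out. Below is a minimal systematic-resampling sketch, for illustration only; it is not HBot's implementation.

```python
import random

def resample(particles, weights):
    """Systematic resampling: draw len(particles) new particles with
    probability proportional to their weights."""
    n = len(particles)
    step = sum(weights) / n
    start = random.uniform(0, step)
    out, cum, i = [], weights[0], 0
    for k in range(n):
        target = start + k * step          # evenly spaced sampling points
        while cum < target:                # walk the cumulative weights
            i += 1
            cum += weights[i]
        out.append(particles[i])
    return out

particles = [0.0, 1.0, 2.0, 3.0]           # four position hypotheses
weights = [0.05, 0.05, 0.8, 0.1]           # 2.0 best explains the data
survivors = resample(particles, weights)   # 2.0 is duplicated, others fade
```

Systematic resampling is popular because it is O(n) and has low variance compared to drawing each particle independently.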

A bitmap can be vectorized to simplify its use. Many algorithms exist; they will not be developed here. The main drawback of vectorization is the potentially high complexity of the resulting polygons.

The Douglas-Peucker algorithm simplifies a polygon by removing the vertices that lie close enough to the global shape of the polygon. “Close enough” is defined by a parameter (commonly named epsilon). The algorithm is defined recursively, yet its implementation in hbot is not recursive, for performance and stack-size reasons. There are several versions of the algorithm; the best-known has a complexity of O(n²), where n is the number of input vertices.
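For reference, here is a compact sketch of the textbook recursive O(n²) version (hbot's implementation is iterative, so this is only an illustration of the principle):

```python
def _dist_to_line(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # |cross product| / base length = height of the triangle (a, b, p)
    return abs(dx * (ay - py) - dy * (ax - px)) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, epsilon):
    """Drop every vertex closer than epsilon to the simplified shape."""
    if len(points) < 3:
        return points
    # find the vertex farthest from the chord joining the endpoints
    dists = [_dist_to_line(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= epsilon:
        return [points[0], points[-1]]     # everything is close enough
    left = douglas_peucker(points[:i + 1], epsilon)
    right = douglas_peucker(points[i:], epsilon)
    return left[:-1] + right               # drop the duplicated pivot

spike = [(0, 0), (1, 0), (2, 2), (3, 0), (4, 0)]
simplified = douglas_peucker(spike, 1.0)
# keeps only the endpoints and the spike: [(0, 0), (2, 2), (4, 0)]
```

Each recursion level keeps the most significant vertex and discards those within epsilon of the current chord, which is why a larger epsilon yields a coarser polygon.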

References:

- a description of the Douglas-Peucker algorithm
- a paper on a faster version in O(n*log(n)) (and the related implementation, which inspired hbot's version)
- a paper on a non-self-intersecting version of the DP algorithm (PDF). This matters because some of the operations made on hbot's simplified maps do not tolerate self-intersecting polygons (e.g. triangulation).

A triangulation (in this context) is the process of splitting an area into a set of triangles. Constraints are added in the form of a polygon whose edges must be part of the triangle edges. Finally, the Delaunay criterion maximizes the minimum angle of the triangles, avoiding “almost-flat” triangles. Given a polygon, the result is a set of triangles inside this polygon, the triangles being as “open” as possible. This is very useful for splitting a map, for example, and using the triangle partition as a tree describing the possible paths over the map. Such a structure is easier to handle than a bitmap for some pathfinding algorithms (e.g. A*).
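The quality criterion Delaunay optimizes can be made concrete with a small helper (illustrative only): a triangulation is preferred when the smallest interior angle of its triangles is large, which rules out “almost-flat” slivers.

```python
import math

def min_angle(a, b, c):
    """Smallest interior angle of triangle (a, b, c), in degrees."""
    def angle_at(p, q, r):                 # angle at vertex q
        v1 = (p[0] - q[0], p[1] - q[1])
        v2 = (r[0] - q[0], r[1] - q[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
    return min(angle_at(c, a, b), angle_at(a, b, c), angle_at(b, c, a))

fat = min_angle((0, 0), (4, 0), (2, 3))       # well-opened triangle
sliver = min_angle((0, 0), (4, 0), (2, 0.2))  # almost-flat triangle
# fat is around 56 degrees, sliver under 6: Delaunay favours the first shape
```

Among all triangulations of the same point set, the Delaunay one maximizes this minimum angle, which is exactly why it produces the “open” triangles mentioned above.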

References:

- a description of Delaunay triangulation

**Pathfinding in triangulated maps**

The triangulation of a vectorized map can be interpreted as an undirected graph: triangles are the nodes of the graph, and contact between two triangles (a shared edge) is an edge between the corresponding nodes. This way, a graph representing the clear part of the map is easily constructed. A shortest-path algorithm can then be run on the graph, and the list of triangle centers is the path the robot must follow to reach its goal without hitting obstacles.
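A sketch of this idea, with assumed data rather than HBot's map format: triangles sharing an edge become connected nodes, and a breadth-first search finds a triangle-to-triangle path (a real planner would typically use A* with the distances between triangle centers as costs).

```python
from collections import deque

def shared_edge(t1, t2):
    """Two triangles are adjacent when they share two vertices (an edge)."""
    return len(set(t1) & set(t2)) == 2

def build_graph(triangles):
    graph = {i: [] for i in range(len(triangles))}
    for i in range(len(triangles)):
        for j in range(i + 1, len(triangles)):
            if shared_edge(triangles[i], triangles[j]):
                graph[i].append(j)
                graph[j].append(i)
    return graph

def triangle_path(graph, start, goal):
    """Unweighted shortest path (BFS) through the adjacency graph."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:                   # walk back through predecessors
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None                            # goal unreachable

# a strip of four triangles sharing edges 0-1, 1-2 and 2-3
tris = [((0, 0), (1, 0), (0, 1)), ((1, 0), (0, 1), (1, 1)),
        ((1, 0), (1, 1), (2, 0)), ((1, 1), (2, 0), (2, 1))]
path = triangle_path(build_graph(tris), 0, 3)   # [0, 1, 2, 3]
```

Following the centers of the triangles along the returned node list then gives a route that stays inside the clear part of the map.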

PID stands for proportional-integral-derivative, and a PID controller is a way to drive analog devices (e.g. a motor) given feedback (e.g. an odometer) in a closed-loop fashion. It is used when a command sent to a device is not guaranteed to reach its goal: feedback is then needed to measure the error from the intended result and correct the command. In the case of PID, the correction is a weighted combination of the error itself, the accumulated error, and the error's rate of change.
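The description above maps directly onto a few lines of code. This is a minimal sketch with arbitrary example gains, not HBot's controller; the toy first-order “motor” below only serves to close the loop.

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt                  # accumulated error (I)
        derivative = (error - self.prev_error) / dt  # error evolution (D)
        self.prev_error = error
        # weighted combination of the three terms
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# closed loop on a toy first-order plant: the speed lags behind the command
pid = PID(kp=0.8, ki=0.2, kd=0.05)
speed, target, dt = 0.0, 1.0, 0.1
for _ in range(500):
    command = pid.update(target, speed, dt)
    speed += (command - speed) * dt      # toy motor model, not a real device
# speed converges to the 1.0 target despite the plant never obeying exactly
```

The integral term is what removes the steady-state error here: with only the proportional term, the speed would settle below the target.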

References:

- Wikipedia: PID_controller
- *PID Without a PhD*, an excellent introduction to the principle of PID control and to the realization of a software version.