AMCL reverse engineering

These are my notes from reverse engineering AMCL. I did this to get a better understanding of state-of-the-art localisation while implementing my own Monte Carlo localisation as a coursework project during my master's degree in robotics.

Hopefully it will be useful to someone else, but remember that you learn more by doing things yourself than by just reading about them.

Note that this is an analysis of what AMCL does differently from the book [@thrunProbabilisticRobotics2005], and thus it assumes the reader understands the basic concepts.

Odometry

There are two interesting extras here:

  1. If \(\delta_{trans}\) is too small (<0.01 m), set \(\delta_{rot1}\) to 0. This avoids some numerical instability when rotating in place.
  2. It has a rather neat check for driving backwards, taking the minimum of \(|\delta_{rotN}|\) and \(|\pi-\delta_{rotN}|\) as the basis for the variances. This prevents severe overestimation of the noise when reversing.

Both of these features have been implemented.
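
For reference, here is a minimal sketch of what these two tweaks could look like, in the same Python-esque style as the pseudo code further down (the names delta_trans, delta_rot1 and angle_diff are mine, not AMCL's):

import math

def angle_diff(a, b):
    # Smallest signed difference between two angles, in (-pi, pi]
    return math.atan2(math.sin(a - b), math.cos(a - b))

def rotation_noise_basis(delta_trans, delta_rot1, delta_rot2):
    # Tweak 1: when barely translating, treat the motion as pure rotation
    # to avoid numerical instability when turning in place.
    if delta_trans < 0.01:
        delta_rot1 = 0.0
    # Tweak 2: when reversing, use whichever interpretation of the rotation
    # (forwards or backwards facing) is smaller as the basis for the
    # variances, preventing severe overestimation of the noise.
    rot1_noise = min(abs(angle_diff(delta_rot1, 0.0)),
                     abs(angle_diff(delta_rot1, math.pi)))
    rot2_noise = min(abs(angle_diff(delta_rot2, 0.0)),
                     abs(angle_diff(delta_rot2, math.pi)))
    return delta_rot1, rot1_noise, rot2_noise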

Map (from likelihood model perspective)

For the map, AMCL stores two attributes per cell:

  • Occupancy state (occupied, free, unknown).
  • Distance to the nearest occupied cell.

This means AMCL does not use the full probability information in the original evidence grid.

The distance to the nearest occupied cell is computed via something similar to Dijkstra's algorithm, flood filling outwards from the occupied cells up to a limited maximum distance. The distance computed appears to be the L2 norm, however, rather than a path distance.

Side note: This is computed in a really smart way, pre-computing a lookup table for one quadrant of the grid (up to the limited max distance). This can be found in CachedDistanceMap in map_cspace.cpp.

Side side note: It could be optimised further though, since it currently uses double-pointer indirection (double**) rather than linearising the array and indexing with a multiplication followed by an addition. This is a micro-optimisation, however.
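
To illustrate the idea (my own simplified reconstruction, not the actual map_cspace.cpp code): since the distance only depends on the cell offset, pre-computing one quadrant of offsets is enough, and symmetry covers the other three.

import math

def cached_distance_map(scale, max_occ_dist):
    # One quadrant of pre-computed Euclidean distances, in world units,
    # indexed by cell offset (di, dj) from an occupied cell.
    cell_radius = int(max_occ_dist / scale)
    return [[math.hypot(di, dj) * scale
             for dj in range(cell_radius + 2)]
            for di in range(cell_radius + 2)]

The flood fill can then look up (abs(di), abs(dj)) in this table instead of computing a square root per visited cell.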

Sensor model: Likelihood field

AMCL has two likelihood field models as well as a beam model, but the focus of this analysis is exclusively on the likelihood model.

AMCL uses a limited number of beams (30 by default), picked equidistantly from the scan in the sensor code.

Because of the difference in the map representation, it is non-trivial to work out whether I'm doing the same thing as AMCL when computing the probability, but here is what AMCL does in Python-esque pseudo code (“de-optimised”):

for particle in cloud:
    pose = particle.pose + laser_offset
    p = 0
    for beam in subset(scan, max_beams):
        if not sanity_check(beam):
            # check beam for max range, NaN etc
            continue
        endpoint = trigonometry(beam, pose)
        map_coords = math(endpoint)
        if map_coords is outside_map:
            z = max_occ_dist  # Max distance we precomputed distances for
        else:
            z = map_data[map_coords].occ_dist
        pz = z_hit * exp(- (z ** 2) / (2 * sigma_hit ** 2))
        pz += z_rand / range_max
        # Here is a helpful assert to ensure pz is in [0, 1]
        p += pz ** 3
    particle.weight = p

Throughout this, the total weight is tracked as well, and everything is normalised to [0, 1] at the end by dividing by the total weight (assuming it is non-zero).

From this, it can be seen that AMCL does not use the algorithm from the book, but rather adds probabilities, though in a non-linear way (cubing them). The cubing is documented as “ad-hoc” but said to “work well”. In the range of interest, [0, 1], cubing reduces the values, but the reduction is non-linear: small values are reduced proportionally far more (for example, \(0.9^3 \approx 0.73\) while \(0.3^3 = 0.027\)).

In addition to this algorithm, there is an extended likelihood field algorithm with something called beam skipping (not enabled by default), which according to the documentation is supposed to help with unexpected obstacles. Even when enabled, this part of the algorithm only becomes active after the particle set has converged.

In a strange oversight, AMCL does not precompute the probability, only the distance to the nearest obstacle. It would have been quite possible to precompute the probability over that map. Maybe this is because AMCL supports dynamic parameter changes, which would break if sigma_hit changed without a new map being received?
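
A sketch of what such a precomputation could look like (hypothetical; this is not in AMCL):

import math

def precompute_hit_probabilities(occ_dists, z_hit, sigma_hit, z_rand, range_max):
    # Turn each cell's distance-to-obstacle into the sensor model
    # probability, making the inner per-beam loop a pure table lookup.
    denom = 2 * sigma_hit ** 2
    return [z_hit * math.exp(-(d ** 2) / denom) + z_rand / range_max
            for d in occ_dists]

This would of course need to be recomputed whenever sigma_hit (or the other parameters) change through dynamic reconfigure, which fits the speculation above.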

Particle filter

As for the particle filter itself, its design is informed by the choice of resampling procedure (KLD).

The filter has a min and max number of particles, as well as various parameters related to KLD.

Convergence

There is a concept of convergence of the filter. It is computed by checking whether all particles are within a certain threshold of the mean position. Only x and y are considered for this; \(\theta\) is not used. This seems to be used exclusively for the likelihood field model with beam skipping.
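
In pseudo code, the check amounts to something like this (the threshold name is mine):

def converged(particles, dist_threshold):
    # Mean over x/y only; theta is deliberately ignored.
    mean_x = sum(p.pose.x for p in particles) / len(particles)
    mean_y = sum(p.pose.y for p in particles) / len(particles)
    return all(abs(p.pose.x - mean_x) <= dist_threshold and
               abs(p.pose.y - mean_y) <= dist_threshold
               for p in particles)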

Action & sensor update

The action update is straightforward, trivial even: it just processes all the particles with the odometry model in use.

The sensor update, however, has a couple of extra steps after the sensor model has executed. Simplified pseudo-Python:

if total_weight > 0:
    normalise_weights()
    w_slow = low_pass(w_slow, avg_weight, alpha_slow)
    w_fast = low_pass(w_fast, avg_weight, alpha_fast)
else:
    set_all_weights(1/particle_count)

As a side note, in this code there is both a very stupid thing and a very smart thing:

  • It recomputes the total to compute avg_weight, even though it already has the total (stupid).
  • The low pass filter is implemented as x = x + alpha * (x' - x) instead of the more common expression x = (1 - alpha) * x + alpha * x'. This is one arithmetic operation fewer (smart).
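
The two forms are algebraically identical, since x + alpha * (x' - x) expands to (1 - alpha) * x + alpha * x'. This is also easy to check numerically:

alpha = 0.1
x, y = 0.5, 0.8  # current filter state and new input
form_a = x + alpha * (y - x)          # AMCL's form: one multiplication
form_b = (1 - alpha) * x + alpha * y  # common form: two multiplications
assert abs(form_a - form_b) < 1e-12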

Resampling

AMCL uses adaptive KLD resampling. According to comments in the code, this is incompatible with low variance sampling, so that is not used.

By default, the adaptive recovery (injection of random poses) is not enabled (both recovery alpha parameters are set to 0), but enabling it does seem to improve localisation.

Pseudo code for what it does:

def resample(self):
    # Switch between two particle sets
    set_a = self.current_set
    set_b = self.next_set
    # Cumulative probability table (huh?)
    c = array(double, set_a.sample_count + 1)
    c[0] = 0
    c[1:] = running_total([a.weight for a in set_a])

    # KD-tree for adaptive sampling (huh?)
    set_b.kdtree.clear()

    total = 0
    set_b.sample_count = 0
    w_diff = 1 - self.w_fast / self.w_slow
    if w_diff < 0:
        w_diff = 0

    while set_b.sample_count < self.max_samples:
        sample_b = set_b.samples[set_b.sample_count]
        set_b.sample_count += 1
        if uniform_rand(0, 1) < w_diff:
            # See amcl_node.cpp uniformPoseGenerator, basically generates random
            # poses inside known free spaces
            sample_b.pose = generate_random_pose()
        else:
            # "Naive discrete event sampler"
            r = uniform_rand(0, 1)
            for i in range(0, set_a.sample_count):
                if c[i] <= r < c[i+1]:
                    break
            sample_b.pose = set_a.samples[i].pose
        sample_b.weight = 1.0  # Surely this will be overwritten soon anyway?
        total += sample_b.weight
        set_b.kdtree.insert(sample_b.pose, sample_b.weight)
        # KLD sample check:
        if set_b.sample_count > self.resample_limit(set_b.kdtree.leaf_count):
            break
    # Reset low pass filters, to not make it too random
    if w_diff > 0:
        self.w_slow = self.w_fast = 0
    normalise_weights()
    # Compute cluster statistics
    self.cluster_statistics(set_b)
    swap_particle_sets()
    # Only used for beamskip model
    self.update_convergence()

The KLD criterion seems to be slightly different from what the book has; I need to look further into the KD-tree implementation used here.

The function responsible for the KLD criterion has the following pseudo-code:

def resample_limit(self, k):
    # Apparently taken directly from "Fox et al."
    # Appears to be code on page 264, table 8.4, lines 15-16
    if k <= 1:
        return self.max_samples
    b = 2/(9*(k-1))
    c = sqrt(b) * self.pop_z
    x = 1 - b + c
    n = ceil((k-1)/(2*self.pop_err) * x ** 3)

    # Limit to [min, max]
    if n < self.min_samples:
        return self.min_samples
    if n > self.max_samples:
        return self.max_samples
    return n
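
Written out as a formula (with pop_err as \(\epsilon\) and pop_z as the standard normal quantile \(z_{1-\delta}\)), this is:

\[
n = \left\lceil \frac{k-1}{2\epsilon} \left( 1 - \frac{2}{9(k-1)} + \sqrt{\frac{2}{9(k-1)}}\, z_{1-\delta} \right)^{3} \right\rceil
\]

which matches the KLD-sampling bound in the book, based on the Wilson-Hilferty approximation of the chi-squared quantile.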

The KD-tree is interesting in that it inserts into a discrete grid, tracking the total weight for each cell (used for cluster handling later on).

The grid size is 0.5 meters in x/y and 10 degrees in \(\theta\) (hardcoded).
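
So an insertion effectively computes a bucket key along these lines (a sketch; the sizes are the hardcoded ones mentioned above):

import math

def bucket_key(pose):
    # Discretise (x, y, theta) into buckets of 0.5 m and 10 degrees.
    return (math.floor(pose.x / 0.5),
            math.floor(pose.y / 0.5),
            math.floor(pose.theta / math.radians(10)))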

Clusters

AMCL identifies clusters in the sample sets; this appears to be used for the actual localisation, both to select the best candidate and to compute the covariance of the filter.

The clustering is quite simple, using the coarse grid of KD-tree buckets from the KLD sampling:

  1. Put all leaves in a queue.
  2. For each item in the queue:
       • If it is already in a cluster, skip to the next node.
       • Assign a cluster ID to the node (using a counter).
       • Check all \(3^3\) neighbour buckets to see if they are non-empty, and if so assign them to the same cluster, recursively. There is a slight bug here, however: the algorithm doesn’t handle wrap-around for the neighbours in \(\theta\) (see the sketch below).
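
For reference, a neighbour iteration that does handle the \(\theta\) wrap-around could look like this (my fix, not AMCL's code; it assumes the \(\theta\) index is normalised to [0, theta_buckets), e.g. 36 buckets of 10 degrees):

def neighbour_keys(key, theta_buckets=36):
    # All 3^3 keys (the bucket itself plus its 26 neighbours), wrapping
    # the theta index so buckets near -pi and +pi count as adjacent.
    x, y, t = key
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dt in (-1, 0, 1):
                yield (x + dx, y + dy, (t + dt) % theta_buckets)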

Then the code processes the clusters:

  1. Reset the statistics (mean, covariance, summed weight, particle count) for the clusters.
  2. Reset the overall filter statistics (as above).
  3. For each particle:
       • Get the cluster ID.
       • Update the statistics for both the total filter and that cluster.
  4. For each cluster, normalise the statistics: divide by the number of particles to get the mean, compute the final covariance and so on.
  5. Do the same for the overall filter statistics.

Later on, it will use the cluster with the highest total weight.

Overall node

For the ROS node itself, there appears to be some multi-threading going on, since a mutex is being used. However, it seems to only protect the configuration parameters, which may be related to the support for dynamic reconfiguration. Then again, some other ROS packages do not use mutexes to protect dynamic reconfiguration.

The laserReceived callback

The function of most interest would be laserReceived, where the actual localisation code is executed.

There appears to be some support for multiple lasers, as well as caching of the transforms related to this.

There is a flag, pf_init, to special-case the first iteration.

A very high level overview of the function is (with multi-laser code removed):

def laser_received(self, scan):
    if not has_map():
        return
    get_laser_transform()
    pose = get_odometry(scan.header.stamp)
    force_publication = False
    if self.pf_init:
        pose_delta = pose - self.prev_pose
        self.do_filter_update = pose_delta > threshold
    if not self.pf_init:
        # First iteration: initialise and force publication of a pose
        self.prev_pose = pose
        self.pf_init = True
        self.do_filter_update = True
        self.resample_count = 0
        force_publication = True
    elif self.pf_init and self.do_filter_update:
        run_odometry_model()

    resampled = False
    if self.do_filter_update:
        run_sensor_model()
        self.do_filter_update = False
        self.prev_pose = pose
        self.resample_count += 1
        if self.resample_count % self.resample_interval == 0:
            resample()
            resampled = True
        publish_cloud()
    if resampled or force_publication:
        # Compute single position hypothesis
        max_weight = 0
        for cluster in clusters:
            if cluster.weight > max_weight:
                best_cluster = cluster
                max_weight = cluster.weight
        estimated_pose = pose(best_cluster.mean, filter.covariance)
        publish_pose(estimated_pose)
        publish_transform(estimated_pose - odom_transform)
    else:
        republish_last_transform()

From this analysis it appears there is nothing particularly unexpected, but the code is complicated quite a bit by handling multiple lasers, lasers mounted upside down, and many other things.

AMCL does resampling unconditionally every resample_interval filter updates, by default 2.

AMCL bugs

I found some bugs in AMCL:

  1. Clustering doesn’t handle wrap-around from \(-\pi\) to \(\pi\) (or vice versa). (AMCL bug)
  2. Clustering is broken for cluster ID 0, since it uses 0 both for “not yet assigned” and “first cluster”. It should, however, work out in the end since that first cluster will just be written to twice, getting ID 1 (assuming it consists of more than one bucket).
  3. Incorrect computation of circular variance, it is actually computing circular standard deviation. (AMCL bug)
    Update (2019-04-14): Turns out this was not actually a bug. The two expressions are in fact equivalent.

Summary

There is a lot of smart code in AMCL, but also quite a lot of stupid code, and it is all pretty poorly documented. Parts are written in C and parts in C++. That is not unexpected for code with its roots in research, worked on by many people over the years.

I also found some bugs, mostly with minor impact. In addition to the bugs, several places use sub-optimal algorithms (as demonstrated above) or compute the same value more than once unnecessarily.

Oh, and by the way: I have published my own MCL implementation, called QuickMCL. It lacks some features that AMCL has, but uses fewer computational resources.