Extended Kalman Filter

Course Note SLAM (by Cyrill Stachniss)

Jihong Ju on October 4, 2018

Bayes Filter

SLAM is a state estimation problem: it estimates both the map and the robot pose. The Bayes filter is a tool for state estimation.

Recap: Bayes filter is a recursive filter with

  • Prediction step
  • Correction step
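
As a concrete illustration (not from the lecture; the grid world, transition matrix, and sensor likelihood below are made up), here is a minimal sketch of one Bayes filter recursion over a discrete state space in Python:

```python
import numpy as np

def bayes_filter_step(belief, transition, likelihood):
    """One recursion of the discrete Bayes filter.

    belief:     (N,) prior belief over N discrete states, sums to 1
    transition: (N, N) matrix, transition[j, i] = p(x_t = j | u_t, x_{t-1} = i)
    likelihood: (N,) vector, likelihood[j] = p(z_t | x_t = j)
    """
    predicted = transition @ belief     # prediction step: marginalize over the previous state
    corrected = likelihood * predicted  # correction step: weight by the measurement likelihood
    return corrected / corrected.sum()  # normalization (the eta term)

# Hypothetical example: a robot on 5 grid cells, commanded to move one cell to the right.
belief = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
transition = np.roll(np.eye(5), 1, axis=0)          # deterministic "move right" model
likelihood = np.array([0.1, 0.7, 0.1, 0.05, 0.05])  # sensor: "probably cell 1"
print(bayes_filter_step(belief, transition, likelihood))  # belief concentrates on cell 1
```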

Kalman Filter

  • Bayes Filter
  • Estimator for the linear Gaussian case
  • Optimal solution for linear models and Gaussian distributions

Distribution and properties

Everything is Gaussian:

$$ p(x) = \det(2\pi\Sigma)^{-\frac{1}{2}} \exp\left(-\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right) $$

Properties: Marginalization and Conditioning

Given a joint Gaussian over $ x = \begin{pmatrix} x_a \\ x_b \end{pmatrix} $,

$$ p(x) = \mathcal{N}\left( \begin{pmatrix} \mu_a \\ \mu_b \end{pmatrix}, \begin{pmatrix} \Sigma_{aa} & \Sigma_{ab} \\ \Sigma_{ba} & \Sigma_{bb} \end{pmatrix} \right) $$

The marginals are Gaussian:

$$ p(x_a) = \mathcal{N}(\mu_a, \Sigma_{aa}) $$

The conditionals are also Gaussian:

$$ p(x_a \mid x_b) = \mathcal{N}\left(\mu_a + \Sigma_{ab}\Sigma_{bb}^{-1}(x_b - \mu_b),\ \Sigma_{aa} - \Sigma_{ab}\Sigma_{bb}^{-1}\Sigma_{ba}\right) $$
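
A small numeric check of these two properties with NumPy (the joint mean and covariance are made-up values, with $ x_a $ and $ x_b $ both one-dimensional):

```python
import numpy as np

# Hypothetical joint Gaussian over (x_a, x_b).
mu = np.array([1.0, 2.0])          # (mu_a, mu_b)
Sigma = np.array([[2.0, 0.8],      # [[Sigma_aa, Sigma_ab],
                  [0.8, 1.0]])     #  [Sigma_ba, Sigma_bb]]

S_aa, S_ab = Sigma[0, 0], Sigma[0, 1]
S_ba, S_bb = Sigma[1, 0], Sigma[1, 1]

# Marginal of x_a: read off the corresponding blocks.
mu_a, var_a = mu[0], S_aa

# Conditional of x_a given an observed value of x_b.
x_b = 2.5
mu_a_given_b = mu[0] + S_ab / S_bb * (x_b - mu[1])
var_a_given_b = S_aa - S_ab / S_bb * S_ba

print(mu_a, var_a)                  # marginal:    N(1.0, 2.0)
print(mu_a_given_b, var_a_given_b)  # conditional: N(1.4, 1.36)
```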

Linear Models

  • Linear transition and observation model
  • Zero mean Gaussian noise

$ A_t $: Matrix $ (n \times n) $ describes how the state evolves from $ t-1 $ to $ t $ without controls or noise.

$ B_t $: Matrix $ (n \times l) $ describes how the control $ u_t $ changes the state from $ t-1 $ to $ t $.

$ C_t $: Matrix $ (k \times n) $ describes how to map the state $ x_t $ to an observation $ z_t $.

$ \epsilon_t $, $ \delta_t $ are independent random variables representing the noise with zero mean and covariance $ R_t $ and $ Q_t $ respectively.

Linear Motion Model

$$ x_t = A_t x_{t-1} + B_t u_t + \epsilon_t $$

Linear Observation Model

$$ z_t = C_t x_t + \delta_t $$
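
As a concrete (hypothetical, not from the notes) instance of $ A_t $, $ B_t $, $ C_t $: a 1D robot with state $ x_t = (p_t, v_t)^T $ (position and velocity), an acceleration command $ u_t $, and a sensor that measures only the position:

```python
import numpy as np

dt = 0.1  # assumed time step

# x_t = A x_{t-1} + B u_t + eps_t,   z_t = C x_t + delta_t
A = np.array([[1.0, dt],
              [0.0, 1.0]])    # position integrates velocity, velocity persists
B = np.array([[0.5 * dt**2],
              [dt]])          # acceleration changes position and velocity
C = np.array([[1.0, 0.0]])    # only the position is observed
```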

Everything stays Gaussian

Given an initial Gaussian belief, the belief is always Gaussian

  • Prediction step: $ \overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1}) \, bel(x_{t-1}) \, dx_{t-1} $
  • Correction step: $ bel(x_t) = \eta \, p(z_t \mid x_t) \, \overline{bel}(x_t) $

Only linear operations are involved, and applying a linear operation to a Gaussian yields a Gaussian.

(Proof see Probabilistic Robotics, Sec. 3.2.4)
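
This can also be checked numerically: if $ x \sim \mathcal{N}(\mu, \Sigma) $ and $ y = Ax + b $, then $ y \sim \mathcal{N}(A\mu + b, A \Sigma A^T) $. A Monte Carlo sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
A = np.array([[2.0, 0.0],
              [1.0, 1.0]])
b = np.array([0.5, 0.0])

x = rng.multivariate_normal(mu, Sigma, size=200_000)
y = x @ A.T + b  # apply the linear operation to every sample

print(y.mean(axis=0), A @ mu + b)     # sample mean       vs. A mu + b
print(np.cov(y.T), A @ Sigma @ A.T)   # sample covariance vs. A Sigma A^T
```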

Kalman Filter Algorithm

Given $ \mu_{t-1}, \Sigma_{t-1}, u_t, z_t $

Prediction step:

$$ \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t $$
$$ \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t $$

Correction step:

$$ K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1} $$
$$ \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t) $$
$$ \Sigma_t = (I - K_t C_t) \bar{\Sigma}_t $$

Return $ \mu_t, \Sigma_t $

  • The product of Gaussians is intuitively a weighted mean, so the correction is essentially a weighted mean of prediction and measurement. The weights depend on the measurement noise $Q_t$.
  • If $Q_t \rightarrow \infty$, then the Kalman gain $K_t \rightarrow 0$, $\mu_t \rightarrow \bar{\mu}_t$, $\Sigma_t \rightarrow \bar{\Sigma}_t$: no correction of the prediction based on the measurement.
  • If $Q_t \rightarrow 0$, then the Kalman gain $K_t \rightarrow C_t^{-1}$, $\mu_t \rightarrow C_t^{-1} z_t$, $\Sigma_t \rightarrow 0$: the measurement completely overrides the prediction.
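
The algorithm translates almost line by line into code. A minimal NumPy sketch of one Kalman filter step (the model matrices are passed in as arguments; no particular system is assumed):

```python
import numpy as np

def kalman_filter_step(mu, Sigma, u, z, A, B, C, R, Q):
    """One prediction + correction step of the linear Kalman filter."""
    # Prediction step
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R

    # Correction step
    S = C @ Sigma_bar @ C.T + Q              # innovation covariance
    K = Sigma_bar @ C.T @ np.linalg.inv(S)   # Kalman gain
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```

With the 1D constant-velocity matrices sketched earlier, repeated calls to this function track position and velocity from noisy position measurements.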

Extended Kalman Filter

Extend the Kalman filter to nonlinear models by local linearization with a first-order Taylor expansion.

Non-linear Dynamic Systems

Most real problems involve nonlinear transition and observation models:

$$ x_t = g(u_t, x_{t-1}) + \epsilon_t $$
$$ z_t = h(x_t) + \delta_t $$

[Figure: a Gaussian passed through a linear function vs. a nonlinear function]
  • Applying a linear function to a Gaussian yields a Gaussian; the Kalman filter is built on this.
  • Applying a nonlinear function to a Gaussian does not, in general, yield a Gaussian, so the Kalman filter is no longer applicable.

To resolve this: Local linearization.

EKF Linearization

With a first-order Taylor expansion,

Prediction:

$$ g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}} (x_{t-1} - \mu_{t-1}) $$

We denote the Jacobian matrix $ \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}} $ as $ G_t $.

Correction:

$$ h(x_t) \approx h(\bar{\mu}_t) + \frac{\partial h(\bar{\mu}_t)}{\partial x_t} (x_t - \bar{\mu}_t) $$

We denote the Jacobian matrix $ \frac{\partial h(\bar{\mu}_t)}{\partial x_t} $ as $ H_t $.

Recall that the Jacobian can be seen as the tangent plane of a vector-valued function's landscape.

[Figure: local linearization of a nonlinear function around the mean]

Linearized motion

$$ p(x_t \mid u_t, x_{t-1}) \approx \mathcal{N}\big(g(u_t, \mu_{t-1}) + G_t (x_{t-1} - \mu_{t-1}),\ R_t\big) $$

where $ g(u_t, \mu_{t-1}) + G_t (x_{t-1} - \mu_{t-1}) $ is the linearized motion model and $ R_t $ is the covariance of the motion noise.

Linearized observation

$$ p(z_t \mid x_t) \approx \mathcal{N}\big(h(\bar{\mu}_t) + H_t (x_t - \bar{\mu}_t),\ Q_t\big) $$

where $ h(\bar{\mu}_t) + H_t (x_t - \bar{\mu}_t) $ is the linearized observation model and $ Q_t $ describes the measurement noise.
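
For illustration only (this particular model is not part of the notes), here are the analytic Jacobians $ G_t $ and $ H_t $ for a hypothetical unicycle motion model and a range-bearing observation of a known landmark:

```python
import numpy as np

def motion_model(x, u, dt):
    """Hypothetical unicycle motion g(u_t, x_{t-1}); state x = (px, py, theta), control u = (v, w)."""
    px, py, theta = x
    v, w = u
    return np.array([px + v * dt * np.cos(theta),
                     py + v * dt * np.sin(theta),
                     theta + w * dt])

def motion_jacobian(x, u, dt):
    """G_t = d g / d x, evaluated at the previous mean."""
    _, _, theta = x
    v, _ = u
    return np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                     [0.0, 1.0,  v * dt * np.cos(theta)],
                     [0.0, 0.0,  1.0]])

def observation_model(x, landmark):
    """Hypothetical range-bearing observation h(x_t) of a known landmark."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    return np.array([np.hypot(dx, dy),
                     np.arctan2(dy, dx) - x[2]])

def observation_jacobian(x, landmark):
    """H_t = d h / d x, evaluated at the predicted mean."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    return np.array([[-dx / r, -dy / r,  0.0],
                     [ dy / q, -dx / q, -1.0]])
```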

Extended Kalman Filter Algorithm

Given $ \mu_{t-1}, \Sigma_{t-1}, u_t, z_t $

Prediction step:

$$ \bar{\mu}_t = g(u_t, \mu_{t-1}) $$
$$ \bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + R_t $$

Correction step:

$$ K_t = \bar{\Sigma}_t H_t^T (H_t \bar{\Sigma}_t H_t^T + Q_t)^{-1} $$
$$ \mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t)) $$
$$ \Sigma_t = (I - K_t H_t) \bar{\Sigma}_t $$

Return $ \mu_t, \Sigma_t $
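
A minimal sketch of one EKF step, mirroring the linear case but with the nonlinear models and their Jacobians passed in as functions (for example the hypothetical unicycle and range-bearing models above, with the time step and landmark position bound via closures):

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G_fn, h, H_fn, R, Q):
    """One prediction + correction step of the extended Kalman filter.

    g(mu, u)    -> predicted mean,          G_fn(mu, u)   -> Jacobian of g at mu
    h(mu_bar)   -> predicted observation,   H_fn(mu_bar)  -> Jacobian of h at mu_bar
    """
    # Prediction step: propagate the mean through g, the covariance through the Jacobian G_t
    mu_bar = g(mu, u)
    G = G_fn(mu, u)
    Sigma_bar = G @ Sigma @ G.T + R

    # Correction step: Kalman gain built from the observation Jacobian H_t
    H = H_fn(mu_bar)
    S = H @ Sigma_bar @ H.T + Q
    K = Sigma_bar @ H.T @ np.linalg.inv(S)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```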

KF vs. EKF

The Jacobians $ G_t $ and $ H_t $ take the place of $ A_t $ and $ C_t $ and need to be re-computed at each step, since the linearization point changes.

  • Works well in practice with moderate non-linearities
  • Large uncertainty leads to increased approximation error
  • Complexity: $ O(k^{2.4} + n^2) $, where $ k $ is the dimension of the measurement vector and $ n $ is the dimension of the state

Literature

  • Probabilistic Robotics, Chapter 3
  • Schön and Lindsten, “Manipulating the Multivariate Gaussian Density”
  • Welch and Bishop, “Kalman Filter Tutorial”