The Bayes Filter is a form of State Estimator.

The idea of a state estimator is that, given known actions $u$ and observations $z$, we want to produce the best estimate we have of the state $x$. This estimate is represented by a probability distribution known as the belief state $bel(x)$.

Or put more simply,

Inputs:

- Observation/Measurement ($z$)
- Action ($u$) - a force/torque applied by the agent to act on the environment

Output:

- Belief ($bel(x)$) - a probability distribution over the estimated true state
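As a rough sketch of this input/output contract (a hypothetical Python interface; the class and method names are mine, assuming a discrete state space represented as a probability vector):

```python
from typing import Protocol
import numpy as np

class StateEstimator(Protocol):
    """Hypothetical interface for a state estimator: each step consumes an
    action u and an observation z, and produces the belief bel(x) as a
    probability distribution over all possible states."""

    def update(self, u: int, z: int) -> np.ndarray:
        """Returns a probability vector: bel[i] = P(x = i | history so far)."""
        ...
```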

Belief state

aka Posterior/Information state/State of knowledge

Our belief is a probability distribution over all possible states, and this probabilistic representation captures the stochasticity/noise of actions and sensors.

Ideally, given the true state $x^*$, $bel(x^*) = 1$.
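As a concrete toy example (a hypothetical illustration, assuming a discrete state space), a belief is just a normalized vector of probabilities, and the ideal belief puts all of its mass on the true state:

```python
import numpy as np

n_states = 5                             # hypothetical state space {0, ..., 4}
bel = np.full(n_states, 1.0 / n_states)  # uniform belief: maximum uncertainty
assert np.isclose(bel.sum(), 1.0)        # every belief must sum to 1

x_true = 2                               # suppose this is the true state x*
ideal = np.zeros(n_states)
ideal[x_true] = 1.0                      # ideal belief: bel(x*) = 1
```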

To determine the belief state, we condition on the entire history of observations $z$, actions $u$, and past states. Or more formally,

<aside> 👉 $bel(x_t) = P(x_t|z_{1:t}, u_{1:t}, x_{1:t-1})$

</aside>
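To see how quickly this conditioning set grows, write out the first few steps (following the formula above):

$$bel(x_1) = P(x_1 \mid z_1, u_1), \qquad bel(x_2) = P(x_2 \mid z_{1:2}, u_{1:2}, x_1), \qquad bel(x_3) = P(x_3 \mid z_{1:3}, u_{1:3}, x_{1:2})$$

Each time step adds another observation, action, and past state to the conditioning set.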


Except that this is intractable, especially in high-dimensional systems where $z_t, u_t \in \mathbb{R}^n$ with $n > 1$, or as $t \rightarrow \infty$.

Enter the Markov Assumption,

<aside> 💡 The future state $x_t$ is conditionally independent of past measurements $z_{1:t-1}$ and actions $u_{1:t-1}$ given the present state $x_{t-1}$!

</aside>
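In the notation above, a standard way to state this formally (this exact form is my assumption, following the usual probabilistic-robotics convention) is:

$$P(x_t \mid x_{1:t-1}, z_{1:t-1}, u_{1:t}) = P(x_t \mid x_{t-1}, u_t)$$

This is what makes a recursive filter tractable: the new belief can be computed from the previous belief alone. Below is a minimal sketch of one such recursive update over a discrete state space; the function name, the toy motion model $P(x_t \mid x_{t-1}, u_t)$, sensor model $P(z_t \mid x_t)$, and all numbers are illustrative assumptions, not from the text:

```python
import numpy as np

def bayes_filter_step(bel, u, z, motion_model, sensor_model):
    """One recursive belief update. Thanks to the Markov assumption, it
    needs only the previous belief, not the full history of z's and u's."""
    # Prediction: push the belief through the motion model for action u.
    # bel_bar(x_t) = sum over x_{t-1} of P(x_t | x_{t-1}, u) * bel(x_{t-1})
    bel_bar = motion_model[u] @ bel
    # Correction: weight by the likelihood of the observation z,
    # then normalize so the result is again a probability distribution.
    bel_new = sensor_model[z] * bel_bar
    return bel_new / bel_new.sum()

# Toy models over 3 states with one action and a binary observation.
# motion_model[u][i, j] = P(x_t = i | x_{t-1} = j, u); columns sum to 1.
motion_model = {
    0: np.array([[0.1, 0.0, 0.1],
                 [0.8, 0.2, 0.1],
                 [0.1, 0.8, 0.8]]),
}
# sensor_model[z][i] = P(z | x = i)
sensor_model = {
    0: np.array([0.9, 0.2, 0.2]),
    1: np.array([0.1, 0.8, 0.8]),
}

bel = np.full(3, 1.0 / 3.0)  # uniform prior belief
bel = bayes_filter_step(bel, u=0, z=1,
                        motion_model=motion_model, sensor_model=sensor_model)
print(bel)                   # updated belief bel(x_1)
```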

From conditional independence, we know that