hku-mars/IKFoM

Question about the observation model

unageek opened this issue · 5 comments

Could you help me understand the error-state Kalman filter in your paper? According to (33), the observation model (the likelihood) is defined as $p(\mathbf{D}_{k+1}^j \mathbf{v}_{k+1} \mid \delta\mathbf{x}_j)$. Why is the left-hand side of the bar $\mathbf{D}_{k+1}^j \mathbf{v}_{k+1}$ rather than $\mathbf{z}_{k+1}$?

Yes, $\mathbf{D}_{k+1}^j \mathbf{v}_{k+1}$ is the linearized noise of $\mathbf{z}_{k+1}$.

Hmm... let me ask from a different perspective. The MAP estimation reduces to (37):

$$ฮด๐ฑ_j^o = \arg\min_{ฮด๐ฑ_j} โ€–๐ซ_{k+1}^j - ๐‡_{k+1}^j ฮด๐ฑ_jโ€–_{\bar{\mathcal R}_{k+1}}^2 + \text{(the prior term)}.$$

Intuitively, we want to find the optimal $\delta\mathbf{x}_j$ that minimizes the magnitude of the residual $\mathbf{r}_{k+1}^j$. So shouldn't the first term be just $\left\|\mathbf{r}_{k+1}^j\right\|_{\bar{\mathcal{R}}_{k+1}}^2$? Apparently, I am missing something.

We want to find the optimal $\delta\mathbf{x}_j$ that minimizes the magnitude of the residual $\mathbf{r}_{k+1}$; therefore, $\delta\mathbf{x}_j$ must appear in the MAP formulation, and the relationship between $\delta\mathbf{x}_j$ and the residual $\mathbf{r}_{k+1}$ must also be expressed. The cost $\|\mathbf{r}_{k+1}\|^2$ containing only $\mathbf{r}_{k+1}$ is meaningless, since it does not depend on the variable being optimized.
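To make the role of $\delta\mathbf{x}_j$ concrete, here is a minimal numerical sketch (not the paper's implementation; the random matrices and the simplified zero-mean prior $\|\delta\mathbf{x}\|_{\mathbf{P}}^2$ are assumptions for illustration) showing that minimizing $\|\mathbf{r} - \mathbf{H}\delta\mathbf{x}\|_{\mathbf{R}}^2 + \|\delta\mathbf{x}\|_{\mathbf{P}}^2$ recovers the familiar Kalman-gain update:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2  # state-error and measurement dimensions (arbitrary)

H = rng.standard_normal((m, n))                  # measurement Jacobian
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)                      # prior covariance (SPD)
B = rng.standard_normal((m, m))
R = B @ B.T + m * np.eye(m)                      # measurement-noise covariance (SPD)
r = rng.standard_normal(m)                       # residual

# MAP estimate: normal equations of ||r - H dx||_R^2 + ||dx||_P^2,
# where ||v||_A^2 = v^T A^{-1} v.
dx_map = np.linalg.solve(
    H.T @ np.linalg.inv(R) @ H + np.linalg.inv(P),
    H.T @ np.linalg.inv(R) @ r,
)

# Standard Kalman-filter form of the same update.
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
dx_kf = K @ r

print(np.allclose(dx_map, dx_kf))                # the two forms agree
```

The equivalence of the two expressions is the matrix-inversion-lemma identity that connects the optimization view of the update with the filtering view.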

OK, but then the posterior distribution is $p(\delta\mathbf{x}_j \mid \mathbf{D}_{k+1}^j \mathbf{v}_{k+1})$ by Bayes' theorem, right? It does not make sense to me, since it does not contain the actual measurement $\mathbf{z}_{k+1}$.

I have obtained (37) by maximizing the posterior probability

$$p(๐ฑ_{k+1}โˆฃ๐ณ_{k+1},โ€ฆ) = p(๐ณ_{k+1}โˆฃ๐ฑ_{k+1},โ€ฆ) p(๐ฑ_{k+1}โˆฃโ€ฆ)$$

with respect to $\mathbf{x}_{k+1}$. Let me close this issue. Thank you for your assistance!
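For completeness, under Gaussian noise assumptions the negative log of that posterior is, up to a constant, exactly the quadratic cost in (37) — a sketch, assuming the residual has been linearized as $\mathbf{r}_{k+1}^j - \mathbf{H}_{k+1}^j \delta\mathbf{x}_j$:

$$-\log p(\mathbf{x}_{k+1} \mid \mathbf{z}_{k+1}, \ldots) = \tfrac{1}{2}\left\|\mathbf{r}_{k+1}^j - \mathbf{H}_{k+1}^j \delta\mathbf{x}_j\right\|_{\bar{\mathcal{R}}_{k+1}}^2 + \tfrac{1}{2}\,\text{(the prior term)} + \text{const},$$

so maximizing the posterior over $\mathbf{x}_{k+1}$ is the same as minimizing the cost over $\delta\mathbf{x}_j$ (the factor $\tfrac{1}{2}$ does not affect the minimizer).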