jelfring/particle-filter-tutorial

Calculating likelihood is unused

Closed this issue · 2 comments

After a particle is propagated in the adaptive_particle_filter_kld update step, its importance_weight is calculated:

https://github.com/jelfring/particle-filter-tutorial/blob/master/core/particle_filters/adaptive_particle_filter_kld.py#L87

and stored with the propagated sample, but then all of the weights are overwritten when they're normalized at the end of the update step. So the importance weight is calculated, stored, and then overwritten before ever being used. Just trying to figure out if a step got missed or if I'm missing something.
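
Here's a self-contained sketch of the pattern I'm describing (the function names and models below are placeholders I made up for illustration, not the actual code in the repo):

```python
import math
import random

def propagate_sample(state, control, noise_std=0.1):
    # placeholder 1D motion model: shift the state by the control input plus noise
    return state + control + random.gauss(0.0, noise_std)

def compute_likelihood(state, measurement, meas_std=0.2):
    # placeholder measurement model: Gaussian likelihood of the measurement given the state
    return math.exp(-0.5 * ((measurement - state) / meas_std) ** 2)

# particles are (weight, state) pairs
particles = [(1.0 / 3, 0.0), (1.0 / 3, 0.5), (1.0 / 3, 1.0)]
control, measurement = 0.4, 0.45

new_particles = []
for weight, state in particles:
    propagated = propagate_sample(state, control)
    importance_weight = weight * compute_likelihood(propagated, measurement)  # computed and stored here...
    new_particles.append((importance_weight, propagated))

# ...and then every weight is rescaled at the end of the update step
total = sum(w for w, _ in new_particles)
particles = [(w / total, s) for w, s in new_particles]
print(particles)
```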

The paper and this repo are awesome. I'm in the middle of translating this repo to C++ (mainly for my own edification): https://github.com/jasonbeach/particle-filter-tutorial-cpp. It's still definitely a work in progress (I'll add a readme and point back to this repo and paper when it's more complete). Thanks for this, though.

Imagine a simple example with three particles, and imagine the weights that are stored in the step you are referring to are 0.3, 0.4 and 0.5. Then, the normalization step changes the three weights to 0.3/(0.3+0.4+0.5)=0.25, 0.4/(0.3+0.4+0.5)=0.33 and 0.5/(0.3+0.4+0.5)=0.42, such that they sum up to one.

Normalization is not the same as setting uniform weights. In other words, normalization does not imply reinitializing all weights to the same value (0.33 in this example with three particles), which is why the weights you are referring to are used. Does that make sense?

In case the code does not work as just described, I might have overlooked a bug. Please let me know in that case.
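
To make the distinction concrete, here is a minimal sketch using the three weights from the example above (illustrative code, not a snippet from the repository):

```python
weights = [0.3, 0.4, 0.5]  # importance weights stored during the update step

# Normalization: rescale so the weights sum to one while keeping their relative sizes
total = sum(weights)
normalized = [w / total for w in weights]
print(normalized)  # [0.25, 0.333..., 0.416...] -- the measurement information is preserved

# Reinitializing to uniform weights would instead discard that information
uniform = [1.0 / len(weights)] * len(weights)
print(uniform)     # [0.333..., 0.333..., 0.333...] -- this is NOT what the normalization step does
```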

Thanks and good luck with your code!

Omigosh, brain fart... yes, that makes perfect sense... just like normalizing a vector. Thanks for your response.