KumarRobotics/msckf_vio

The question about the feature initialization

Closed this issue · 3 comments

Hi. I have tested your msckf work and it works perfectly on the EuRoC dataset. Your algorithm seems to initialize only those features whose tracking has been lost.

My question is: what if the features in the scene are perfectly tracked for a long time? Then there seems to be no way to correct the IMU until one of the features is lost. (For example, when you are stationary and the features in the scene are 100% tracked, no features are ever initialized, which leads to IMU divergence.)

If I'm misunderstanding your code, may I ask for an explanation?

Thank you for reading!

There are two situations in which feature measurements are used to update the state:

  1. As you said, all measurements of a feature are used when it loses track.
  2. When a camera state needs to be marginalized, all measurements obtained at this camera state are used to update the state.

Since the camera states are marginalized at a constant rate independent of the motion, the state will always be updated with feature measurements.
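Here is a minimal sketch of those two triggers. The type and variable names (`Feature`, `map_server`, `cam_states`, etc.) are placeholders for illustration only, not the actual msckf_vio classes:

```cpp
#include <cstdio>
#include <map>
#include <set>
#include <vector>

// Hypothetical ids standing in for the real feature/state id types.
using FeatureId = unsigned long long;
using CamStateId = unsigned long long;

struct Feature {
  FeatureId id;
  std::vector<CamStateId> observations;  // camera states that saw this feature
  bool tracked_in_latest_frame;          // false once the tracker loses it
};

int main() {
  std::map<FeatureId, Feature> map_server;         // all tracked features
  std::set<CamStateId> cam_states = {0, 1, 2, 3};  // sliding window of poses

  // Two toy features: #7 lost track, #8 is still tracked but was observed
  // at camera state 0, which is about to be marginalized.
  map_server[7] = {7, {0, 1, 2}, false};
  map_server[8] = {8, {0, 1, 2, 3}, true};

  // Trigger 1: the feature lost track -> use all of its measurements now.
  for (auto& [id, f] : map_server) {
    if (!f.tracked_in_latest_frame) {
      std::printf("update with all measurements of lost feature %llu\n", id);
    }
  }

  // Trigger 2: a camera state is marginalized (at a fixed rate, regardless
  // of motion) -> use every measurement taken at that state.
  CamStateId oldest = *cam_states.begin();
  for (auto& [id, f] : map_server) {
    for (CamStateId c : f.observations) {
      if (c == oldest) {
        std::printf("update with feature %llu observed at state %llu\n",
                    id, oldest);
      }
    }
  }
  return 0;
}
```

Because of the second trigger, even a perfectly tracked, stationary scene still produces filter updates at a regular rate.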

Oh! Thank you for the reply!! Thanks!
To understand better, I have an additional question!

Is there any reason you designed the algorithm as separate parts (feature extraction and msckf)?

I think you intended this for parallel computation, but since the msckf has to wait for the next features to arrive, the algorithm seems to run in series.

I'm sure there is some reason why you designed it this way. I appreciate your response! Thanks!

P.S. What will happen if the feature extractor and the msckf run in a single callback?

You are correct. Running them as two different ROS nodes makes them execute in parallel. One of the advantages is that latency is hidden by overlapping the execution of feature detection and filtering: while the filter is processing feature measurements from time t, the feature detector is already detecting features for time t+1.
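A rough way to picture the overlap, using plain std::thread and a queue as a stand-in for the ROS topic between the two nodes (this is only an illustration of the pipelining, not the actual node code):

```cpp
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

std::queue<int> feature_queue;  // stands in for the feature topic between nodes
std::mutex mtx;
std::condition_variable cv;
bool done = false;

// Stand-in for the feature-detection node.
void feature_detector() {
  for (int frame = 0; frame < 5; ++frame) {
    std::this_thread::sleep_for(std::chrono::milliseconds(10));  // "detect"
    {
      std::lock_guard<std::mutex> lk(mtx);
      feature_queue.push(frame);  // "publish" features for this frame
    }
    cv.notify_one();
  }
  {
    std::lock_guard<std::mutex> lk(mtx);
    done = true;
  }
  cv.notify_one();
}

// Stand-in for the filter node.
void msckf_filter() {
  while (true) {
    std::unique_lock<std::mutex> lk(mtx);
    cv.wait(lk, [] { return !feature_queue.empty() || done; });
    if (feature_queue.empty()) break;
    int frame = feature_queue.front();
    feature_queue.pop();
    lk.unlock();
    std::this_thread::sleep_for(std::chrono::milliseconds(10));  // "filter update"
    std::printf("filtered frame %d while the detector works on the next one\n",
                frame);
  }
}

int main() {
  std::thread det(feature_detector), filt(msckf_filter);
  det.join();
  filt.join();
  return 0;
}
```

If both stages ran in a single callback, each frame would pay the full cost of detection plus filtering before the next image could be handled; with the two-node pipeline, the per-frame latency is roughly the slower of the two stages instead of their sum.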