EIBA

Efficient Incremental BA

This source code provides the efficient incremental bundle adjustment implementation, which is part of our RKD-SLAM.

1. Introduction

We present RKD-SLAM, a robust keyframe-based dense SLAM approach for an RGB-D camera that can robustly handle fast motion and dense loop closure, and can run without time limitation in a moderate-size scene. To reduce accumulated error, we also introduce a very efficient incremental bundle adjustment (EIBA) algorithm, which provides nearly the same solution as global BA but with significantly less computation time; the cost is proportional to the number of variables that actually change.

2. Related Publications

Haomin Liu, Chen Li, Guojun Chen, Guofeng Zhang, Michael Kaess, and Hujun Bao. Robust Keyframe-based Dense SLAM with an RGB-D Camera. arXiv preprint arXiv:1711.05166, 2017.

3. License

EIBA is released under the Apache License 2.0. Please contact Guofeng Zhang if you have any questions.

If you use this source code for your academic publication, please cite our paper:

@article{LiuLCZKB2017,
  title={Robust Keyframe-based Dense SLAM with an RGB-D Camera},
  author={Haomin Liu and Chen Li and Guojun Chen and Guofeng Zhang and Michael Kaess and Hujun Bao},
  journal={arXiv preprint arXiv:1711.05166},
  year={2017}
}

4. Installation

Dependencies

git clone https://github.com/jbeder/yaml-cpp.git
cd yaml-cpp
mkdir build && cd build
cmake ..
make -j4
sudo make install

Environment

The project has been tested on Ubuntu 16.04.

You can build and run the example:

cd /path/to/this/project
mkdir build && cd build
cmake ..
make -j4
./ExampleYAML ../Data/

5. Usage

We use an inverse depth parameterization for 3D points: each feature point is parameterized by its inverse depth together with the camera pose of the keyframe from which it was first observed. These first observations are called source features. In our program, we only optimize camera poses and the inverse depths of source features.
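To make the parameterization concrete, here is a minimal self-contained sketch (illustrative only, not part of the EIBA API) that recovers the world-frame 3D point of a source feature from its z = 1 observation and inverse depth, assuming the X_c = R * (X_w - p) pose convention described in the Notes section below.

#include <array>
#include <cstdio>

// Illustrative types; the real EIBA code has its own representations.
using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;  // row-major rotation matrix

// A source feature observed at (x, y) on the z = 1 plane of its source
// keyframe with inverse depth rho corresponds to the camera-frame point
//   X_c = (1 / rho) * [x, y, 1]^T
// and, with the X_c = R * (X_w - p) convention, the world-frame point
//   X_w = R^T * X_c + p.
Vec3 SourcePointToWorld(double x, double y, double rho,
                        const Mat3& R, const Vec3& p) {
  const Vec3 Xc = {x / rho, y / rho, 1.0 / rho};
  Vec3 Xw;
  for (int i = 0; i < 3; ++i)  // X_w = R^T * X_c + p
    Xw[i] = R[0][i] * Xc[0] + R[1][i] * Xc[1] + R[2][i] * Xc[2] + p[i];
  return Xw;
}

int main() {
  const Mat3 R = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};  // identity source pose
  const Vec3 p = {0, 0, 0};
  // Values taken from the YAML example below: point (0.2, -0.08), inverse depth 0.3.
  const Vec3 Xw = SourcePointToWorld(0.2, -0.08, 0.3, R, p);
  std::printf("world point: %.3f %.3f %.3f\n", Xw[0], Xw[1], Xw[2]);
  return 0;
}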

We provide an interface class BAInterface, which can be used to push feature measurements with PushKeyframeFeatures(*) and frame constraints with PushBetweenFrameConstraint(*). You can then call Optimize(*) to run the optimization and obtain the optimized camera poses and inverse depths.

In addition, we provide a function PushFrameInfoFromYAML(*), which can be used to load our example YAML-format data files and push the data into the BA system.

PushKeyframeFeatures(*)

  • Call this to push feature observations
    • Initial guess of the current frame camera pose (rotation and position)
    • Initial guess of the last frame source features' inverse depths
    • Current frame source features (2D feature measurements)
    • Measured features (matches) and covariances
  • The correspondence between a measured feature and its matched source feature is given by the matched keyframe index (global) and the matched source feature index (local within its keyframe); see the sketch below.
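As a concrete illustration of this indexing (the struct and container names below are hypothetical and exist only for this sketch), a measured feature addresses its source feature by a global keyframe index and a local feature index:

#include <cstdio>
#include <vector>

// Hypothetical containers for illustration; BAInterface keeps its own bookkeeping.
struct SourceFeature { double x, y, inv_depth; };  // point on z = 1 plane + inverse depth
struct MeasuredFeature {
  int kf_idx;    // global index of the keyframe that first observed the point
  int ftr_idx;   // local index of the source feature inside that keyframe
  double x, y;   // measurement on the z = 1 plane in the current frame
};

int main() {
  // One vector of source features per keyframe, indexed globally by keyframe.
  std::vector<std::vector<SourceFeature>> source_features_per_kf = {
      {{0.20, -0.08, 0.30}},                        // keyframe 0
      {{0.33, -0.42, 0.20}, {0.10, 0.05, 0.25}}};   // keyframe 1

  const MeasuredFeature m{0, 0, 0.33, -0.42};  // re-observation of keyframe 0, feature 0
  const SourceFeature& src = source_features_per_kf[m.kf_idx][m.ftr_idx];
  std::printf("matched source feature: (%.2f, %.2f), inverse depth %.2f\n",
              src.x, src.y, src.inv_depth);
  return 0;
}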

PushBetweenFrameConstraint(*)

  • Call this to push frame constraints

    • the two keyframe indices of the constraint

    • the relative pose between the two keyframes and its covariance

  • The function minimizes the discrepancy between the measured relative pose and the relative pose implied by the two keyframe poses, weighted by the given covariance (a plausible form is sketched below).
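The exact residual convention follows the code; written here only as an assumption for orientation, the energy of one constraint is the Mahalanobis norm of a 6-dimensional relative-pose error weighted by the inverse of the supplied covariance cov6:

\[
E_{ij} \;=\; r_{ij}^{\top}\,\mathrm{cov6}^{-1}\,r_{ij},
\qquad
r_{ij} \;=\; \operatorname{err}\!\left(\hat{T}_{ij},\; T_i \ominus T_j\right) \in \mathbb{R}^{6},
\]

where \(\hat{T}_{ij}\) is the measured relative pose (relative_pose_Rp), \(T_i \ominus T_j\) is the relative pose computed from the current estimates of the two keyframe poses, and \(r_{ij}\) stacks a 3-vector rotation error and a 3-vector position error.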

Optimize(optKFs)

  • Call this to run EIBA optimization
  • The optimized result will be stored in optKFs
    • Optimized keyframe poses and inverse depths
    • If a keyframe pose or an inverse depth has been updated by EIBA, the corresponding boolean flag is set to true.

SetParams(*)

  • Call this to set optimization parameters
  • Please refer to the code comments for more information.

Notes

  • We use a rotation and position model for input and output camera poses, i.e. X_c = R * (X_w - p), where X_w is the point in the world frame and X_c is the point in the camera frame.
  • All 2D features should be undistorted and normalized to the z = 1 plane.
  • If the measured depth is invalid, set the corresponding source observation's inverse depth source_features[i].m_inv_depth to 0 (see the sketch below).
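For reference, here is a minimal sketch of preparing one measurement: an undistorted pixel is normalized to the z = 1 plane with pinhole intrinsics, and the RGB-D depth reading is converted to inverse depth, with 0 used for invalid depths. The intrinsic values are illustrative only, not taken from the example data.

#include <cstdio>

// Normalized feature on the z = 1 plane plus its inverse depth (0 = invalid depth).
struct NormalizedFeature { double x, y, inv_depth; };

NormalizedFeature Normalize(double u, double v, double depth,
                            double fx, double fy, double cx, double cy) {
  NormalizedFeature f;
  f.x = (u - cx) / fx;   // z = 1 plane coordinates from an undistorted pixel
  f.y = (v - cy) / fy;
  f.inv_depth = depth > 0.0 ? 1.0 / depth : 0.0;  // invalid depth -> 0
  return f;
}

int main() {
  // Illustrative pinhole intrinsics (fx, fy, cx, cy) and a 2.5 m depth reading.
  const NormalizedFeature f = Normalize(400.0, 300.0, 2.5, 525.0, 525.0, 319.5, 239.5);
  std::printf("x=%.4f y=%.4f inv_depth=%.4f\n", f.x, f.y, f.inv_depth);
  return 0;
}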

Example

We provide an example, ExampleYAML.cpp, which calls EIBA with YAML-format frame data generated by the tracking part of RKD-SLAM. The example data is recorded from the TUM RGB-D sequence fr3/long_office_household and contains 93 keyframes.

Here is a simplified example of YAML data format:

# keyframe feature measurement
features:
  # initial guess
  initial_guess:
    # current frame camera pose (rotation and position)
    current_camera_pose_guess:
      - [1, 0, 0, 0]
      - [0, 1, 0, 0]
      - [0, 0, 1, 0]
    # last frame inverse depth guess
    last_inv_depth_guess:
      - [0.5]
  # source features (first observation of features)
  # point2D (normalized in z=1 plane) and inverse depth
  source_features:
    - [0.2, -0.08, 0.3]
  # measured feature matches
  # matched source feature index is indicated by its keyframe index, and source feature index
  # kf_idx: matched keyframe index (globally)
  # ftr_idx: matched source feature index (locally)
  # pt: point2D (normalized in z=1 plane) and inverse depth
  # cov2: covariance
  measured_features:
    - kf_idx: 0
      ftr_idx: 15
      pt: [0.33, -0.42, 0.2]
      cov2:
        - [280000, 600]
        - [600, 280000]
# between frame constraints
# see README.md for more information
# keyframe_index_1/2: two indices of frame constraint
# relative_pose_Rp: T_{12}
# cov6: covariance
frame_constraints:
  - keyframe_index_1: 0
    keyframe_index_2: 1
    relative_pose_Rp:
      - [1, 0, 0, 0]
      - [0, 1, 0, 0]
      - [0, 0, 1, 0]
    cov6:
      - [3000, 0, 0, 0, 0, 0]
      - [0, 3000, 0, 0, 0, 0]
      - [0, 0, 3000, 0, 0, 0]
      - [0, 0, 0, 100, 0, 0]
      - [0, 0, 0, 0, 100, 0]
      - [0, 0, 0, 0, 0, 100]
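For orientation only, here is a hedged sketch of reading the simplified layout above with yaml-cpp (the file name is hypothetical); in practice, the provided PushFrameInfoFromYAML(*) already handles the example data files.

#include <cstdio>
#include <vector>
#include <yaml-cpp/yaml.h>

int main() {
  // Hypothetical file name; the example data files live under Data/ in this repo.
  const YAML::Node frame = YAML::LoadFile("frame_000000.yaml");

  // Initial pose guess: a 3x4 [R | p] matrix given as three rows of four numbers.
  const YAML::Node pose = frame["features"]["initial_guess"]["current_camera_pose_guess"];
  for (const auto& row_node : pose) {
    const std::vector<double> row = row_node.as<std::vector<double>>();
    std::printf("pose row: %.2f %.2f %.2f %.2f\n", row[0], row[1], row[2], row[3]);
  }

  // Measured feature matches: (kf_idx, ftr_idx) addresses the matched source feature.
  for (const auto& m : frame["features"]["measured_features"]) {
    const std::vector<double> pt = m["pt"].as<std::vector<double>>();
    std::printf("match -> kf %d, ftr %d, pt (%.2f, %.2f), inverse depth %.2f\n",
                m["kf_idx"].as<int>(), m["ftr_idx"].as<int>(), pt[0], pt[1], pt[2]);
  }
  return 0;
}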