SuperPose (SuperTrack)

A robust 6D tracker.

The idea of this project is to combine the purely RGB-based pose estimator OnePose with the purely geometry-based method ICG, together with a robust 2D tracker.

The system is divided into two modules:

  1. Scanning module: the input is a set of RGB-D sequences or an RGB CAD model, and the output is a set of sparse features bundled with a sparse-view model. We also provide a visualizer for this phase.

  2. Tracking module: uses the model produced in the previous step. RTS provides a preliminary mask, OnePose generates an initial pose, and ICG updates the pose and carries out the tracking (a sketch of this pipeline follows below).
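
The sketch below shows, under stated assumptions, how the two modules could fit together in the Python demo. `SparseViewModel` and the `segment`, `estimate_pose`, and `refine_pose` callables are placeholders standing in for the RTS, OnePose, and ICG wrappers, not existing APIs.

```python
# Sketch of the two-phase pipeline (hypothetical interfaces, not the real
# OnePose / RTS / ICG APIs).
from dataclasses import dataclass
from typing import Callable, Iterable, List
import numpy as np


@dataclass
class SparseViewModel:
    """Output of the scanning phase: per-view poses and keypoint features."""
    view_poses: List[np.ndarray]      # 4x4 camera-to-object poses, one per view
    view_features: List[np.ndarray]   # keypoint descriptors per view
    mesh_path: str                    # aligned CAD / reconstructed mesh


def track_sequence(
    frames: Iterable[np.ndarray],
    model: SparseViewModel,
    segment: Callable[[np.ndarray], np.ndarray],                         # RTS: frame -> mask
    estimate_pose: Callable[[np.ndarray, SparseViewModel], np.ndarray],  # OnePose: frame -> pose
    refine_pose: Callable[[np.ndarray, np.ndarray, np.ndarray], np.ndarray],  # ICG refinement
) -> List[np.ndarray]:
    """Run mask -> initial pose -> ICG refinement over a frame sequence."""
    poses = []
    pose = None
    for frame in frames:
        mask = segment(frame)
        if pose is None:                       # (re-)initialise with OnePose
            pose = estimate_pose(frame, model)
        pose = refine_pose(frame, mask, pose)  # ICG update from the previous pose
        poses.append(pose)
    return poses
```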

The demo version will be connected in Python. If everything works well, we will consider connecting the components with C++ & CUDA for further acceleration.
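
Since the plan below includes switching SuperTrack to a ZMQ socket, here is a minimal pyzmq sketch of how the Python demo could pass poses between processes; the endpoint and JSON message layout are assumptions for illustration, not the project's actual protocol.

```python
# Minimal pyzmq REQ/REP sketch: send a frame id, receive a pose as JSON.
# The endpoint and message format are assumptions for illustration only.
import json
import numpy as np
import zmq


def serve_poses(endpoint: str = "tcp://*:5555") -> None:
    """Reply to each request with a (placeholder) 4x4 pose as JSON."""
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REP)
    sock.bind(endpoint)
    while True:
        request = json.loads(sock.recv())          # e.g. {"frame_id": 3}
        pose = np.eye(4)                           # a real server would run the tracker here
        sock.send_string(json.dumps({"frame_id": request["frame_id"],
                                     "pose": pose.tolist()}))


def query_pose(frame_id: int, endpoint: str = "tcp://localhost:5555") -> np.ndarray:
    """Ask the pose server for the pose of one frame."""
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REQ)
    sock.connect(endpoint)
    sock.send_string(json.dumps({"frame_id": frame_id}))
    reply = json.loads(sock.recv())
    return np.asarray(reply["pose"])
```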

Plan

Feb.1st to Feb.4th

  • Align the MeshModel with the SparseModel (see the alignment sketch after this list).
  • Check how OnePose works on KF frames. Is it still able to provide a proper estimate? (It is performing OK...)
  • Finish the scanning phase. Generate a model from a video sequence, an existing CAD model, or a NeRF model.
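
For the MeshModel/SparseModel alignment item above, one standard option is a similarity (Umeyama) alignment between corresponding 3D points from the mesh and the sparse reconstruction; the sketch below assumes such correspondences are already available and is not the project's actual alignment code.

```python
# Umeyama-style similarity alignment: find scale s, rotation R, translation t
# so that  s * R @ src + t  ~=  dst  for corresponding 3D points.
import numpy as np


def umeyama_alignment(src: np.ndarray, dst: np.ndarray):
    """src, dst: (N, 3) corresponding points. Returns (s, R, t)."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)                 # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    sign = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:     # fix a possible reflection
        sign[2, 2] = -1.0
    R = U @ sign @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ sign) / var_src        # optimal isotropic scale
    t = mu_dst - s * R @ mu_src
    return s, R, t
```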

Feb.5th to Feb.11th

  • Test the ICG algorithm.
  • Test ICG in the real world. Combine OnePose with ICG.
    • Add a OnePose-based detector and integrate it.
    • The open problem is how to combine them.
    • Create a video recorder.
    • Build a hybrid pipeline for video.
      • The current idea is to add a feature term on top of it.
      • A simple approach is to record some key information and read it back when running ICG (see the keyframe-cache sketch after this list).
  • Add tools to build the model from NeRF.
  • Add a network-based detector.
  • Download and prepare the benchmark datasets and the tools for them: YCB-Video, BOI, BOP challenge.
  • Run OnePose and ICG separately on the benchmarks.
  • The current idea is a feature-based, pure-CPU method (ICG plus).
  • Switch SuperTrack to a ZMQ socket.
  • Run BundleTrack with r2d2.
  • Replace the feature matching with the BundleTrack method.
    • Get the feature generation finished.
    • Create a feature viewer. [Wed]
    • Create a sparse feature view of the object. [Wed]
      • Understand what the model should generate.
      • Render different aspects of the CAD model. [Thurs]
      • Augment the normal shader with an RGB shader. [Thurs]
      • Extract keypoint features. [Thurs]
      • Save the keypoint features into the model.
      • Create a sparse feature object. (Each view should have its own features.)
      • Build the connection.
    • Build the feature matching. [Sun]
      • Build the feature matching pipeline and compare the results.
      • Merge the system with pfb.
        • Make the image sparse model very sparse. (It should just contain multiple images with poses.)
        • Try to improve the closest-view selection and compute the rotation angle (see the closest-view sketch after this list).
    • Combination
      • Directly do PnP.
        • Build the PnP problem and solve it (see the PnP sketch after this list).
        • Render the result in the feature viewer.
        • The optimization and PnP do not seem to work together?
        • Try putting PnP into the refiner step instead.
      • Integrate into the loss.
        • Compute the Jacobian (see the Jacobian sketch after this list).
        • Test that the system runs.
        • Further test the system.
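
For the idea of recording key information and reading it back when running ICG, here is a minimal sketch of caching per-keyframe poses as JSON; the file layout and field names are assumptions, not ICG's own format.

```python
# Hypothetical keyframe cache: save poses from the Python side, read them back
# before launching ICG. The JSON layout is an assumption for illustration.
import json
from typing import Dict
import numpy as np


def save_keyframes(path: str, poses: Dict[int, np.ndarray]) -> None:
    """poses: frame index -> 4x4 object pose."""
    with open(path, "w") as f:
        json.dump({str(k): p.tolist() for k, p in poses.items()}, f, indent=2)


def load_keyframes(path: str) -> Dict[int, np.ndarray]:
    with open(path) as f:
        raw = json.load(f)
    return {int(k): np.asarray(v) for k, v in raw.items()}
```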
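
For the closest-view / rotation-angle item, here is a sketch of selecting the stored view whose orientation is nearest to the current estimate via the geodesic angle between rotation matrices; the function names are illustrative.

```python
# Pick the sparse-model view closest in orientation to the current estimate.
# The geodesic angle between rotations R1, R2 is
#   theta = arccos((trace(R1^T R2) - 1) / 2).
import numpy as np


def rotation_angle(R1: np.ndarray, R2: np.ndarray) -> float:
    cos = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))


def closest_view(current_pose: np.ndarray, view_poses: list) -> int:
    """current_pose, view_poses[i]: 4x4 poses; returns the index of the nearest view."""
    angles = [rotation_angle(current_pose[:3, :3], T[:3, :3]) for T in view_poses]
    return int(np.argmin(angles))
```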
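
For "build the PnP problem and solve it", a common route in Python is OpenCV's `solvePnPRansac`; the sketch below assumes 2D-3D correspondences from the feature matching step are already available.

```python
# Solve for an initial pose from 2D-3D matches with RANSAC PnP (OpenCV).
import cv2
import numpy as np


def pose_from_matches(pts3d: np.ndarray, pts2d: np.ndarray, K: np.ndarray):
    """pts3d: (N,3) model points, pts2d: (N,2) image points, K: 3x3 intrinsics.
    Returns a 4x4 object-to-camera pose, or None if PnP fails."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, distCoeffs=None,
        flags=cv2.SOLVEPNP_EPNP, reprojectionError=3.0)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # axis-angle -> rotation matrix
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T
```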
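
For the "integrate into the loss / compute the Jacobian" items, a low-risk starting point is a finite-difference Jacobian of the reprojection residual with respect to an axis-angle-plus-translation pose vector; this is only a reference to check an analytic Jacobian against, and the parameterisation is an assumption, not ICG's internal one.

```python
# Finite-difference Jacobian of the reprojection residual w.r.t. a 6-vector
# pose = [axis-angle (3), translation (3)]. Useful as a reference to verify
# an analytic Jacobian before wiring the feature term into the ICG loss.
import cv2
import numpy as np


def reproj_residual(pose6: np.ndarray, pts3d: np.ndarray,
                    pts2d: np.ndarray, K: np.ndarray) -> np.ndarray:
    R, _ = cv2.Rodrigues(pose6[:3].reshape(3, 1))
    cam = pts3d @ R.T + pose6[3:]            # transform points into the camera frame
    proj = cam @ K.T
    proj = proj[:, :2] / proj[:, 2:3]        # perspective division
    return (proj - pts2d).ravel()            # residual vector, shape (2N,)


def numeric_jacobian(pose6, pts3d, pts2d, K, eps: float = 1e-6) -> np.ndarray:
    r0 = reproj_residual(pose6, pts3d, pts2d, K)
    J = np.zeros((r0.size, 6))
    for i in range(6):
        d = np.zeros(6)
        d[i] = eps
        J[:, i] = (reproj_residual(pose6 + d, pts3d, pts2d, K) - r0) / eps
    return J
```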

Feb.17th

  • Finish the matching process.
  • Debug ICG on the YCB-V dataset to avoid the .txt requirement.

Feb.16th-Feb.18th

  • Add the feature loss.

Feb.19th to Feb.25th

  • Produce results on the selected dataset.
  • Improve the method with an adaptive structure.
  • Write the paper.
  • Polish the method.

Feb.26th to March.1st

  • Write the paper.

Debug

Declaration

Our codebase is based on the implementations of OnePose, ICG, and PyTracking.