PoseEstimation_pipeline

An undergraduate thesis project.



Setup

Start by installing Mamba, Miniconda3, or Conda with Python 3.9 or above.

Either run <mamba/conda> env create -f environment.yml or install the dependencies manually:

Manual dependency installation

Install the following dependencies (via Conda/Mamba or pip):
  • PyTorch3D
  • numpy, opencv, trimesh, pyrender, scikit-image

The following are available via pip only at the time of writing:

  • pyrealsense2 (only needed if using a RealSense camera for RGBD)
    • pip install pyrealsense2==2.50.0.3812
  • open3d
    • pip install open3d

Download model weights for OVE6D

mkdir checkpoints; cd checkpoints

Pose estimation weights:

  • wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1aXkYOpvka5VAPYUYuHaCMp0nIvzxhW9X' -O OVE6D_pose_model.pth
    or
  • wget https://drive.proton.me/urls/2GQBGB2DH4#aLLLp43rOm8M -O OVE6D_pose_model.pth

Or download manually from the OVE6D project page: https://drive.google.com/drive/folders/16f2xOjQszVY4aC-oVboAD-Z40Aajoc1s?usp=sharing

To experiment with custom objects

  • Provide a 3D model of the query object in *.ply format in Dataspace/<dataset_name>/models_eval/

    • The dataset name is defined in configs/config.py as DATASET_NAME.
    • Provide the name and diameter of the 3D model in Dataspace/<dataset_name>/models_eval/models_info.json (see the sketch after this list).
    • Adjust MODEL_SCALING in config.py to whatever scale (meters/mm) you're using for the 3D models.
  • Attach a RealSense camera, or implement a camera module for your camera of choice following utils/cam_control.py (see the capture sketch after this list).

  • Set up a green screen and adjust the chroma-keying parameters in the pop-up window at runtime.

    • Alternatively, use a segmenter of your choice that provides a binary mask, as in utility/load_segmentation_model_chroma.py (see the chroma-key sketch after this list).
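
A minimal sketch of writing a models_info.json entry for a custom object, in Python. The field names ("name", "diameter"), the object-ID key, and the "my_dataset" folder (which stands in for <dataset_name>) are illustrative assumptions; check the pipeline's dataset loading code for the exact keys it expects, and keep the diameter in the units implied by MODEL_SCALING.

  # Hypothetical sketch: write a minimal models_info.json for one custom object.
  # Key names ("name", "diameter") and the object-ID key "1" are assumptions; verify
  # them against the dataset loader. Diameter units must match MODEL_SCALING.
  import json
  from pathlib import Path

  models_info = {
      "1": {
          "name": "my_object",   # .ply file name without extension (assumption)
          "diameter": 0.21,      # object diameter, here in meters (assumption)
      }
  }

  # "my_dataset" stands in for <dataset_name>; the folder must already exist.
  out_path = Path("Dataspace") / "my_dataset" / "models_eval" / "models_info.json"
  with out_path.open("w") as f:
      json.dump(models_info, f, indent=2)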
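
If a RealSense camera is attached, the underlying capture boils down to a pyrealsense2 loop like the sketch below. This only illustrates the library calls; it is not the interface of utils/cam_control.py, so adapt it to whatever that module expects when implementing a module for another camera.

  # Minimal RealSense RGB-D capture sketch (assumes a connected RealSense camera).
  # Illustrates pyrealsense2 usage only; match the interface of utils/cam_control.py
  # when integrating with the pipeline.
  import numpy as np
  import pyrealsense2 as rs

  pipeline = rs.pipeline()
  config = rs.config()
  config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
  config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
  pipeline.start(config)

  try:
      frames = pipeline.wait_for_frames()
      depth = np.asanyarray(frames.get_depth_frame().get_data())  # uint16 depth image
      color = np.asanyarray(frames.get_color_frame().get_data())  # HxWx3 BGR image
      print("depth:", depth.shape, "color:", color.shape)
  finally:
      pipeline.stop()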
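
For the alternative segmentation route, any function that returns a binary mask will do. The OpenCV sketch below shows a basic green-screen chroma key; the HSV thresholds are illustrative placeholders (the runtime pop-up window is where the real parameters get tuned), and it is not the implementation in utility/load_segmentation_model_chroma.py.

  # Sketch of a simple green-screen chroma key that returns a binary object mask.
  # HSV bounds are illustrative; tune them to your lighting and screen colour.
  import cv2
  import numpy as np

  def chroma_key_mask(bgr_image: np.ndarray) -> np.ndarray:
      hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
      lower_green = np.array([40, 60, 60], dtype=np.uint8)
      upper_green = np.array([85, 255, 255], dtype=np.uint8)
      background = cv2.inRange(hsv, lower_green, upper_green)  # 255 on the green screen
      return cv2.bitwise_not(background)                       # 255 on the object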

Some qualitative results

drone.mp4
pot.mp4

Acknowledgements