/gen6d-steps

Reproducing Gen6D custom object results

Gen6D Setup & Run

Goals

  • Set up the environment for COLMAP
  • Set up the environment for Gen6D
  • Download all required data
  • Load the custom object into the model
  • Run the prediction commands

Installation

  1. Clone the Gen6D repository
git clone https://github.com/liuyuan-pal/Gen6D.git
  2. Create a virtual environment with Python 3.6 using pip or conda, then install all of the packages listed in Gen6D/requirements.txt (example commands below).
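One possible way to do this with conda (the environment name gen6d is just an example; any Python 3.6 environment works):
conda create -n gen6d python=3.6
conda activate gen6d
cd Gen6D
pip install -r requirements.txt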
  3. Install COLMAP for Windows. Note: make sure to install the CUDA-enabled build. A quick check is shown below.
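Assuming the COLMAP folder has been added to PATH (on Windows the launcher is COLMAP.bat), the install can be verified with:
colmap help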
  4. Download the pretrained models. The file structure should look like this:
Gen6D
|-- data
    |-- model
        |-- detector_pretrain
            |-- model_best.pth
        |-- selector_pretrain
            |-- model_best.pth
        |-- refiner_pretrain
            |-- model_best.pth
  5. Create a sub-folder under data called custom, then create another folder inside custom called flourbag (see the command below).
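For example, from the Gen6D root in a Windows command prompt (mkdir also creates the intermediate custom folder):
mkdir data\custom\flourbag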
  6. Download the point cloud file object_point_cloud.ply and the meta info file meta_info.txt. Place them in the Gen6D folder like this:
Gen6D
|-- data
    |-- custom
       |-- flourbag
           |-- object_point_cloud.ply  # object point cloud
           |-- meta_info.txt           # meta information about z+/x+ directions
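Optionally, the point cloud download can be sanity-checked by printing the PLY header, which is plain text even for binary PLY files. The following is a minimal stdlib-only Python sketch (not part of Gen6D); the path assumes the layout above and the script is run from the Gen6D root.
# check_ply.py - print the header of the object point cloud (sanity check only)
ply_path = "data/custom/flourbag/object_point_cloud.ply"

with open(ply_path, "rb") as f:
    for raw in f:
        line = raw.decode("ascii", errors="replace").strip()
        print(line)                  # e.g. "element vertex 12345" gives the point count
        if line == "end_header":     # stop before the (possibly binary) vertex data
            break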
  7. Download the images and colmap folders (the colmap folder from here). Place them at the same level as the point cloud and meta info files:
Gen6D
|-- data
    |-- custom
       |-- flourbag
           |-- object_point_cloud.ply  # object point cloud
           |-- meta_info.txt           # meta information about z+/x+ directions
           |-- images                  # images
           |-- colmap                  # colmap project
  8. Install ffmpeg from here. A quick way to verify the install is shown below.
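Either add ffmpeg to PATH or note the full path to ffmpeg.exe, since predict.py takes it via the --ffmpeg flag. To confirm it runs:
ffmpeg -version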

Prediction

  1. Make sure the folder structure looks like this:
Gen6D
|-- data
    |-- custom
       |-- flourbag
           |-- object_point_cloud.ply  # object point cloud
           |-- meta_info.txt           # meta information about z+/x+ directions
           |-- images                  # images
           |-- colmap                  # colmap project
       |-- video                       # create this new folder
           |-- <test video>.mp4        # add your test videos in this folder
    |-- model
       |-- detector_pretrain
           |-- model_best.pth
       |-- selector_pretrain
           |-- model_best.pth
       |-- refiner_pretrain
           |-- model_best.pth
|-- configs
|-- train
... etc

Here are some test videos: one, two.
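Before running the prediction, a short Python sketch like the one below can confirm that everything is in place. It is only a convenience check (not part of Gen6D), it is run from the Gen6D root, and the paths assume the flourbag layout above.
# check_layout.py - verify the expected Gen6D custom-object layout (run from the Gen6D root)
import os

REQUIRED = [
    "data/custom/flourbag/object_point_cloud.ply",
    "data/custom/flourbag/meta_info.txt",
    "data/custom/flourbag/images",
    "data/custom/flourbag/colmap",
    "data/custom/video",
    "data/model/detector_pretrain/model_best.pth",
    "data/model/selector_pretrain/model_best.pth",
    "data/model/refiner_pretrain/model_best.pth",
]

missing = [p for p in REQUIRED if not os.path.exists(p)]
if missing:
    print("Missing paths:")
    for p in missing:
        print("  " + p)
else:
    print("All required files and folders are present.")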

  2. Run the following command with the appropriate parameters:
python3 predict.py --cfg configs/gen6d_pretrain.yaml --database custom/flourbag --video <path-to-video-mp4> --resolution 960 --transpose --output data/custom/flourbag/test --ffmpeg <path-to-ffmpeg-exe>
  3. Find your output in data/custom/flourbag/test/!
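If several test videos are placed in data/custom/video, a short Python sketch can invoke predict.py once per video. This is only a convenience wrapper: the flags mirror the command above, the ffmpeg path is a placeholder to adjust, and writing each video's results to its own folder via --output is an assumption, not documented Gen6D behavior.
# run_all_videos.py - call predict.py for every .mp4 in data/custom/video (run from the Gen6D root)
import glob
import os
import subprocess
import sys

FFMPEG = r"C:\ffmpeg\bin\ffmpeg.exe"   # assumption: adjust to your ffmpeg location

for video in glob.glob("data/custom/video/*.mp4"):
    name = os.path.splitext(os.path.basename(video))[0]
    subprocess.run([
        sys.executable, "predict.py",
        "--cfg", "configs/gen6d_pretrain.yaml",
        "--database", "custom/flourbag",
        "--video", video,
        "--resolution", "960",
        "--transpose",
        "--output", f"data/custom/flourbag/test-{name}",  # assumed: one output folder per video
        "--ffmpeg", FFMPEG,
    ], check=True)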