
DronePose: Photorealistic UAV-Assistant Dataset Synthesis for 3D Pose Estimation via a Smooth Silhouette Loss

Paper | Conference Workshop | Project Page

TODO:

  • Train scripts
  • Evaluation scripts
  • Pre-trained model
  • Smooth silhouette loss code
  • Inference code

Data

The exocentric data used to train our single-shot pose estimation model are available here and are part of a larger dataset that contains rendered color images, silhouette masks, depth maps, normal maps, and optical flow for each viewpoint (i.e. user and UAV). NOTE: The data should follow the same organisation structure (a rough, illustrative sketch follows).
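As a purely illustrative sketch (folder names here are hypothetical placeholders inferred from the train script's arguments, not the dataset's actual naming), the per-viewpoint modalities could be organised along these lines:

    <root_path>/
        <trajectory>/
            <drone_model>/
                <view>/            # e.g. UAV or user
                    colour/        # rendered color images
                    silhouette/    # silhouette masks
                    depth/         # depth maps
                    normal/        # normal maps
                    flow/          # optical flow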

Requirements

The code is based on PyTorch and has been tested with Python 3.6 and CUDA 10.1. We recommend setting up a virtual environment (follow the virtualenv documentation) for installing PyTorch and the other necessary Python packages.
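A minimal environment setup might look like the following (the requirements.txt file name is an assumption; install whichever packages the code actually imports, and pick the PyTorch build that matches CUDA 10.1 from pytorch.org):

    # create and activate an isolated environment (Python 3.6)
    python3 -m venv dronepose-env
    source dronepose-env/bin/activate

    # install PyTorch and torchvision (choose the CUDA 10.1 build)
    pip install torch torchvision

    # install the remaining dependencies (hypothetical requirements file)
    pip install -r requirements.txt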

Note

Running the inference code additionally requires Kaolin (v0.1.0).
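One way to install that specific version is from source; this sketch assumes the NVIDIAGameWorks/kaolin repository and its v0.1 tag, so check Kaolin's own installation guide for the authoritative steps:

    git clone https://github.com/NVIDIAGameWorks/kaolin.git
    cd kaolin
    git checkout v0.1
    python setup.py install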

Train scripts

You can train your models by running python train.py with the following arguments (an example invocation is sketched after this list):

  • --root_path: Specifies the root path of the data.
  • --trajectory_path: Specifies the trajectory path.
  • --drone_list: The drone model(s) whose data will be used.
  • --view_list: The camera view (i.e. UAV or user) from which data will be loaded.
  • --frame_list: The frames (i.e. 0 or 1) that will be loaded.
  • --types_list: The different modalities (e.g. colour, depth, silhouette) that will be loaded from the dataset.
  • --saved_models_path: Path where trained models are saved.
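For instance (all argument values below are hypothetical placeholders, and the exact list syntax depends on the script's argument parser):

    python train.py \
        --root_path /data/UAVA \
        --trajectory_path /data/UAVA/trajectories \
        --drone_list drone1 \
        --view_list user \
        --frame_list 0 1 \
        --types_list colour silhouette \
        --saved_models_path ./checkpoints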

Pre-trained Models

Our PyTorch pre-trained models (corresponding to those reported in the paper) are available at our releases page, which lists the available model variants.

Inference

You can try any of the above models with our infer.py script by setting the following arguments (an example invocation is sketched after this list):

  • --input_path: Path to the root folder containing the images.
  • --output_path: Path for saving the final result.
  • --weights: Path to the pre-trained model weights.
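For example (all paths below are hypothetical placeholders):

    python infer.py \
        --input_path ./samples \
        --output_path ./results \
        --weights ./checkpoints/dronepose.pth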

In-the-wild (YouTube videos) Results