Generation3D

3D Shape Generation Baselines in PyTorch.

Features

  • A hack of DataParallel for balanced GPU memory usage (see the sketch after this list)
  • More models (work in progress)
  • Configurable model parameters
  • Customizable models and datasets
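
The snippet below is a minimal sketch of one common way to balance DataParallel memory: compute the loss inside the replicated module so that only small per-replica loss tensors are gathered on the primary GPU. The wrapper name and usage are illustrative; the repository's actual hack may differ.

import torch.nn as nn


class ModelWithLoss(nn.Module):
    """Wrap a model and its loss so DataParallel gathers only per-replica losses."""

    def __init__(self, model, criterion):
        super().__init__()
        self.model = model
        self.criterion = criterion

    def forward(self, inputs, targets):
        outputs = self.model(inputs)
        # Returning the loss instead of the full outputs keeps each replica's
        # large activations on its own GPU; only the loss values are gathered.
        return self.criterion(outputs, targets)


# Illustrative usage: wrap once, then average the gathered per-replica losses.
# net = nn.DataParallel(ModelWithLoss(backbone, nn.MSELoss())).cuda()
# loss = net(inputs, targets).mean()
# loss.backward()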

Representation

  • 💎 Polygonal Mesh
  • 👾 Volumetric
  • 🎲 Point Cloud
  • 🎯 Implicit Function
  • 💊 Primitive

Input Observation

  • 🏞 RGB Image
  • 📡 Depth Image
  • 👾 Voxel
  • 🎲 Point Cloud
  • 🎰 Unconditional Random

Evaluation Metrics

  • Chamfer Distance (see the sketch after this list)
  • F-score
  • IoU
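
As a reference for what these metrics compute, here is a minimal brute-force sketch of the Chamfer Distance and F-score between two point clouds. It materializes the full N×M distance matrix, so for real evaluation the CUDA kernel from ChamferDistancePytorch (see External below) is preferable.

import torch


def chamfer_distance(p1, p2):
    """Symmetric Chamfer Distance between point clouds of shape (B, N, 3) and (B, M, 3)."""
    diff = p1.unsqueeze(2) - p2.unsqueeze(1)   # (B, N, M, 3)
    dist = (diff ** 2).sum(-1)                 # pairwise squared distances (B, N, M)
    d1 = dist.min(dim=2).values                # nearest neighbour in p2 for each point of p1
    d2 = dist.min(dim=1).values                # nearest neighbour in p1 for each point of p2
    return d1.mean(dim=1) + d2.mean(dim=1)     # per-sample Chamfer Distance (B,)


def f_score(pred, gt, threshold=0.01):
    """F-score: harmonic mean of precision and recall at a distance threshold."""
    diff = pred.unsqueeze(2) - gt.unsqueeze(1)
    dist = (diff ** 2).sum(-1).sqrt()
    precision = (dist.min(dim=2).values < threshold).float().mean(dim=1)
    recall = (dist.min(dim=1).values < threshold).float().mean(dim=1)
    return 2 * precision * recall / (precision + recall + 1e-8)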

Model Zoo

  • 💎 Pixel2Mesh
  • 🎯 DISN
  • 👾 3DGAN
  • 👾 Voxel-based methods
  • 🎲 Point-cloud-based methods

Get Started

Environment

  • Ubuntu 16.04 / 18.04
  • PyTorch 1.3.1
  • CUDA 10
  • conda > 4.6.2

Use Anaconda to install all dependencies:

conda env create -f environment.yml

Train

CUDA_VISIBLE_DEVICES=<gpus> python train.py --options <config>

Predict

CUDA_VISIBLE_DEVICES=<gpus> python predictor.py --options <config>

Evaluation [WIP]

Customization Guide

  • Custom scheduler for the training/inference loop: add code under scheduler and inherit from the base class (see the sketch after this list).
  • Custom model: add it under models/zoo.
  • Custom config options: add them in utils/config.
  • Custom dataset: add it under datasets/data.
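
As an illustration only, the snippet below sketches what a custom scheduler might look like. The module path scheduler.base, the class name BaseScheduler, the attributes, and the train_step hook are assumptions made for this example; mirror the actual base class in the scheduler package.

# Hypothetical file: scheduler/my_scheduler.py
# BaseScheduler, self.model / self.criterion / self.optimizer and the
# train_step hook are assumptions for this sketch, not the real API.
from scheduler.base import BaseScheduler


class MyScheduler(BaseScheduler):
    """A custom training loop: one optimization step per batch."""

    def train_step(self, batch):
        images, targets = batch
        outputs = self.model(images)
        loss = self.criterion(outputs, targets)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return {"loss": loss.item()}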

External

  • Chamfer Distance (ChamferDistancePytorch)

Baselines

Pixel2Mesh 🏞 💎

  • Input: RGB Image
  • Representation: Mesh
  • Output: Mesh in camera view

DISN 🏞 🎯

  • Input: RGB Image
  • Representation: SDF
  • Post-processing: Marching Cubes (see the sketch after this list)
  • Output: Mesh in camera view
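
Since the network predicts a signed distance field rather than a mesh, a surface has to be extracted afterwards. The sketch below illustrates that post-processing step with scikit-image's marching cubes; the predict_sdf function and the 64³ resolution are placeholders, not the repository's API.

import numpy as np
import torch
from skimage.measure import marching_cubes  # scikit-image >= 0.17


def sdf_to_mesh(predict_sdf, resolution=64):
    """Sample a predicted SDF on a regular grid and extract its zero level set."""
    # Regular grid over [-1, 1]^3; predict_sdf is a placeholder for the trained network.
    coords = np.linspace(-1.0, 1.0, resolution, dtype=np.float32)
    grid = np.stack(np.meshgrid(coords, coords, coords, indexing="ij"), axis=-1)
    points = torch.from_numpy(grid.reshape(-1, 3))
    with torch.no_grad():
        sdf = predict_sdf(points).reshape(resolution, resolution, resolution)
    # The zero iso-surface of the SDF is the object surface.
    verts, faces, normals, _ = marching_cubes(sdf.cpu().numpy(), level=0.0)
    return verts, faces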

3DGAN 🎰 👾

  • Input: Random Noise
  • Representation: Volumetric
  • Output: Voxel grid (see the sketch after this list)
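
For intuition, here is a minimal sketch of a 3DGAN-style generator: a latent vector upsampled by transposed 3D convolutions into a 64³ occupancy grid. The layer widths and latent size are illustrative, not necessarily the configuration used in this repository.

import torch
import torch.nn as nn


class VoxelGenerator(nn.Module):
    """Map a latent vector z to a 64^3 occupancy probability grid."""

    def __init__(self, z_dim=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 256, kernel_size=4, stride=1, padding=0),  # 4^3
            nn.BatchNorm3d(256), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(256, 128, kernel_size=4, stride=2, padding=1),    # 8^3
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1),     # 16^3
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),      # 32^3
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, 1, kernel_size=4, stride=2, padding=1),       # 64^3
            nn.Sigmoid(),
        )

    def forward(self, z):
        # Reshape (B, z_dim) noise into a (B, z_dim, 1, 1, 1) volume before upsampling.
        return self.net(z.view(z.size(0), -1, 1, 1, 1))


# voxels = VoxelGenerator()(torch.randn(8, 200))  # (8, 1, 64, 64, 64)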

Acknowledgements

Our work is based on the codebase of an unofficial pixel2mesh framework. The Chamfer loss code is based on ChamferDistancePytorch.

See also the official baseline code for each model.

License

Please follow the license of the official implementation for each model.