Point-Transformer-Diffusion

Point Transformer Diffusion (PTD) integrates the standard DDPM, adapted for 3D data, with Point Transformer, a local self-attention network specifically designed for 3D point clouds.
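
As a rough illustration of the diffusion side (a generic DDPM training-step sketch, not the code in this repository; the model signature and noise schedule below are assumptions), the network is trained to predict the noise added to a clean point cloud at a random timestep:

import torch
# Minimal DDPM training-step sketch for point clouds (illustrative only;
# the actual schedule, network, and loss live in train_generation.py).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of (1 - beta_t)
def ddpm_loss(model, x0):
    # x0: (B, N, 3) clean point clouds; model predicts the added noise.
    B = x0.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod.to(x0.device)[t].view(B, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise  # forward diffusion q(x_t | x_0)
    pred = model(x_t, t)  # e.g., a Point Transformer backbone conditioned on the timestep
    return torch.nn.functional.mse_loss(pred, noise)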

Requirements

Make sure the following packages are installed (a pip-based alternative is sketched after the list):

python
torch
tqdm
matplotlib
trimesh
scipy
open3d
einops
pytorch_warmup
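
If you prefer pip over the conda environment described below, a one-liner along these lines should cover the list above; note that PyPI names can differ from import names (for instance, pytorch_warmup is published as pytorch-warmup), so adjust as needed:

pip install torch tqdm matplotlib trimesh scipy open3d einops pytorch-warmup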

You can also create the environment from the YAML file in the requirements folder:

conda env create -f ptd_env.yml 
conda activate ptd_env

Creating the environment from the YAML file may take a significant amount of time. Then, install pointops:

cd lib/pointops/
python setup.py install
cp build/lib.linux-x86_64-cpython-39/pointops_cuda.cpython-39-x86_64-linux-gnu.so .
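
You can quickly check that the extension built and is importable (the module name matches the .so file copied above):

python -c "import pointops_cuda"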

Also, install EMD (Earth Mover's Distance), which is used for evaluation:

cd metrics/PyTorchEMD/
python setup.py install
cp build/lib.linux-x86_64-cpython-39/emd_cuda.cpython-39-x86_64-linux-gnu.so .
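
As with pointops, a quick import check confirms the build:

python -c "import emd_cuda"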

The code was tested on Ubuntu 22.04 LTS with a GeForce RTX 3090 GPU.

Data

For generation, we use ShapeNet point clouds, which can be downloaded here.
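
The exact file layout depends on the download; as an illustration only (the path and array shape below are assumptions), a single point-cloud sample stored as a NumPy array can be inspected with open3d:

import numpy as np
import open3d as o3d
# Hypothetical path and shape; adjust to the actual ShapeNet download layout.
points = np.load("data/shapenet/airplane/sample_0000.npy")  # assumed (N, 3) array
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points.astype(np.float64))
o3d.visualization.draw_geometries([pcd])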

Pretrained models

Pretrained models can be downloaded here.

Training

$ python train_generation.py --category car|chair|airplane

Please refer to train_generation.py for the recommended training parameters.

Testing

$ python train_generation.py --category car|chair|airplane --model MODEL_PATH
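
For example, to evaluate the pretrained chair checkpoint from the table below (the ./pretrained/ directory is only an assumed location for the downloaded file):

$ python train_generation.py --category chair --model ./pretrained/chair_1499.pth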

Results

Evaluation results of the trained PTD models:

Category   Model               CD      EMD
airplane   airplane_2799.pth   74.19   61.48
chair      chair_1499.pth      56.11   53.39
car        car_2799.pth        56.39   52.69
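
CD (Chamfer Distance) and EMD (Earth Mover's Distance) are standard point-cloud similarity metrics. The repository relies on CUDA implementations for evaluation (e.g., PyTorchEMD above); the brute-force NumPy sketch below only illustrates what Chamfer distance measures:

import numpy as np
def chamfer_distance(a, b):
    # Symmetric (squared) Chamfer distance between point sets a: (N, 3) and b: (M, 3).
    # Brute-force O(N*M) illustration; the evaluation code uses CUDA kernels instead.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()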

Visualization of several generated results:

Acknowledgements

DDPM for 3D point clouds: Point-Voxel Diffusion.
Local self-attention for point clouds: Point Transformer.