[CVPR 2024] Official Implementation of the paper "CAGE: Controllable Articulation GEneration"

CAGE

CAGE: Controllable Articulation GEneration

Jiayi Liu, Hou In Ivan Tam, Ali Mahdavi-Amiri, Manolis Savva

CVPR 2024

Page | Paper | Data (alternative link for data: OneDrive)

Setup

We recommend using miniconda to manage system dependencies. The environment was tested on Ubuntu 20.04.4 LTS.

# Create a conda environment
conda create -n cage python=3.10
conda activate cage

# Install PyTorch
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

# Install PyGraphviz
conda install --channel conda-forge pygraphviz

# Install other packages
pip install -r requirements.txt

# Install PyTorch3D (not required for training):
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d -c pytorch3d
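
Optionally, you can run a quick sanity check (not part of the original setup steps, just a convenience) to confirm that PyTorch was installed with CUDA support:

# Optional: print the PyTorch version and whether CUDA is available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"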

Data

We share the training data (here, ~101MB) preprocessed from the PartNet-Mobility dataset. Once downloaded, extract the data and put it directly in the project folder. The data root can be configured with system.datamodule.root=<path/to/your/data/directory> in the configs/cage.yaml file. If downloading the data from our server is slow, please try this alternative link on OneDrive.
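
For reference, the dotted key above corresponds to a nested entry in the config; a minimal sketch of the relevant part of configs/cage.yaml might look like the following (only the system.datamodule.root key comes from the instructions above, the exact surrounding layout is an assumption):

system:
  datamodule:
    root: <path/to/your/data/directory>  # assumed layout; point this at your extracted data directory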

Quick Demo

We share the pretrained model (here, ~80MB) so you can quickly try our demo. Once downloaded, extract the zip file and put it under the <project folder>/exps folder. Since our part retrieval relies on the meshes in the dataset, the data should already be downloaded and placed under the project folder (the default location). Run python demo.py to start the demo (a single GPU is preferred), as in the example below. Please see demo.py for further instructions on the script arguments.
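
If your machine has multiple GPUs, one way to keep the demo on a single GPU is to pin the visible device with the standard CUDA_VISIBLE_DEVICES environment variable (GPU index 0 below is just an example):

# Restrict the demo to a single GPU (GPU 0 here)
CUDA_VISIBLE_DEVICES=0 python demo.py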

Training

Run python main.py --config configs/cage.yaml --log_dir <folder/for/logs> to train the model from scratch. The experiment files will be recorded at ./<log_dir>/cage/<version>. The original model was trained on two NVIDIA A40 GPUs.
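
As a concrete example (the logs directory name below is arbitrary):

# Train from scratch; experiment files will then appear under ./logs/cage/<version>
python main.py --config configs/cage.yaml --log_dir logs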

Citation

Please cite our work if you find it helpful:

@inproceedings{liu2024cage,
    title={CAGE: Controllable Articulation GEneration},
    author={Liu, Jiayi and Tam, Hou In Ivan and Mahdavi-Amiri, Ali and Savva, Manolis},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={17880--17889},
    year={2024}
}

Acknowledgements

This implementation is partially powered by 🤗Diffusers.