This repository contains the source code for the paper "Context-PIPs: Persistent Independent Particles Demands Spatial Context Features" (NeurIPS 2023).
Weikang Bian*, Zhaoyang Huang*, Xiaoyu Shi, Yitong Dong, Yijin Li, Hongsheng Li (* denotes equal contribution)
[Paper] [Project Page]
conda create --name context_pips python=3.10
conda activate context_pips
conda install pytorch=1.12.0 torchvision=0.13.0 cudatoolkit=11.3 -c pytorch
pip install -r requirements.txt
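Optionally, you can run a quick sanity check to confirm that the expected PyTorch and CUDA versions are active in the new environment:

```bash
# Optional check: should print 1.12.0, 11.3, and True on a CUDA-capable machine
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```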
To train and evaluate Context-PIPs, you need to download the following datasets:
- FlyingThings++
- CroHD
- TAP-Vid (optional)
You can create symbolic links in the `data` folder that point to wherever the datasets were downloaded, so that the layout matches the tree below (example commands follow the tree).
├── data
    ├── flyingthings
        ├── frames_cleanpass_webp
        ├── object_index
        ├── occluders_al
        ├── optical_flow
        ├── trajs_ad
    ├── HT21
        ├── test
        ├── train
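For example, assuming the datasets were downloaded to `/path/to/datasets` (a placeholder; substitute your own location), the links can be created as:

```bash
# Replace /path/to/datasets with the directory where you downloaded the datasets
mkdir -p data
ln -s /path/to/datasets/flyingthings data/flyingthings
ln -s /path/to/datasets/HT21 data/HT21
```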
We provide a trained model checkpoint for evaluation.
# Evaluate Context-PIPs on FlyingThings++
python test_on_flt.py --init_dir path_to_checkpoint_folder
# Evaluate Context-PIPs on CroHD
# Occluded
python test_on_crohd.py --init_dir path_to_checkpoint_folder
# Visible
python test_on_crohd.py --init_dir path_to_checkpoint_folder --req_occlusion False
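For example, if the checkpoint is unpacked into `./checkpoints/context_pips` (a placeholder path; use the folder you actually downloaded), the evaluations are run as:

```bash
# Placeholder checkpoint directory; point --init_dir at your downloaded folder
python test_on_flt.py --init_dir ./checkpoints/context_pips
python test_on_crohd.py --init_dir ./checkpoints/context_pips --req_occlusion False
```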
Similar to PIPs, we train our model on the FlyingThings++ dataset:
python train.py \
--horz_flip=True --vert_flip=True \
--device_ids=\[0,1,2,3,4,5,6,7\] \
--exp_name contextpips \
--B 4 --N 128 --I 6 --lr 3e-4
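The command above assumes 8 GPUs. As a rough sketch (not validated against the released training setup), you can shrink the device list and batch size for a single-GPU debug run; `contextpips_debug` is just a placeholder experiment name:

```bash
# Sketch only: single-GPU run with a reduced batch size; flags mirror the 8-GPU command above
python train.py \
    --horz_flip=True --vert_flip=True \
    --device_ids=\[0\] \
    --exp_name contextpips_debug \
    --B 1 --N 128 --I 6 --lr 3e-4
```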
In this project, we use parts of the code from:
Thanks to the authors for open-sourcing their code.
@inproceedings{bian2023contextpips,
  title={Context-{PIP}s: Persistent Independent Particles Demands Context Features},
  author={Weikang Bian and Zhaoyang Huang and Xiaoyu Shi and Yitong Dong and Yijin Li and Hongsheng Li},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
  url={https://openreview.net/forum?id=cnpkzQZaLU}
}