Official repository for ZeroFlow: Scalable Scene Flow via Distillation

ZeroFlow: Scalable Scene Flow via Distillation

Kyle Vedder, Neehar Peri, Nathaniel Chodosh, Ishan Khatri, Eric Eaton, Dinesh Jayaraman, Yang Liu, Deva Ramanan, and James Hays

Project webpage: vedder.io/zeroflow

arXiv link: arxiv.org/abs/2305.10424

News:

  • Feb 12th, 2024: This codebase has been deprecated. For new development, please use SceneFlowZoo, which is built on a cleaned-up version of this codebase.
  • Feb 12th, 2024: The Getting Started doc has been updated with a link to our NSFP pseudolabels.
  • Jan 16th, 2024: ZeroFlow has been accepted to ICLR 2024!
  • July 31st, 2023: The ZeroFlow XL student model is now state-of-the-art on the AV2 2023 Scene Flow Challenge! See the Getting Started document for details on setting up training on additional data.
  • June 18th, 2023: ZeroFlow was selected as a highlighted method in the CVPR 2023 Workshop on Autonomous Driving Scene Flow Challenge!

Citation:

@article{Vedder2024zeroflow,
    author    = {Kyle Vedder and Neehar Peri and Nathaniel Chodosh and Ishan Khatri and Eric Eaton and Dinesh Jayaraman and Yang Liu and Deva Ramanan and James Hays},
    title     = {{ZeroFlow: Scalable Scene Flow via Distillation}},
    journal   = {International Conference on Learning Representations (ICLR)},
    year      = {2024},
}

Pre-requisites / Getting Started

Read the Getting Started doc for detailed instructions on setting up the AV2 and Waymo Open datasets and using the prepared Docker environments.

Pretrained weights

All trained weights from the paper are available for download from this repo.
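
Once downloaded, a checkpoint can be sanity-checked before use. A minimal sketch, assuming the weights are standard PyTorch Lightning .ckpt files (the file path below is a hypothetical placeholder):

import torch

# Lightning .ckpt files are ordinary torch pickles; load on CPU for inspection.
ckpt = torch.load("checkpoints/zeroflow.ckpt", map_location="cpu")

# Top-level keys typically include 'state_dict', 'epoch', and optimizer state.
print(ckpt.keys())

# Print a few parameter names and shapes from the model weights.
for name, tensor in list(ckpt["state_dict"].items())[:5]:
    print(name, tuple(tensor.shape))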

Training a model

Inside the main container (./launch.sh), run train_pl.py with a path to a config (inside configs/), optionally specifying the number of GPUs (defaults to all GPUs on the system).

python train_pl.py <my config path> --gpus <num gpus>

The script will first verify that the val dataloader works, then launch the training job.
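
For example, a two-GPU run of the NSFP distillation config might look like the following (the exact config filename is an assumption; substitute a real one from configs/):

python train_pl.py configs/fastflow3d/argo/nsfp_distilatation.py --gpus 2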

Testing a model

Inside the main container (./launch.sh), run test_pl.py with a path to a config (inside configs/), a path to a checkpoint, and optionally the number of GPUs (defaults to a single GPU).

python test_pl.py <my config path> <my checkpoint path> --gpus <num gpus>
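
For example, evaluating a trained checkpoint on a single GPU (both paths are hypothetical placeholders):

python test_pl.py configs/fastflow3d/argo/nsfp_distilatation.py checkpoints/zeroflow.ckpt --gpus 1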

Generating paper plots

After all relevant checkpoints have been tested (producing result files under validation_results/configs/...), run plot_performance.py to generate the figures and tables used in the paper.

To submit results to the AV2 Scene Flow Challenge:

  1. Dump the outputs of the model
    • configs/fastflow3d/argo/nsfp_distilatation_dump_output.py to dump the val set results
    • configs/fastflow3d/argo/nsfp_distilatation_dump_output_test.py to dump the test set results
  2. Convert the dumped outputs to the competition submission format with av2_scene_flow_competition_submit.py, using the official evaluation point subset
  3. Build the submission zip with the official make_submission_archive.py script (python /av2-api/src/av2/evaluation/scene_flow/make_submission_archive.py <path to step 2 results> /efs/argoverse2/test_official_masks.zip)
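
Put together, a minimal sketch of the whole submission pipeline, assuming the dump configs are run through test_pl.py and with placeholder paths and arguments (check each script's --help for the exact interface):

# 1. Dump test set outputs using the dump config (assumes test_pl.py accepts dump configs)
python test_pl.py configs/fastflow3d/argo/nsfp_distilatation_dump_output_test.py <my checkpoint path> --gpus 1

# 2. Convert the dumped outputs to the competition submission format (arguments are assumptions)
python av2_scene_flow_competition_submit.py <path to dumped outputs> <path to step 2 results>

# 3. Zip the converted results with the official script
python /av2-api/src/av2/evaluation/scene_flow/make_submission_archive.py <path to step 2 results> /efs/argoverse2/test_official_masks.zip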