PaReprop

Fast Parallelized Reversible Backpropagation, or PaReprop for short. A fork of PySlowFast adapted to work with RevViT.


PySlowFast

Parallelized Reversible Vision Transformers


[Paper]

This is a fork of PySlowFast, the official codebase for the original Reversible Vision Transformer paper. In this repo, we parallelize the backward pass of the Reversible Vision Transformer (RevViT) using PyTorch CUDA streams, achieving a speedup over the base RevViT via the Two-Stream method outlined in Parallelized Reversible Vision Transformers.
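The Two-Stream speedup builds on the defining property of reversible blocks: each block's inputs can be reconstructed exactly from its outputs, so activations need not be stored during the forward pass. A minimal sketch of the reversible coupling (the names `f_block` and `g_block` are illustrative, not the repo's actual API):

```python
def rev_forward(f_block, g_block, x1, x2):
    """Reversible coupling: the outputs fully determine the inputs."""
    y1 = x1 + f_block(x2)
    y2 = x2 + g_block(y1)
    return y1, y2

def rev_inverse(f_block, g_block, y1, y2):
    """Recompute the inputs from the outputs, inverting rev_forward
    step by step in reverse order."""
    x2 = y2 - g_block(y1)
    x1 = y1 - f_block(x2)
    return x1, x2
```

During backpropagation, `rev_inverse` reconstructs activations on the fly; the Two-Stream method overlaps this recomputation with the gradient computation by placing them on separate CUDA streams.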

As mentioned in the paper, this method does not speed up the base ViT models (and in fact uses more memory due to two streams of computation), but does offer appreciable throughput gains on the Rev-MViT line of models. To use this setting, simply set MVIT.REV.USE_STREAM to True in the config file, as follows.

MVIT:
  REV:
    USE_STREAM: True

An example config is provided in configs/ImageNet/REV_VIT_S_STREAM.yaml. All changes are contained in the slowfast/models/reversible_mvit.py file, so start there if you are curious about the implementation of this method.
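The core idea in that file is to issue two independent pieces of the backward computation on separate CUDA streams so the GPU can overlap them. A minimal, hedged sketch of the pattern (this is an illustration of the CUDA-streams mechanism, not the repo's actual function signatures):

```python
import torch

def two_stream_step(f, g, x1, x2):
    """Run two independent computations on separate CUDA streams so the
    GPU can overlap them; falls back to plain sequential execution on CPU."""
    if torch.cuda.is_available():
        s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
        with torch.cuda.stream(s1):
            y1 = f(x1)
        with torch.cuda.stream(s2):
            y2 = g(x2)
        # Wait for both streams before any consumer touches y1 / y2.
        torch.cuda.synchronize()
    else:
        y1, y2 = f(x1), g(x2)
    return y1, y2
```

Note that stream parallelism only pays off when the two computations are each too small to saturate the GPU on their own, which is why (as noted above) the base ViT models see no speedup.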

Main Repo

PySlowFast is an open-source video understanding codebase from FAIR that provides state-of-the-art video classification models with efficient training. The implemented methods and backbones are listed in the Introduction below.

Introduction

The goal of PySlowFast is to provide a high-performance, lightweight PyTorch codebase that provides state-of-the-art video backbones for video understanding research on different tasks (classification, detection, etc.). It is designed to support rapid implementation and evaluation of novel video research ideas. PySlowFast includes implementations of the following backbone network architectures:

  • SlowFast
  • Slow
  • C2D
  • I3D
  • Non-local Network
  • X3D
  • MViTv1 and MViTv2
  • Rev-ViT and Rev-MViT

License

PySlowFast is released under the Apache 2.0 license.

Model Zoo and Baselines

We provide a large set of baseline results and trained models available for download in the PySlowFast Model Zoo.

Installation

Please find installation instructions for PyTorch and PySlowFast in INSTALL.md. You may follow the instructions in DATASET.md to prepare the datasets.

Quick Start

Follow the example in GETTING_STARTED.md to start playing with video models using PySlowFast.

Visualization Tools

We offer a range of visualization tools for the train/eval/test processes, model analysis, and running inference with a trained model. More information is available in Visualization Tools.

Contributors

PySlowFast is written and maintained by Haoqi Fan, Yanghao Li, Bo Xiong, Wan-Yen Lo, Christoph Feichtenhofer.

Citing PySlowFast

If you find PySlowFast useful in your research, please use the following BibTeX entry for citation.

@misc{fan2020pyslowfast,
  author =       {Haoqi Fan and Yanghao Li and Bo Xiong and Wan-Yen Lo and
                  Christoph Feichtenhofer},
  title =        {PySlowFast},
  howpublished = {\url{https://github.com/facebookresearch/slowfast}},
  year =         {2020}
}