This is a PyTorch implementation of the MeMViT paper (CVPR 2022 oral):
```bibtex
@inproceedings{memvit2022,
  title={{MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition}},
  author={Wu, Chao-Yuan and Li, Yanghao and Mangalam, Karttikeya and Fan, Haoqi and Xiong, Bo and Malik, Jitendra and Feichtenhofer, Christoph},
  booktitle={CVPR},
  year={2022}
}
```
MeMViT builds on the MViT models:
```bibtex
@inproceedings{li2021improved,
  title={{MViTv2}: Improved multiscale vision transformers for classification and detection},
  author={Li, Yanghao and Wu, Chao-Yuan and Fan, Haoqi and Mangalam, Karttikeya and Xiong, Bo and Malik, Jitendra and Feichtenhofer, Christoph},
  booktitle={CVPR},
  year={2022}
}

@inproceedings{fan2021multiscale,
  title={Multiscale vision transformers},
  author={Fan, Haoqi and Xiong, Bo and Mangalam, Karttikeya and Li, Yanghao and Yan, Zhicheng and Malik, Jitendra and Feichtenhofer, Christoph},
  booktitle={ICCV},
  year={2021}
}
```
Results on the AVA dataset:

| name | mAP | #params (M) | GFLOPs | pre-train model | model |
| --- | --- | --- | --- | --- | --- |
| MeMViT-16, 16x4 | 29.3 | 35.4 | 58.7 | K400-pretrained model | model |
| MeMViT-24, 32x3 | 32.3 | 52.6 | 211.7 | K600-pretrained model | model |
| MeMViT-24, 32x3 | 34.4 | 52.6 | 211.7 | K700-pretrained model | model |
This repo is a modification of the PySlowFast repo; installation and data preparation follow that repo. Please modify the data paths and the pre-training checkpoint path in the config file accordingly, then run, e.g.,

```shell
python tools/run_net.py \
  --cfg configs/AVA/MeMViT_16_K400.yaml
```
To evaluate a pretrained MeMViT model:

```shell
python tools/run_net.py \
  --cfg configs/AVA/MeMViT_16_K400.yaml \
  TRAIN.ENABLE False \
  TEST.CHECKPOINT_FILE_PATH path_to_your_checkpoint
```
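The trailing `KEY VALUE` pairs above override entries in the YAML config at launch time. As a rough illustration of that mechanism (a minimal sketch using a plain nested dict and a hypothetical `apply_overrides` helper, not the actual PySlowFast config API):

```python
import ast

def apply_overrides(cfg, opts):
    """Apply a flat [KEY1, VAL1, KEY2, VAL2, ...] list of dotted-key
    overrides to a nested config dict, mimicking 'TRAIN.ENABLE False'."""
    assert len(opts) % 2 == 0, "overrides must come in KEY VALUE pairs"
    for key, raw in zip(opts[0::2], opts[1::2]):
        node = cfg
        *parents, leaf = key.split(".")
        for p in parents:
            node = node.setdefault(p, {})
        try:
            # Interpret the string as a Python literal (False, 0.5, ...).
            node[leaf] = ast.literal_eval(raw)
        except (ValueError, SyntaxError):
            node[leaf] = raw  # keep as a plain string (e.g. a file path)
    return cfg

cfg = {"TRAIN": {"ENABLE": True}, "TEST": {}}
apply_overrides(cfg, [
    "TRAIN.ENABLE", "False",
    "TEST.CHECKPOINT_FILE_PATH", "path_to_your_checkpoint",
])
# cfg["TRAIN"]["ENABLE"] is now False, and the checkpoint path is set.
```

In the real repo the overrides are merged into the typed config object after the `--cfg` YAML file is loaded, so command-line values always win.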
This repository is built on top of PySlowFast. MeMViT is released under the CC-BY-NC 4.0 license.