
[CVPR2023] All in One: Exploring Unified Video-Language Pre-training



All-in-one

Code for the paper: All in One: Exploring Unified Video-Language Pre-training Arxiv


News

  • 2022.06.07 Release the model AllInOne+ pre-trained on eight datasets (YTT+WebVid+HowTo+CC3+CC12+CoCo+VG+SBU).
  • 2022.05.07 AllInOne+ is released. The main difference from AllInOne is the image and video co-training.
  • 2022.03.25 Update the Readme.
  • 2022.03.14 The first version of AllInOne is released.

Install

1. PyTorch Lightning

In this work, we use PyTorch Lightning for distributed training with mixed precision. Install PyTorch and PyTorch Lightning first.

conda create -n allinone python=3.7
source activate allinone
cd [Path_To_This_Code]
pip install -r requirements.txt

If all required packages, including ffmpeg, are already installed, skip step 2.

2. On-the-fly decode (may skip)

To speed up the pre-training, we adopt on-the-fly decode for fast IO. Install ffmpeg as below.

1. ffmpeg

conda install -y ffmpeg

Please install any required packages not included in requirements.txt.

If your server cannot connect to the network or installs ffmpeg slowly, download a static binary from FFmpeg Static Builds and add it to your PATH variable, as follows:

export PATH=[PATH_TO_Dir/]ffmpeg-git-20220108-amd64-static:$PATH
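To illustrate the on-the-fly decoding idea, here is a minimal sketch that pipes raw RGB frames straight out of ffmpeg instead of writing them to disk. The command layout and function names below are illustrative assumptions, not the repo's actual data loader:

```python
# Hedged sketch: on-the-fly decoding by piping raw frames from ffmpeg.
# build_decode_cmd/decode_frames are hypothetical helpers for illustration.
import subprocess


def build_decode_cmd(video_path, fps=1, width=224, height=224):
    """Build an ffmpeg command that writes raw RGB24 frames to stdout."""
    return [
        "ffmpeg", "-i", video_path,
        "-vf", f"fps={fps},scale={width}:{height}",
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "pipe:1",
    ]


def decode_frames(video_path, fps=1, width=224, height=224):
    """Yield raw RGB frames (bytes) decoded on the fly, no temp files."""
    frame_bytes = width * height * 3  # 3 bytes per pixel for rgb24
    proc = subprocess.Popen(
        build_decode_cmd(video_path, fps, width, height),
        stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
    )
    while True:
        buf = proc.stdout.read(frame_bytes)
        if len(buf) < frame_bytes:  # stream exhausted
            break
        yield buf
    proc.wait()
```

Reading from a pipe this way avoids the disk I/O of pre-extracted frames, which is the point of the fast-IO setup above.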

2. pytorch video

Install pytorchvideo (for data augmentation) as below:

pip install ffmpeg-python
pip install pytorchvideo

Download Pretrained Weights

We provide the following pretrained weights on Google Drive.

| Model | PT Data | Parameters | Pretrained Weight | Training Log | Hparams |
|---|---|---|---|---|---|
| All-in-one-Ti | WebVid+HowTo | 12M | Google Drive | Google Drive | Google Drive |
| All-in-one-S | WebVid+HowTo | 33M | Google Drive | Google Drive | Google Drive |
| All-in-one-B | WebVid+HowTo | 110M | Google Drive | Google Drive | Google Drive |
| All-in-one-B+ | WebVid+HowTo+CC3 | 110M | Google Drive | Google Drive | Google Drive |
| All-in-one-B+ | WebVid+YTT+HowTo+CC3+CC12+CoCo+VG+SBU | 110M | Google Drive | Google Drive | Google Drive |

After downloading these pretrained weights, move them into the pretrained dir.

mkdir pretrained
cp *.ckpt pretrained/
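Before launching training, it can help to confirm the checkpoints actually landed in pretrained/. A minimal stdlib-only sketch (the helper name is an assumption, not part of the repo):

```python
# Hedged sketch: sanity-check the pretrained/ directory before training.
# find_checkpoints is a hypothetical helper for illustration only.
from pathlib import Path


def find_checkpoints(pretrained_dir="pretrained"):
    """Return the sorted file names of all .ckpt files in the given dir."""
    return sorted(p.name for p in Path(pretrained_dir).glob("*.ckpt"))
```

If this returns an empty list, the `cp *.ckpt pretrained/` step above did not pick up the downloaded weights.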

Compare with state-of-the-arts

| Model | Param | Data | Frames | TGIF-Action | TGIF-Frame | MSR R@5 | MSR R@10 |
|---|---|---|---|---|---|---|---|
| ClipBERT | 137M | I:CoCo+VG | 8 x 2 | 82.9 | 59.4 | 49.2 | 63.5 |
| VIOLET | 198M | V:WebVid + I:CC3 | 16 | 87.1 | - | 63.0 | 73.4 |
| All-in-one-S | 33M | V:WebVid+HowTo | 3 | 91.2 | 64.0 | 61.5 | 70.9 |
| All-in-one-B | 110M | V:WebVid+HowTo | 3 | 92.9 | 64.2 | 67.0 | 77.1 |
| All-in-one-B+ | 110M | V:WebVid + I:CC3 | 3 | 95.4 | 67.2 | 68.1 | 77.3 |
| All-in-one-B+ | 110M | V:WebVid+YTT+HowTo + I:CC3+CC12+CoCo+VG+SBU | 3 | 96.3 | 68.5 | 70.3 | 79.2 |

In this table, I is short for Image and V for Video.

Dataset Preparation

See DATA.md

Pre-training

Full Video Pre-training

See TRAIN.md

Co-training with Image Dataset (All-in-one+)

See COTRAIN.md

Evaluation on Downstream Tasks

See EVAL.md

Thanks to its unified design and sparse sampling, All-in-one requires far fewer FLOPs than prior models.
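The sparse sampling idea amounts to scoring a video from just a few evenly spaced frames (3 at evaluation time in the comparison table) rather than a dense clip. A minimal sketch of one common way to pick such indices; the midpoint-of-segment layout is an illustrative assumption, not necessarily the repo's exact scheme:

```python
# Hedged sketch: sparse frame sampling via segment midpoints.
# sparse_sample_indices is a hypothetical helper for illustration only.
def sparse_sample_indices(num_frames, num_samples=3):
    """Pick num_samples evenly spaced frame indices from [0, num_frames)."""
    if num_frames <= 0 or num_samples <= 0:
        return []
    seg = num_frames / num_samples
    # Take the midpoint of each of num_samples equal-length segments.
    return [min(int(seg * (i + 0.5)), num_frames - 1)
            for i in range(num_samples)]
```

Decoding 3 frames instead of a dense 16-frame clip is where most of the FLOPs saving comes from.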

Citation

If you find our work helpful, please cite our paper.

@inproceedings{wang2022allinone,
  title={All in One: Exploring Unified Video-Language Pre-training},
  author={Wang, Alex Jinpeng and Ge, Yixiao and Yan, Rui and Ge, Yuying and Lin, Xudong and Cai, Guanyu and Wu, Jianping and Shan, Ying and Qie, Xiaohu and Shou, Mike Zheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023}
}

Contact

Email: awinyimgprocess at gmail dot com

If you have any problems, or difficulty reproducing the results reported here, email me or open an issue. We are also happy to merge code that transfers All-in-one to other tasks or datasets.

Acknowledgement

This work is mainly based on ViLT, Frozen and Merlot.

License

MIT