This is the official implementation of the CVPR 2020 paper "Context-Aware and Scale-Insensitive Temporal Repetition Counting".
This code is built on the project "3D ResNets for Action Recognition".
- PyTorch 1.0
- Python 2
- Download the UCF101 dataset from https://www.crcv.ucf.edu/data/UCF101.php.
- Convert the UCF101 videos from .avi to .png frames and put the frames under data/ori_data/ucf526/imgs/train (see the frame-extraction sketch after this list).
- Create a soft link so the validation split points to the training images:
```
cd data/ori_data/ucf526/imgs
ln -s train val
```
- Download the annotations (Google Drive, or Baidu Netdisk, code: n5za) and put them in data/ori_data/ucf526/annotations.
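For the video-to-png conversion, a minimal OpenCV sketch like the one below works. This helper is not part of this repository: the function name, the %05d.png naming, and the per-video subfolder layout are assumptions, so check them against the repo's data-loading code before use. The same helper can be pointed at the QUVA and YTSegments videos by changing the extension.

```python
# Hypothetical helper (not part of this repository); assumes the
# opencv-python package is installed. Dumps every frame of each video in
# src_dir to dst_dir/<video_name>/00000.png, 00001.png, ...
import os
import cv2

def extract_frames(src_dir, dst_dir, ext=".avi"):
    for name in sorted(os.listdir(src_dir)):
        if not name.endswith(ext):
            continue
        out_dir = os.path.join(dst_dir, os.path.splitext(name)[0])
        if not os.path.isdir(out_dir):
            os.makedirs(out_dir)
        cap = cv2.VideoCapture(os.path.join(src_dir, name))
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(os.path.join(out_dir, "%05d.png" % idx), frame)
            idx += 1
        cap.release()

# Example (destination path as used in this README):
# extract_frames("/path/to/UCF101_avi", "data/ori_data/ucf526/imgs/train")
```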
- Download the QUVA dataset from http://tomrunia.github.io/projects/repetition/.
- Put the label files in data/ori_data/QUVA/annotations/val.
- Convert the QUVA videos to .png frames (the extraction sketch above applies) and put them under data/ori_data/QUVA/imgs.
- Download the YTSegments (YT_seg) dataset from https://github.com/ofirlevy/repcount.
- Put the label files in data/ori_data/YT_seg/annotations.
- Convert the YTSegments videos to .png frames (again using the sketch above) and put them under data/ori_data/YT_seg/imgs.
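After these steps the data directory should look roughly as follows (inferred from the paths above; only the folders mentioned in this README are shown):

```
data/ori_data/
├── ucf526/
│   ├── imgs/
│   │   ├── train/        # extracted .png frames
│   │   └── val -> train  # soft link
│   └── annotations/
├── QUVA/
│   ├── imgs/
│   └── annotations/val/
└── YT_seg/
    ├── imgs/
    └── annotations/
```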
Train from scratch
```
python main.py
```
If you want to finetune the model pretrained on Kinetics, first download the pretrained model here and run:
```
python main.py --pretrain_path pretrained_model_path
```
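For reference, the sketch below shows a common PyTorch pattern for loading Kinetics-pretrained 3D-ResNet weights while skipping shape-mismatched layers such as the classification head. It is a generic illustration under assumptions (the 'state_dict' checkpoint key, the shape-filtering policy), not this repository's actual loading code:

```python
# Generic PyTorch finetuning sketch, NOT this repository's actual code.
import torch

def load_pretrained(model, pretrain_path):
    checkpoint = torch.load(pretrain_path, map_location="cpu")
    # 3D-ResNet releases often store weights under 'state_dict' (assumption).
    state_dict = checkpoint.get("state_dict", checkpoint)
    # Keep only parameters whose names and shapes match the current model
    # (drops, e.g., a head trained for the Kinetics classes), load the rest.
    own = model.state_dict()
    filtered = {k: v for k, v in state_dict.items()
                if k in own and v.shape == own[k].shape}
    own.update(filtered)
    model.load_state_dict(own)
    return model
```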
You can also run the trained model provided by us (Google Drive or Baidu Netdisk, code: na81):
```
python main.py --no_train --resume_path trained_model_path
```
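Repetition counting is evaluated in the paper with MAE (absolute counting error normalized by the ground-truth count) and OBO (off-by-one accuracy: the fraction of videos whose predicted count is within one of the ground truth). A minimal sketch of these metrics; the function names below are illustrative, not taken from this code base:

```python
# Standard repetition-counting metrics (illustrative implementation).

def mae(pred_counts, gt_counts):
    # Mean absolute error, normalized by the ground-truth count.
    return sum(abs(p - g) / float(g)
               for p, g in zip(pred_counts, gt_counts)) / len(gt_counts)

def obo(pred_counts, gt_counts, tol=1):
    # Off-by-one accuracy: fraction of videos counted within +/- tol.
    hits = sum(1 for p, g in zip(pred_counts, gt_counts) if abs(p - g) <= tol)
    return hits / float(len(gt_counts))

# e.g. mae([10, 5], [9, 5]) -> ~0.056, obo([10, 5], [9, 5]) -> 1.0
```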
If you use this code or pre-trained models, please cite the following:
```
@InProceedings{Zhang_2020_CVPR,
  author    = {Zhang, Huaidong and Xu, Xuemiao and Han, Guoqiang and He, Shengfeng},
  title     = {Context-Aware and Scale-Insensitive Temporal Repetition Counting},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2020}
}
```