# Preview
The 10 categories collected in this dataset. See the discovery demo here.
# Description
The Co-Sum dataset serves as a benchmark for validating video co-summarization techniques, where the goal is to create summaries from a collection of videos on the same topic. The dataset was collected from YouTube using 10 queries and contains 51 videos totaling 147 minutes 40 seconds. We release the video URLs, preprocessed shot indices, and annotations so that our results can be reproduced.
This dataset has been evaluated on two tasks:
- Adaptive video summarization: create a summary for each video, adapted to a query string
- Concept visualization: generate visual (video) concepts from a query string (e.g., "surf" and "bike polo")
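As a toy illustration of query-adaptive summarization (this is not the paper's method, which is based on visual co-occurrence; see Sec. 3 of the paper), one could score each shot against a query representation and keep the top-scoring shots. The feature vectors and the `summarize` helper below are hypothetical, for illustration only:

```python
# Toy sketch: pick the k shots whose (hypothetical) feature vectors
# are most similar to a query vector, then restore temporal order.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def summarize(shot_features, query_vec, k=2):
    """Return indices of the k shots best matching the query."""
    ranked = sorted(range(len(shot_features)),
                    key=lambda i: cosine(shot_features[i], query_vec),
                    reverse=True)
    return sorted(ranked[:k])  # keep temporal order in the summary

# Hypothetical 2-D features for three shots; query favors the first axis.
shots = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(summarize(shots, [1.0, 0.0], k=2))  # → [0, 1]
```

In practice the features would come from a visual encoder, and the query string would be mapped into the same feature space.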
More info:
- Links: project page | evaluation page | paper (2.3M) | poster (14M) | slides (5.6M) | supp (12M)
- Contact: Please send comments to Wen-Sheng Chu (wschu@cmu.edu)
- Citation: Please cite the following paper if you use this dataset in a publication:
  @inproceedings{chu2015video,
    title={Video co-summarization: Video summarization by visual co-occurrence},
    author={Chu, Wen-Sheng and Song, Yale and Jaimes, Alejandro},
    booktitle={CVPR},
    year={2015}
  }
# Shot Indices
The shot indices used in the paper can be found in the shots/ directory.
Note that the indices can be post-processed into smaller shots to avoid overly long shots (see Sec. 3.1 in the paper).
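The post-processing step above can be sketched as follows. The file format and the maximum shot length are assumptions (the dataset docs do not specify them): here each shot file is taken to be a sorted list of shot-boundary frame indices, and `max_len` is a hypothetical cap.

```python
# Hedged sketch: insert extra boundaries so no shot exceeds max_len frames.
# The boundary-list format and max_len value are assumptions, not part of
# the released dataset specification.

def split_long_shots(boundaries, max_len=150):
    """Given sorted shot-boundary frame indices, subdivide any shot
    longer than max_len frames by inserting additional boundaries."""
    out = []
    for start, end in zip(boundaries, boundaries[1:]):
        out.append(start)
        pos = start + max_len
        while pos < end:  # shot is too long: cut it into max_len pieces
            out.append(pos)
            pos += max_len
    out.append(boundaries[-1])  # keep the final boundary
    return out

# A 400-frame shot followed by a 100-frame shot, with max_len = 150.
print(split_long_shots([0, 400, 500], max_len=150))  # → [0, 150, 300, 400, 500]
```

Shots already shorter than the cap are left untouched; only the overlong ones gain new internal boundaries.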
# Video URLs
01: Base jump
02: Bike polo
03: Eiffel tower
04: Excavators river cross
05: Kids play in leaves
06: MLB
07: NFL
08: Notre dame cathedral
09: Statue of liberty
10: Surf