Note: the Visual Queries (VQ) test annotations (for the challenge) were recently released. If needed, re-download the annotations dataset, e.g.:

```bash
python -m ego4d.cli.cli --output_directory="~/ego4d_data" --datasets annotations
```
EGO4D is the world's largest egocentric (first-person) video ML dataset and benchmark suite, with 3,600 hrs (and counting) of densely narrated video and a wide range of annotations across five new benchmark tasks. It covers hundreds of scenarios (household, outdoor, workplace, leisure, etc.) of daily-life activity captured in the wild by 926 unique camera wearers from 74 worldwide locations and 9 different countries. Portions of the video are accompanied by audio, 3D meshes of the environment, eye gaze, stereo, and/or synchronized video from multiple egocentric cameras at the same event. The approach to data collection was designed to uphold rigorous privacy and ethics standards, with consenting participants and robust de-identification procedures where relevant.
- Public documentation (start here): Ego4D Docs
- To download and access the data via the CLI: CLI README
- For a demo annotation notebook: Annotation Notebook
- For the visualization engine: Viz README
- For feature extraction: Feature README
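As a quick sanity check after downloading, the annotations are plain JSON and can be inspected with the standard library. The sketch below is illustrative only: the `v1` subdirectory layout and the file names `ego4d.json` and `annotations/vq_val.json` are assumptions based on the CLI's default output, not a guaranteed schema; adjust them to match your setup.

```python
import json
from pathlib import Path

# Assumed default layout from the CLI command above; change if you used a
# different --output_directory or dataset version.
EGO4D_ROOT = Path("~/ego4d_data/v1").expanduser()


def load_json(relpath: str) -> dict:
    """Read one JSON file from the downloaded annotations."""
    with open(EGO4D_ROOT / relpath) as f:
        return json.load(f)


# Top-level metadata manifest: one record per video.
meta = load_json("ego4d.json")
print(f"{len(meta['videos'])} videos in the metadata manifest")

# Visual Queries (VQ) annotations; the test split ships without ground truth.
vq_val = load_json("annotations/vq_val.json")
print(f"{len(vq_val['videos'])} videos in the VQ validation split")
```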
Ego4D is released under the MIT License.