
[ECCV2024] Video Foundation Models & Data for Multimodal Understanding


InternVideo: Video Foundation Models for Multimodal Understanding


*(Figure: InternVideo2 performance overview.)*

This repo contains the InternVideo series and related work on video foundation models.

  • InternVideo: general video foundation models via generative and discriminative learning
  • InternVideo2: scaling video foundation models for multimodal video understanding
  • InternVid: a large-scale video-text dataset for multimodal understanding and generation

Updates

  • 2024.06: The full version of the video annotation (230M video-text pairs) for InternVid (OpenDataLab | HuggingFace) is released.
  • 2024.04: Checkpoints and scripts of InternVideo2 are released.
  • 2024.03: The technical report of InternVideo2 is released.
  • 2024.01: InternVid (a video-text dataset for video understanding and generation) has been accepted as a spotlight presentation at ICLR 2024.
  • 2023.07: The video-text dataset InternVid is released here to facilitate multimodal understanding and generation.
  • 2023.05: Video instruction data are released here for tuning end-to-end video-centric multimodal dialogue systems such as VideoChat.
  • 2023.01: The code & models of InternVideo are released.
  • 2022.12: The technical report of InternVideo is released.
  • 2022.09: Press releases of InternVideo (official | 163 news | qq news).

Contact

  • If you have any questions about trying, running, or deploying the models, or any ideas or suggestions for the project, feel free to join our WeChat group discussion!
  • We are hiring researchers, engineers, and interns in the General Vision Group, Shanghai AI Lab. If you are interested in working with us on video foundation models and related topics, please contact Yi Wang (wangyi@pjlab.org.cn).