video-pretrained-transformer

Multi-modal video-to-text model that combines embeddings from Flan-T5, CLIP, Whisper, and a scene-graph model. The backbone LLM is pre-trained from scratch on YouTube data (the YT-1B dataset).
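As a rough illustration of the multi-modal fusion idea described above, the sketch below projects per-modality token embeddings into a shared hidden size and concatenates them into one sequence for a backbone LLM. All names and dimensions here (`MODALITY_DIMS`, `D_MODEL`, `fuse`) are hypothetical, chosen for illustration; the repository does not specify the actual widths or fusion mechanism.

```python
import numpy as np

# Hypothetical embedding widths for illustration only; the real model's
# dimensions are not specified in this README.
MODALITY_DIMS = {"clip": 768, "whisper": 512, "scene_graph": 256}
D_MODEL = 1024  # assumed shared hidden size of the backbone LLM

rng = np.random.default_rng(0)

# One linear projection per modality (randomly initialized here; in the
# real model these would be learned), mapping each modality's native
# embedding width into the backbone's hidden size.
projections = {
    m: rng.standard_normal((d, D_MODEL)) * d**-0.5
    for m, d in MODALITY_DIMS.items()
}

def fuse(per_modality_tokens):
    """Project each modality's token embeddings to D_MODEL and
    concatenate along the sequence axis, yielding one multi-modal
    token sequence the backbone LLM could attend over."""
    projected = [
        tokens @ projections[m] for m, tokens in per_modality_tokens.items()
    ]
    return np.concatenate(projected, axis=0)

# Toy inputs: 8 CLIP frame tokens, 20 Whisper audio tokens,
# 5 scene-graph tokens.
seq = fuse({
    "clip": rng.standard_normal((8, 768)),
    "whisper": rng.standard_normal((20, 512)),
    "scene_graph": rng.standard_normal((5, 256)),
})
print(seq.shape)  # (33, 1024)
```

Concatenating along the sequence axis (rather than summing) lets the backbone's attention weigh each modality's tokens independently, which is a common design choice in multi-modal transformers.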

Primary language: Jupyter Notebook · License: MIT
