Given a long video, we turn it into a document containing visual and audio information. By sending this document to ChatGPT, we can chat about the video!
- 23/April/2023: We release the Hugging Face Gradio demo!
- 20/April/2023: We release our project on GitHub along with a local Gradio demo!
Done
- LLM Reasoner: ChatGPT (multilingual) + LangChain
- Vision Captioner: BLIP2 + GRIT
- ASR Translator: Whisper (multilingual)
- Video Segmenter: KTS
- Huggingface Space
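The components above fit together as: the Video Segmenter (KTS) cuts the video into clips, the Vision Captioner (BLIP2 + GRIT) and ASR Translator (Whisper) describe each clip, and the results are flattened into one text document for the LLM Reasoner. A minimal sketch of that final flattening step is below; the `Segment` class and `build_video_document` function are illustrative placeholders, not the repo's actual API:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # segment start time in seconds (from KTS)
    end: float     # segment end time in seconds
    caption: str   # visual description (would come from BLIP2 + GRIT)
    speech: str    # transcribed speech (would come from Whisper)

def build_video_document(segments):
    """Flatten per-segment visual + audio info into one text document
    that can be sent to ChatGPT as context for chatting over the video."""
    lines = []
    for seg in segments:
        lines.append(f"[{seg.start:.1f}s - {seg.end:.1f}s] "
                     f"visual: {seg.caption} | speech: {seg.speech}")
    return "\n".join(lines)

# Toy example: two segments a KTS-style segmenter might produce.
segs = [
    Segment(0.0, 4.2, "a person picks up a watermelon", "let's find a ripe one"),
    Segment(4.2, 9.0, "the person taps the watermelon", "sounds hollow, good"),
]
print(build_video_document(segs))
```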
Doing
- Optimize codebase efficiency
- Improve vision models: MiniGPT-4 / LLaVA, the Segment Anything family
- Improve the ASR Translator for better alignment
- Introduce temporal dependency
- Replace ChatGPT with our own trained LLM
Please find installation instructions in install.md.
python main.py --video_path examples/buy_watermelon.mp4 --openai_api_key xxxxx
The generated video document will be saved in examples/buy_watermelon.log.
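Once the video document is saved, chatting over the video amounts to wrapping it in a prompt for the LLM. A minimal sketch of that step, assuming the `.log` file is plain text; the function name and prompt wording here are illustrative, not the repo's actual implementation:

```python
def build_chat_prompt(doc_path, question):
    """Load the generated video document and wrap it into a prompt
    that an LLM (e.g. ChatGPT via LangChain) can answer over."""
    with open(doc_path, encoding="utf-8") as f:
        video_doc = f.read()
    return (
        "You are given a document describing a video "
        "(visual captions and transcribed speech per segment).\n\n"
        f"{video_doc}\n\n"
        f"Question about the video: {question}"
    )
```

The returned string can then be sent as a user message to the chat API of your choice.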
python main_gradio.py --openai_api_key xxxxx
Stay tuned for our project 🔥
If you have more suggestions or functions you would like to see implemented in this codebase, feel free to email us at kevin.qh.lin@gmail.com or leiwx52@gmail.com, or open an issue.
This work is based on ChatGPT, BLIP2, GRIT, KTS, Whisper, LangChain, and Image2Paragraph.