
⛰️Valley: Video Assistant with Large Language model Enhanced abilitY

Understanding Complex Videos Relying on Large Language and Vision Models

[Project Page] [Paper] [Demo]

The online demo is no longer available because we have released the code for offline demo deployment.

Video Assistant with Large Language model Enhanced abilitY
Ruipu Luo*, Ziwang Zhao*, Min Yang* (*Equal Contribution)


Generated by stablecog via "A cute llama with valley"

Code License | Data License

Usage and License Notices: The data, code, and checkpoints are intended and licensed for research use only. They are additionally restricted to uses that follow the license agreements of LLaMA, Vicuna, and GPT-4. The dataset is licensed under CC BY-NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.

Release

  • [7/23] 🫧 We modified our training code to make Valley easier to train and added support for LoRA training.
  • [7/5] 🫧 Released the training code for Valley and uploaded our pretraining data.
  • [6/21] 🫧 Uploaded the offline demo code.
  • [6/14] 🫧 Built a share-link [demo].
  • [6/13] 🫧 Uploaded the model weights of Valley-13b-v1-delta.
  • [6/12] 🫧 Released Valley: Video Assistant with Large Language model Enhanced abilitY. Check out the paper.

Todo

  • Release inference code
  • Upload the weights of Valley-v1 and build a share-link demo
  • Upload the offline demo code
  • Release the 703K pretraining data and 40K instruction-tuning data
  • Upload pretraining and fine-tuning code
  • Upload the weights of Valley-GLM-6B and Valley-v3

Install

  1. Clone this repository and navigate to the Valley folder
git clone https://github.com/RupertLuo/Valley.git
cd Valley
  2. Install the package
conda create -n valley python=3.10 -y
conda activate valley
pip install --upgrade pip 
pip install -e .
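
To confirm that the editable install succeeded (assuming the package is importable as valley, as the repository layout suggests), run:

python -c "import valley"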

Valley Weights

We release Valley delta weights to comply with the LLaMA model license. You can apply these delta weights to the original LLaMA model weights by following the instructions below:

  1. Get the original LLaMA weights in Hugging Face format by following the instructions here (see the example command after this list).
  2. Use the following script to obtain Valley weights by applying our delta (13b-v1).
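
For step 1, the conversion script that ships with the transformers library can be used; the command looks roughly like this (the input and output paths below are placeholders):

python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/downloaded/llama/weights --model_size 13B --output_dir /path/to/llama-13b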

Valley 13b v1

python3 valley/model/apply_delta.py \
    --base /path/to/llama-13b \
    --target /output/path/to/Valley-13B-v1 \
    --delta /path/to/valley-13b-v1-delta

Web UI


The framework of this web UI comes from LLaVA and FastChat; we modified part of the code so that the demo supports both video and image inputs.

Launch a controller

python valley/serve/controller.py

Launch a model worker

python valley/serve/model_worker.py --model-path /path/to/valley-13b-v1

P.S.: At present, only single-GPU mode is supported for loading the model, and at least 30 GB of GPU memory is required, so the graphics card needs to be at least a Tesla V100.
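
If you are not sure whether your card is large enough, a quick check of total GPU memory with PyTorch (assuming CUDA device 0) is:

python -c "import torch; print(torch.cuda.get_device_properties(0).total_memory / 2**30, 'GiB')"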

Launch a gradio demo

python valley/serve/gradio_web_server_video.py --share

Run Valley Inference from the Command Line

Inference CLI

python3 inference/run_valley.py --model-name [PATH TO VALLEY WEIGHT] --video_file [PATH TO VIDEO] --quary [YOUR QUERY ON THE VIDEO]
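
For example, assuming the merged weights from the Valley Weights section above and a local video file (both paths below are only placeholders):

python3 inference/run_valley.py --model-name /output/path/to/Valley-13B-v1 --video_file examples/demo.mp4 --quary "What is happening in this video?"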

Train Valley Step By Step

Inspired by LLaVA, we adopt a two-stage training method. The pre-training stage uses Valley-webvid2M-Pretrain-703K and LLaVA-CC3M-Pretrain-595K, and the fine-tuning stage uses LLaVA-instruct-150K, VideoChat-instruct-11K, and Valley-instruct-40K (still being generated and cleaned; Valley-13b-v1 was trained on the first two instruction-tuning datasets).

We have modified our training code for Valley and manage the model hyperparameters with YAML files. Run the following two scripts to train Valley.

Pretrain

The LLM backbones currently supported for pre-training are LLaMA (7B, 13B), Vicuna (7B, 13B), Stable-Vicuna (13B), and Llama 2 (chat-7B, chat-13B). You need to download these open-source language model weights yourself and convert them to Hugging Face format.

bash valley/train/train.sh valley/configs/experiment/valley_stage1.yaml

Finetune

bash valley/train/train.sh valley/configs/experiment/valley_stage2.yaml

Acknowledgement

  • LLaVA & MOSS: Thanks to these two repositories for providing high-quality code; our code is based on them.

Citation

If this project is helpful to your research, please consider citing our paper as follows:

@misc{luo2023valley,
      title={Valley: Video Assistant with Large Language model Enhanced abilitY}, 
      author={Ruipu Luo and Ziwang Zhao and Min Yang and Junwei Dong and Minghui Qiu and Pengcheng Lu and Tao Wang and Zhongyu Wei},
      year={2023},
      eprint={2306.07207},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}