VISA: Reasoning Video Object Segmentation via Large Language Model (ECCV 2024)

🚀 Performance

VISA demonstrates remarkable proficiency in handling complex segmentation tasks that require: (a) reasoning based on world knowledge; (b) inference of future events; and (c) a comprehensive understanding of video content.

🛠️ Installation

pip install -r requirements.txt
pip install flash-attn --no-build-isolation
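
After installation, an optional sanity check (a sketch, not part of the official setup) to confirm that PyTorch sees a CUDA device and that flash-attn imports cleanly:

# Optional sanity check (illustrative only, not part of the official setup).
import torch
import flash_attn

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("flash-attn:", flash_attn.__version__)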

🦄 Training and Validation

1. Training Data Preparation

Before training, download the datasets below and set their paths in dataset_config.py (an illustrative path sketch follows the directory layouts below).

LISA's Dataset

Follow LISA to prepare LISA's datasets and store them under $LISA_ROOT as follows.

LISA_ROOT
├── ade20k
├── coco
├── cocostuff
├── llava_dataset
├── mapillary
├── reason_seg
├── refer_seg
└── vlpart
Chat-UniVi's Dataset

Follow Chat-UniVi/Chat-UniVi-Instruct to prepare the Chat-UniVi-Instruct datasets and store them under $ChatUniVi_ROOT as follows.

ChatUniVi_ROOT
├── Fine-tuning
│   ├── MIMIC_imageonly
│   └── VIDEO
└── ScienceQA_tuning
RVOS Datasets
  1. Reasoning Video Segmentation Dataset: ReVOS.
  2. Referring Video Segmentation Datasets: Ref-Youtube-VOS, Ref-DAVIS17, MeViS.
  3. Open-Vocabulary Video Instance Segmentation Dataset: LV-VIS. Download mask_dict.json and meta_expressions.json from OneDrive or BaiduPan, then put the annotation files in the $RVOS_ROOT/lvvis/train directory as follows.
RVOS_ROOT
├── ReVOS
│   ├── JPEGImages
│   ├── mask_dict.json
│   ├── mask_dict_foreground.json
│   ├── meta_expressions_train_.json
│   └── meta_expressions_valid_.json
├── lvvis
│   └── train
│       ├── JPEGImages
│       ├── mask_dict.json
│       └── meta_expressions.json
├── Ref-Youtube-VOS
│   ├── meta_expressions
│   │   ├── train/meta_expressions.json
│   │   └── valid/meta_expressions.json
│   ├── train
│   │   ├── JPEGImages
│   │   └── mask_dict.pkl
│   └── valid
│       └── JPEGImages
├── davis17
│   ├── meta_expressions
│   │   ├── train/meta_expressions.json
│   │   └── valid/meta_expressions.json
│   ├── train
│   │   ├── JPEGImages
│   │   └── mask_dict.pkl
│   └── valid
│       ├── JPEGImages
│       └── mask_dict.pkl
└── mevis
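
For reference, a purely illustrative sketch of the paths dataset_config.py needs; the real variable names live in that file, so adjust them to wherever you stored the three roots above:

# Illustrative only -- the actual variable names are defined in dataset_config.py.
LISA_ROOT = "/path/to/LISA_ROOT"            # ade20k, coco, refer_seg, vlpart, ...
ChatUniVi_ROOT = "/path/to/ChatUniVi_ROOT"  # Fine-tuning, ScienceQA_tuning
RVOS_ROOT = "/path/to/RVOS_ROOT"            # ReVOS, lvvis, Ref-Youtube-VOS, davis17, mevis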

2. Pre-trained weights

Chat-UniVi

To train VISA-7B or VISA-13B, download the corresponding Chat-UniVi weights from Chat-UniVi-7B or Chat-UniVi-13B.

SAM

Download the SAM ViT-H pre-trained weights (sam_vit_h_4b8939.pth) from the official segment-anything release.
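
If you prefer to script the download, a minimal sketch (assuming the standard segment-anything release URL; verify it against the SAM repository):

# Download the SAM ViT-H checkpoint (assumes the standard segment-anything release URL).
import urllib.request

SAM_URL = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth"
urllib.request.urlretrieve(SAM_URL, "sam_vit_h_4b8939.pth")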

3. Training VISA

# Training VISA-7B
bash scripts/train_7b.sh 

# Extract fp32 consolidated weights from a ZeRO-1/2/3 DeepSpeed checkpoint.
cd /PATH/TO/VISA-7B/ckpt_model && python zero_to_fp32.py . ../pytorch_model.bin

# Merge the LoRA weights and save a Hugging Face model
CUDA_VISIBLE_DEVICES="" python merge_lora_weights_and_save_hf_model.py \
  --version Chat-UniVi/Chat-UniVi \
  --weight /PATH/TO/VISA-7B/pytorch_model.bin \
  --save_path /PATH/TO/VISA-7B/hf_model
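
Before moving on to validation, an optional check (illustrative only) that the merged Hugging Face checkpoint was actually written to --save_path:

# Illustrative sanity check: confirm the merged HF checkpoint directory looks complete.
from pathlib import Path

save_path = Path("/PATH/TO/VISA-7B/hf_model")
for name in ("config.json", "tokenizer_config.json"):
    print(name, "found" if (save_path / name).exists() else "missing")
print("weight shards:", len(list(save_path.glob("pytorch_model*.bin"))))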

4. Validation

1. Using `VISA` to generate the predicted mask for each video [demo]
deepspeed --master_port=24999 train_ds.py \
  --version="/PATH/TO/VISA-7B/hf_model" \
  --vision_pretrained="/PATH/TO/sam_vit_h_4b8939.pth" \
  --log_base_dir="/PATH/TO/LOG_BASE_DIR" \
  --exp_name="val_7b" \
  --balance_sample \
  --dataset="reason_seg" \
  --sample_rates="13" \
  --val_dataset "revos_valid" \
  --eval_only 
2. Using LLaMA-VID to generate the target frame for each video

You can directly download the results of our run from OneDrive or BaiduPan.

  • Run utils_llamavid/llamavid_server.py to start the API server for LLaMA-VID [demo]

    python utils_llamavid/llamavid_server.py \
        --vision_tower /PATH/TO/eva_vit_g.pth \
        --image_processor /PATH/TO/openai/clip-vit-large-patch14 \
        --model-path /PATH/TO/YanweiLi/llama-vid-13b-full-224-video-fps-1
  • Using the API for inference [demo]

    python utils_llamavid/llamavid_client.py \
        --video_root /PATH/TO/ReVOS/JPEGImages \
        --data_json_file /PATH/TO/ReVOS/meta_expressions_valid_.json
3. Using XMem for mask propagation [demo]
4. Evaluating performance on ReVOS [demo]
cd tools
python eval_revos.py /PATH/TO/FINAL_ANNOTATION [ARGS]
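
For intuition, a hedged sketch of the region-similarity term (J, mask IoU) reported by VOS benchmarks such as ReVOS; eval_revos.py remains the authoritative implementation:

# Hedged sketch of per-frame region similarity (J), i.e. mask IoU.
import numpy as np

def region_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """Per-frame J: intersection-over-union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # both masks empty -> count as a perfect match
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)

# Toy example: the predicted mask overlaps the ground truth on 2 of 3 foreground pixels.
pred = np.zeros((4, 4), dtype=np.uint8); pred[1:3, 1] = 1
gt = np.zeros((4, 4), dtype=np.uint8); gt[1:4, 1] = 1
print(round(region_similarity(pred, gt), 3))  # 0.667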

📑 Todo list

  • Release code with Text-guided Frame Sampler's Local Sampling

  • Release VISA model weights

  • Release code with Text-guided Frame Sampler's Global-Local Sampling

⭐ Cite

If you find this project useful in your research, please consider citing:

@article{yan2024visa,
  title={VISA: Reasoning Video Object Segmentation via Large Language Models},
  author={Yan, Cilin and Wang, Haochen and Yan, Shilin and Jiang, Xiaolong and Hu, Yao and Kang, Guoliang and Xie, Weidi and Gavves, Efstratios},
  journal={arXiv preprint arXiv:2407.11325},
  year={2024}
}

🎖️ Acknowledgement

This work is built upon LLaVA, SAM, LISA, Chat-UniVi, MeViS, LLaMA-VID, and XMem.