SysCV/sam-hq

Request for evaluation code

jameslahm opened this issue · 5 comments

Thank you for your great work! Would you mind sharing the evaluation code on COCO, YTVIS, HQ-YTVIS, and DAVIS? Thank you!

Hi, we provide the COCO evaluation code here. You can put it in the folder sam-hq/eval_coco and test on a single GPU or multiple GPUs.

Our evaluation code is modified from Prompt-Segment-Anything. Please refer to their GitHub page for downloading the pretrained checkpoints (sam-hq/eval_coco/ckpt) and for preparing the environment and data (sam-hq/eval_coco/data).

For example, using 1 GPU or 8 GPUs, you will get a baseline result of AP 48.5.

# 1 GPU
python tools/test.py projects/configs/focalnet_dino/focalnet-l-dino_sam-vit-l-baseline.py --eval segm
# 8 GPUs
bash tools/dist_test.sh projects/configs/focalnet_dino/focalnet-l-dino_sam-vit-l-baseline.py 8 --eval segm

Changing the config to HQ-SAM, you will get our result of AP 49.5.

bash tools/dist_test.sh projects/configs/focalnet_dino/focalnet-l-dino_sam-vit-l.py 8 --eval segm

The result is shown in Table 10 of our paper.
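
For reference, the segm AP numbers above are the standard COCO mask AP computed with pycocotools. Below is a minimal sketch of that final metric step only (it is not our evaluation code, and the annotation/result file paths are just placeholders), assuming the test script has already dumped predictions in COCO result format:

# Minimal sketch: COCO mask AP with pycocotools (file paths are illustrative).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("data/coco/annotations/instances_val2017.json")  # ground-truth annotations
coco_dt = coco_gt.loadRes("results.segm.json")                  # predicted masks in COCO format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="segm")  # "segm" = mask AP, as in --eval segm
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP, AP50, AP75, etc.; the first value is the reported AP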

@ymq2017 Thank you! Would you mind sharing the evaluation code on YTVIS, HQ-YTVIS, and DAVIS? Thanks a lot!

Hi authors, thanks for your great work.
Could you provide the pre-trained checkpoint of FocalNet-DINO that you used? I think I downloaded the right checkpoint, but I met a mismatch problem when loading it.

Hi, we use the following script to download and convert the FocalNet-DINO checkpoint.

# FocalNet-L+DINO
cd ckpt
# requires the `wget` Python package (pip install wget)
python -m wget https://projects4jw.blob.core.windows.net/focalnet/release/detection/focalnet_large_fl4_o365_finetuned_on_coco.pth -o focalnet_l_dino.pth
cd ..
# convert the downloaded checkpoint (overwriting it in place) into the format expected by the configs
python tools/convert_ckpt.py ckpt/focalnet_l_dino.pth ckpt/focalnet_l_dino.pth
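
A key mismatch like the one you saw is the typical symptom of loading the raw release checkpoint without the conversion step above. Purely as a hypothetical illustration of what such a conversion step generally does (this is not the actual tools/convert_ckpt.py, whose key mapping may differ), it unwraps the raw checkpoint and remaps parameter names:

# Hypothetical sketch of a checkpoint conversion step; not the real tools/convert_ckpt.py.
import sys
import torch

src, dst = sys.argv[1], sys.argv[2]

ckpt = torch.load(src, map_location="cpu")
state_dict = ckpt.get("model", ckpt)      # raw releases often nest the weights under "model"

converted = {}
for name, tensor in state_dict.items():
    # Illustrative renaming rule only; the real mapping depends on the detector config.
    converted[name.replace("module.", "")] = tensor

torch.save({"state_dict": converted}, dst)  # mmdet-style checkpoints store weights under "state_dict"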

@ymq2017 How much GPU memory is needed for evaluation? I tried to evaluate using 'projects/configs/hdetr/swin-t-hdetr_sam-vit-b.py', but ran out of memory on a 10GB 2080Ti.