microsoft/FocalNet

Issue reproducing evaluation metric for FocalNet+DINO+O365pretrain

RockeyCoss opened this issue · 2 comments

Thank you for your great work!
However, I am having difficulty reproducing the evaluation metric for the model open-sourced at the link. Specifically, my evaluation result is 0.3 AP lower than the one you reported in the README.
[image: evaluation results]
My command used to run the evaluation is:

python -m torch.distributed.launch --nproc_per_node=4 main.py \
  --output_dir output/path \
  -c config/DINO/DINO_5scale_focalnet_large_fl4.py --coco_path coco/path \
  --eval --resume checkpoint/path
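(Side note: recent PyTorch releases deprecate torch.distributed.launch in favor of torchrun. Assuming main.py reads the rank variables the launcher exports, as the DETR-style init_distributed_mode does, an equivalent invocation would be:)

torchrun --nproc_per_node=4 main.py \
  --output_dir output/path \
  -c config/DINO/DINO_5scale_focalnet_large_fl4.py --coco_path coco/path \
  --eval --resume checkpoint/path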

Could you please help me with this issue? I would be grateful for any guidance on what I might be doing wrong, or for any additional details about the exact process you used to compute the evaluation metric.
Thank you very much!

Hi, @RockeyCoss , thanks for your interest!

I think you are evaluating at the default 800x1333 image resolution. Can you change the base config in DINO_5scale_focalnet_large_fl4.py to https://github.com/FocalNet/FocalNet-DINO/blob/main/config/DINO/coco_transformer_hres.py?
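A minimal sketch of the change, assuming the config follows the DINO convention of listing its transform base in a `_base_` variable (the exact base list in your checkout may differ):

# top of config/DINO/DINO_5scale_focalnet_large_fl4.py
# Point the transform base at the high-resolution transform config so
# --eval runs at the resolution the reported AP was measured at.
_base_ = ['coco_transformer_hres.py']  # was: ['coco_transformer.py']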

Hi, @jwyang, would you consider merging FocalNet/FocalNet-DINO into this repo?