Yejin0111/ADD-GCN

mAP in VOC2007

Closed this issue · 19 comments

args.seed = 1
args.lr = 0.05
args.image_size = 448
args.batch_size = 16 * 2
args.epoch_step = [30, 40]
the test size is 576
I followed the configuration above and used the model trained on COCO as the pre-trained model for Pascal VOC, but the best test mAP on VOC2007 is only 94.04%. How can I close the gap?
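For reference, the configuration above can be expressed as an argparse sketch. Argument names follow this thread's wording, not necessarily the repo's actual `main.py`:

```python
# Hypothetical argparse sketch of the settings quoted in this thread.
import argparse

def build_args(argv=None):
    parser = argparse.ArgumentParser(description="ADD-GCN VOC2007 fine-tuning config (sketch)")
    parser.add_argument("--seed", type=int, default=1)
    parser.add_argument("--lr", type=float, default=0.05)
    parser.add_argument("--image_size", type=int, default=448)       # training resolution
    parser.add_argument("--test_size", type=int, default=576)        # evaluation resolution
    parser.add_argument("--batch_size", type=int, default=16 * 2)    # 16 per GPU x 2 GPUs
    parser.add_argument("--epoch_step", type=int, nargs="+",
                        default=[30, 40])                            # LR decay epochs
    return parser.parse_args(argv)

args = build_args([])
print(args.batch_size, args.epoch_step)
```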

I used the model trained on COCO as the pre-trained model for Pascal VOC. You can pass --resume {your model path trained on COCO} for VOC training.
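The --resume step above amounts to loading the COCO checkpoint and keeping only the weights that fit the VOC model (an 80-class COCO head will not match a 20-class VOC head). A minimal, framework-agnostic sketch, where the key names and shapes are hypothetical:

```python
# Sketch: keep only checkpoint entries whose name and shape match the new model.
# Shapes stand in for real tensors; key names are illustrative, not the repo's.

def filter_checkpoint(coco_state, voc_state):
    """Return (kept, skipped): the subset of coco_state loadable into voc_state."""
    kept, skipped = {}, []
    for name, shape in coco_state.items():
        if name in voc_state and voc_state[name] == shape:
            kept[name] = shape
        else:
            skipped.append(name)  # e.g. classifier head with a different class count
    return kept, skipped

coco = {"backbone.conv1.weight": (64, 3, 7, 7), "head.weight": (80, 2048)}
voc  = {"backbone.conv1.weight": (64, 3, 7, 7), "head.weight": (20, 2048)}
kept, skipped = filter_checkpoint(coco, voc)
print(sorted(kept), skipped)
```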

Thank you so much !

  1. Yes. 2. No, only the pretrained weights are necessary.

    (The questions were: As a new graduate student, I have two questions. 1. How can args.batch_size be 16 * 2? Does that mean you use 2 GPUs, or set batch_size=32? 2. I set --resume to the checkpoint trained on MS-COCO; do I need to load the start epoch from the checkpoint?)
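The answer above (load the weights, but not the start epoch) can be sketched like this; the checkpoint keys follow common PyTorch conventions and are assumptions, not the repo's exact format:

```python
# Sketch of "only pretrained weights are necessary": restore model weights
# from the COCO checkpoint, but reset the epoch counter (and optimizer state)
# so VOC fine-tuning starts from epoch 0. Key names are hypothetical.

def restore_for_finetune(checkpoint):
    """Return (weights, start_epoch) for fine-tuning on a new dataset."""
    weights = checkpoint["state_dict"]  # model parameters to load
    start_epoch = 0                     # deliberately NOT checkpoint["epoch"]
    return weights, start_epoch

ckpt = {"state_dict": {"w": 1.0}, "epoch": 39, "optimizer": {}}
weights, start_epoch = restore_for_finetune(ckpt)
print(start_epoch)
```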

I used the ML-GCN model trained on COCO as the pre-trained model for Pascal VOC2007, and the best test mAP on VOC2007 is 95.58%.

Hello, sorrowyn. I get a similar result. But I wonder what mAP you got on COCO with ML-GCN. With this paper's data-augmentation setting, I can only obtain about 83% mAP with ML-GCN and a little better with ADD-GCN. Did you get similar results? Thanks.

On COCO, I did not retrain ADD-GCN, because I know it is difficult to achieve the results reported by the author; there will always be a small gap.

Ok, thanks a lot. Did you try to train ML-GCN or ADD-GCN on COCO directly with the backbone (ResNet-101) pretrained on ImageNet?

I tried ADD-GCN on COCO and got 82.6 mAP.

I got 82.51 mAP without resizing to 576.

Thanks for your reply. I am curious what trick is needed to achieve the mAP reported in this paper.

Thanks for your reply.

Thanks for sharing. These experiments on COCO really upset me.

I will try.
This will take some time.

Ok, thanks.

The image size in training is 448 and in testing is 576.
But ML-GCN can only reach 82.x mAP (<82.5).
ADD-GCN is an excellent framework.

Thanks, sorrowyn. By the way, did you train ML-GCN under the ADD-GCN framework? There are some differences in data augmentation, in the image-cropping part, which can boost performance by about 1% mAP in ADD-GCN.
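The cropping difference referred to here is a multi-scale crop: crop sizes are sampled from fixed ratios of the base size and later resized back. A hedged sketch of the size-sampling step (the scale set and sampling details are assumptions, not the repo's verbatim code):

```python
# Sketch of multi-scale crop-size sampling: pick crop height and width
# independently from a set of scale ratios of the base size, allowing
# mild aspect-ratio distortion. Scale values are assumed, not verified.
import random

def sample_crop_size(base_size=448, scales=(1.0, 0.875, 0.75, 0.66, 0.5), rng=None):
    """Return a (crop_h, crop_w) pair drawn from the scaled-size set."""
    rng = rng or random.Random(0)
    sizes = [int(base_size * s) for s in scales]
    return rng.choice(sizes), rng.choice(sizes)

h, w = sample_crop_size()
print(h, w)
```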

Using this augmentation, ML-GCN can also get 83.9 mAP; ADD-GCN is no better than ML-GCN.
By the way, the author reports 84.2 at 448 size in the README. Can anyone reproduce it?

Yes, thanks. I cannot reproduce it either, so I really wonder how to reach it. Also, I think the backbone ResNet-101 alone, with the same setting, could achieve performance similar to ML-GCN or ADD-GCN.

I think ADD-GCN might have used a ResNet-101 pretrained on ImageNet with some extra data augmentations (e.g. Cutout or GridMask), since a better pretrained backbone can boost multi-label performance, or perhaps a good random seed?
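For illustration, Cutout (one of the augmentations guessed at above) zeroes a random square patch of the input. A pure-Python sketch where a nested list stands in for an image tensor:

```python
# Sketch of Cutout: zero out one length x length patch at a random center,
# clipping the patch at the image border. Nested lists stand in for tensors.
import random

def cutout(img, length, rng=None):
    """img: H x W list of lists; return a copy with one square patch zeroed."""
    rng = rng or random.Random(0)
    h, w = len(img), len(img[0])
    cy, cx = rng.randrange(h), rng.randrange(w)       # random patch center
    y0, y1 = max(0, cy - length // 2), min(h, cy + length // 2)
    x0, x1 = max(0, cx - length // 2), min(w, cx + length // 2)
    out = [row[:] for row in img]                     # leave the input untouched
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = 0
    return out

img = [[1] * 8 for _ in range(8)]
masked = cutout(img, 4)
```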