About Ranking Loss
Burrocode opened this issue · 3 comments
I tried to reproduce your work on the COCO dataset by training from scratch with the suggested settings:
Namespace(batch_size=128, batch_size_eval=32, ckpt='', cnn_type='resnet152', crop_size=224, data_name='coco', data_path='***', debug=False, div_weight=0.1, dropout=0.0, embed_size=1024, eval_on_gpu=False, grad_clip=2.0, img_attention=True, img_finetune=True, legacy=True, log_file='***', log_step=10, logger_name='***', lr=2e-04, margin=0.1, max_video_length=1, max_violation=True, mmd_weight=0.01, num_embeds=2, num_epochs=10, order=False, txt_attention=True, txt_finetune=True, val_metric='rsum', vocab_path='***', weight_decay=0.0, wemb_type='glove', word_dim=300, workers=16)
However, I find that the triplet ranking loss converges to the margin and the final performance is poor.
How can I fix this? Any help would be appreciated. Thank you!
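For context, this is my understanding of a max-violation triplet ranking loss (just a rough sketch, not your actual implementation), which I think explains why the loss can get stuck at a constant determined by the margin once the image and text embeddings collapse:

```python
import torch


def triplet_ranking_loss(im, txt, margin=0.1, max_violation=True):
    """im, txt: L2-normalized embeddings of shape (batch, dim)."""
    scores = im @ txt.t()                 # cosine similarity matrix
    pos = scores.diag().view(-1, 1)       # similarity of the matched pairs

    # hinge loss for both retrieval directions
    cost_txt = (margin + scores - pos).clamp(min=0)      # image -> text
    cost_im = (margin + scores - pos.t()).clamp(min=0)   # text -> image

    # the diagonal holds the positives, so never treat it as a negative
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_txt = cost_txt.masked_fill(mask, 0)
    cost_im = cost_im.masked_fill(mask, 0)

    if max_violation:                     # keep only the hardest negative
        cost_txt = cost_txt.max(dim=1)[0]
        cost_im = cost_im.max(dim=0)[0]

    # If the embeddings collapse so that every similarity is (nearly) equal,
    # each hinge term saturates at `margin` and the summed two-direction loss
    # plateaus at a constant set by the margin (~2 * margin here).
    return cost_txt.mean() + cost_im.mean()
```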
Are you using the same version of PyTorch, etc., as specified in requirements.txt? I have not retrained the model with that parameter setting using the latest versions, so I'm not sure you will get identical results if you are using them.
Closing due to inactivity. Feel free to bring this up again.
I tried to reproduce your model on COCO using the command:
python3 train.py --data_name coco --cnn_type resnet152 --wemb_type glove --margin 0.1 --max_violation --num_embeds 2 --img_attention --txt_attention --mmd_weight 0.01 --div_weight 0.1 --batch_size 256
with PyTorch 1.1.0 and torchvision 0.3.0 on a single RTX 2080 Ti.
However, the loss has been stuck at 0.2001 since epoch 4 (roughly twice the 0.1 margin, so the hinge terms appear to be saturated), and the final performance is poor.
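For reference, this is roughly how I am inspecting the learned similarities to see whether the embeddings have collapsed (my own sketch; `img_emb` and `txt_emb` stand for L2-normalized embeddings computed on one validation batch):

```python
import torch


@torch.no_grad()
def similarity_stats(img_emb, txt_emb):
    """img_emb, txt_emb: L2-normalized embeddings of shape (batch, dim)."""
    scores = img_emb @ txt_emb.t()
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    pos = scores[mask]        # matched image-text pairs
    neg = scores[~mask]       # all mismatched pairs
    # a near-zero gap and near-zero spread would mean the embeddings have
    # collapsed, consistent with the loss being stuck at ~2 * margin
    print(f"pos mean {pos.mean():.4f}, neg mean {neg.mean():.4f}, "
          f"gap {(pos.mean() - neg.mean()):.4f}, score std {scores.std():.4f}")
```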
The evaluation result of your provided checkpoint is fine, so I don't think it's a problem with the environment.
Do you have any suggestions for this problem?
Any help would be appreciated. Thank you!