HobbitLong/PyContrast

Debug advice?

WeihongM opened this issue · 7 comments

Hello, @HobbitLong
Thanks for this project. Can you give me some advice on debugging the intermediate outputs? It would help me understand the code better, but DDP makes it hard to add breakpoints.
Thanks.

I don't have better advice, but maybe you can try printing out the intermediate results and running some unit tests.
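
For anyone else hitting this, here is a minimal sketch (not from PyContrast itself) of two common ways to inspect intermediate results under DistributedDataParallel: print only from rank 0 so the logs stay readable, and fall back to a single process so `pdb` breakpoints work. The function name and the usage snippet are hypothetical.

```python
import torch
import torch.distributed as dist


def debug_print(tag, tensor):
    """Print tensor statistics from rank 0 only (no-op check if dist is not initialized)."""
    rank = dist.get_rank() if dist.is_available() and dist.is_initialized() else 0
    if rank == 0:
        print(f"[{tag}] shape={tuple(tensor.shape)} "
              f"mean={tensor.float().mean().item():.4f} "
              f"std={tensor.float().std().item():.4f}")


# Hypothetical usage inside a training step:
#   feat = encoder(images)
#   debug_print("encoder_feat", feat)
#
# For breakpoints, launching with a single process (the exact flag depends on
# the launch script) lets you drop `import pdb; pdb.set_trace()` without the
# hang you get when several DDP workers stop at the same breakpoint.
```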

Ok, thanks for your reply.

Hello, @HobbitLong
I ran into some problems when training on a custom dataset.
I used MNIST as a toy example. The contrastive learning accuracy quickly reaches 100% within just 5 epochs. However, when I start the linear trainer, the loss is very large and the Top-1 accuracy is only 11%.
Can you explain why this happens?
I also hope you can provide a toy example on a smaller dataset (CIFAR-10); ImageNet is too large for me.
Thanks!

@WeihongM, would tuning the learning rate of the linear classifier work?

Adding a toy example of CIFAR is a good suggestion, and I will consider it!
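
Regarding the learning-rate suggestion above, here is a hedged sketch of linear evaluation with a frozen encoder, showing the knob usually worth sweeping first. `encoder`, `feat_dim`, and the data loader are placeholders, not PyContrast APIs; the values are common starting points, not settings from this repo.

```python
import torch
import torch.nn as nn


def linear_eval(encoder, feat_dim, num_classes, train_loader, epochs=100, lr=30.0):
    # Freeze the pre-trained encoder; only the linear head is trained.
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad = False

    classifier = nn.Linear(feat_dim, num_classes).cuda()
    # A large LR with SGD + momentum is a common starting point for linear probes;
    # if the loss blows up (as in the MNIST report above), sweep lr over e.g. {30, 10, 1, 0.1}.
    optimizer = torch.optim.SGD(classifier.parameters(), lr=lr,
                                momentum=0.9, weight_decay=0)
    criterion = nn.CrossEntropyLoss().cuda()

    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.cuda(), labels.cuda()
            with torch.no_grad():
                feats = encoder(images)
            loss = criterion(classifier(feats), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return classifier
```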

@HobbitLong
I recently trained again on the CIFAR-10 dataset, and the best linear evaluation accuracy is only 67% (I used MoCo in my experiment, with a ResNet-18 encoder). However, SimCLR on CIFAR-10 can reach a linear evaluation accuracy of 94.0%. This is the same phenomenon reported in the CMC cifar10 issue.
Can you share the accuracy you reach when training on CIFAR-10? Maybe there are some parameters to adjust, or maybe self-supervised learning does not work in every setting?

I can reach above 93% on CIFAR-10 with SimCLR; see another repo here.

I think you need to tune the parameters of MoCo for CIFAR-10, but 67% is way too low.
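
For reference, here is a sketch of the kind of changes that usually close most of the gap on CIFAR-10. These are assumptions based on common CIFAR adaptations of MoCo/SimCLR, not settings taken from PyContrast: adapt ResNet-18 to 32x32 inputs and revisit the MoCo queue size and temperature.

```python
import torch.nn as nn
from torchvision.models import resnet18


def cifar_resnet18(feat_dim=128):
    model = resnet18(num_classes=feat_dim)
    # The ImageNet-style 7x7 stride-2 conv plus maxpool throws away too much
    # resolution at 32x32; replace it with a 3x3 stride-1 conv and drop the maxpool.
    model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
    model.maxpool = nn.Identity()
    return model


# Typical MoCo knobs to revisit on CIFAR-10 (hypothetical values):
#   queue size K: e.g. 4096 instead of 65536 (the dataset has only 50k images)
#   temperature:  roughly 0.1 - 0.2
#   augmentation: RandomResizedCrop(32, scale=(0.2, 1.0))
#   epochs:       several hundred; short runs tend to under-train the encoder
```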

@HobbitLong Glad to receive your reply; we seem to have a 12-hour time difference.
I will check it in your project.
I found that self-supervised learning takes more time for parameter tuning, and contrastive learning does not necessarily beat direct supervised training on every dataset. Do you agree?
Thanks!