yeephycho/nasnet-tensorflow

Fine-tuning nasnet-large and getting low accuracy

neuhzhj2012 opened this issue · 3 comments

Hi, @yeephycho
I followed your instructions to fine-tune nasnet-large on my own dataset. After 90,000 training steps I get top1=0.026 and top5=0.098, while Inception_v4 on the same dataset reaches top1=0.8 and top5=0.93.
Have you fine-tuned nasnet-large on a custom dataset yourself? Were there any problems during training?
Thanks

Hi, @neuhzhj2012
Sorry for the late reply; I was on vacation.
Yes, I have run into this before and am still working on it. This is a feature, not a bug: I believe the problem lies in the input pre-processing. Check the resize and crop steps; that should help you solve the problem.
If the input is handled correctly, the inference results should be as good as expected.
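A quick way to check this is to push one image through the training and evaluation preprocessing and look at what the network actually receives. A minimal sketch, assuming the TF 1.x slim layout this repo is built on (slim routes nasnet_large through the Inception preprocessing pipeline, and `sample.jpg` stands in for one of your training images):

```python
import tensorflow as tf
from preprocessing import inception_preprocessing  # slim's preprocessing module

IMAGE_SIZE = 331  # NASNet-A large input resolution

with tf.Graph().as_default():
    raw = tf.gfile.GFile('sample.jpg', 'rb').read()  # any image from your dataset
    image = tf.image.decode_jpeg(raw, channels=3)

    # Same call the training script makes: random crop, flip, color distortion.
    train_input = inception_preprocessing.preprocess_image(
        image, IMAGE_SIZE, IMAGE_SIZE, is_training=True)
    # Same call evaluation makes: 87.5% central crop plus resize.
    eval_input = inception_preprocessing.preprocess_image(
        image, IMAGE_SIZE, IMAGE_SIZE, is_training=False)

    # preprocess_image outputs values in [-1, 1]; map back to [0, 255] to view.
    to_uint8 = lambda t: tf.cast((t + 1.0) * 127.5, tf.uint8)

    with tf.Session() as sess:
        train_png, eval_png = sess.run([
            tf.image.encode_png(to_uint8(train_input)),
            tf.image.encode_png(to_uint8(eval_input)),
        ])

for name, png in [('train_input.png', train_png), ('eval_input.png', eval_png)]:
    with open(name, 'wb') as f:
        f.write(png)
```

If `train_input.png` shows only a small fragment of the object, the random crop is likely what is hurting your training.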

So, I will close this issue, since people should be able to reproduce the documented flower-dataset training and evaluation procedure. This issue is solution dependent, and I will not merge the related changes into the repo later.

If my suggestion solves your problem, please let me know when convenient, so that other people can see a direction for solving their own problems.

With thanks and regards!
Yours, Yeephycho

Hi, @neuhzhj2012
Did you train all layers? The large model? What are your GPU model and batch size? Did you feed data via tfrecord or feed_dict?

I am fine-tuning the nasnet large model and was also getting low accuracy. I made one simple change: removing the random cropping from the preprocessing. This augmentation makes training hard to converge, since the crop a given image receives is completely different from one epoch to the next. I don't understand why random cropping is the official Inception v3 preprocessing; after removing it, I obtained a much better result.
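For anyone hitting the same problem, the change amounts to replacing the distorted-bounding-box crop in slim's `inception_preprocessing.preprocess_for_train` with a plain resize of the full frame. A minimal sketch against TF 1.x (the function name `preprocess_for_train_no_crop` is mine; wire it in wherever your training script calls the slim preprocessing):

```python
import tensorflow as tf

def preprocess_for_train_no_crop(image, height, width):
    """Training preprocessing without the random distorted crop.

    Keeps the random horizontal flip and the [-1, 1] scaling from
    slim's inception_preprocessing, but resizes the whole image
    instead of sampling a random sub-crop, so every epoch sees the
    full frame.
    """
    if image.dtype != tf.float32:
        # convert_image_dtype also rescales uint8 pixels to [0, 1].
        image = tf.image.convert_image_dtype(image, dtype=tf.float32)
    # Resize the whole frame to the network input size (no cropping).
    image = tf.image.resize_images(image, [height, width])
    image.set_shape([height, width, 3])
    # Keep a mild augmentation that preserves the object: random flip.
    image = tf.image.random_flip_left_right(image)
    # Scale from [0, 1] to [-1, 1], matching the original pipeline.
    image = tf.subtract(image, 0.5)
    image = tf.multiply(image, 2.0)
    return image
```

Whether this helps probably depends on your data: for images that are already tight crops of the object, the random sub-crop often throws the object away, while for loosely framed photos it can still be a useful augmentation.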