uoguelph-mlrg/theano_alexnet

Training Cost NAN

jiangqy opened this issue · 11 comments

Hi, I would like to train AlexNet on ImageNet. However, after 20 iterations the training cost becomes NaN.
Here are the details:

[training output omitted]

Should I set a smaller learning rate? Could you give me some suggestions?

Thank you~

hma02 commented

@jiangqy
What is your batch size and current learning rate?

I have run into the same problem; my batch size is 256 and my learning rate is 0.01. Do you have any ideas?

@hma02 My batch size is 256 and learning rate is 0.01, too.

hma02 commented

@jiangqy @heipangpang
It looks like you are running the single-GPU train.py, so the problem is not related to weight exchanging.

The cost should be around 6.9 initially.

The unbounded cost value may be caused by gradient explosion. I have run into similar situations when initializing a deep network with weights of large variance and mean. An overly large learning rate or batch size can also make the gradient oscillate strongly.
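If you want to rule out gradient explosion directly, one option is to clip the global gradient norm before applying the update. This is only a generic Theano sketch, not code from this repo; `cost`, `params`, `lr`, and `max_norm` are placeholders for whatever your training script uses:

```python
import theano.tensor as T

def clipped_sgd_updates(cost, params, lr=0.01, max_norm=10.0):
    """SGD updates with global gradient-norm clipping (illustrative sketch)."""
    grads = T.grad(cost, params)
    # Joint L2 norm over all parameter gradients.
    grad_norm = T.sqrt(sum(T.sum(g ** 2) for g in grads))
    # Shrink the gradients when their joint norm exceeds max_norm.
    scale = T.minimum(1.0, max_norm / (grad_norm + 1e-7))
    return [(p, p - lr * scale * g) for p, g in zip(params, grads)]
```

The returned update list would then be passed to theano.function via its updates argument in place of plain SGD updates.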

Also, check the input images as they are loaded to see whether they are preprocessed correctly and correspond to the loaded labels. You can display them using tricks similar to the ones here. As a further test, try using a stack of image_means as the input data.
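For example, a quick offline sanity check of one preprocessed batch might look like the following; the file names are placeholders for wherever your preprocessing scripts saved the batches and the image mean:

```python
import numpy as np

# Placeholder paths; point these at one of your preprocessed batch files
# and at the image mean produced by the preprocessing scripts.
batch = np.load('train_batch_0000.npy')
img_mean = np.load('img_mean.npy')

print('batch shape:', batch.shape, 'dtype:', batch.dtype)
print('min / max / mean:', batch.min(), batch.max(), batch.mean())
print('all zeros?', not np.any(batch))

# The trick mentioned above: build a batch where every example is just the
# image mean, and feed it in place of the real data as a controlled test.
batch_size = batch.shape[-1]   # assuming examples are stacked on the last axis
mean_batch = np.repeat(img_mean[..., np.newaxis], batch_size, axis=-1)
print('mean batch shape:', mean_batch.shape)
```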

@hma02
I will try it. Thank you very much.

@hma02
When I check the output of every layer, I find that layer_input is a zero matrix, which may be why I get such a large training loss.
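One generic way to do this kind of check is to compile a theano.function on the intermediate symbolic expression and evaluate it on a batch. The toy sketch below only illustrates the pattern; the "layer" here is a stand-in, not the repo's actual network:

```python
import numpy as np
import theano
import theano.tensor as T

# Toy stand-in for a network layer: the same pattern works for any
# intermediate symbolic expression in the real graph.
x = T.tensor4('x')                                   # a batch of images
W = theano.shared(np.random.randn(3, 8).astype(theano.config.floatX))
layer_out = T.nnet.relu(T.tensordot(x, W, axes=[[1], [0]]))

inspect = theano.function([x], layer_out)

batch = np.zeros((2, 3, 4, 4), dtype=theano.config.floatX)   # an all-zero batch
print('layer output all zeros?', not np.any(inspect(batch)))
```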

hma02 commented

@heipangpang
Yes, this is probably the reason you got such a large cost. Make sure you set use_data_layer to False in config.yaml. Then layer_input should be equal to x, as shown here, which is the input batch. If x is a zero matrix, there is something wrong with the preprocessed image batches.
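A quick way to double-check the flag, assuming PyYAML is installed and the key sits at the top level of config.yaml with the spelling used above:

```python
import yaml

with open('config.yaml') as f:
    config = yaml.safe_load(f)

# With use_data_layer set to False, the first layer sees the raw input
# batch x, so an all-zero x points at the preprocessed image batches
# rather than at the model itself.
print('use_data_layer:', config.get('use_data_layer'))
```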

@hma02
But when I load the batches by hand in Python, it seems that I get the correct results.
Thank you very much.

@hma02
I am getting the correct results now, thank you very much.

I had the same problem here. If "para_load" is set to False, I can train normally. But I think one of the great contributions of this work is the parallel loading, right?

@heipangpang
Can you please share what exact change made it possible for you to get the correct results?

As you wrote:
"I am getting the correct results now, thank you very much."