Loss stays constant?
petteriTeikari opened this issue · 1 comment
petteriTeikari commented
Hi again @yaringal
Have you run your CIFAR10 demo with Caffe recently? The loss never goes down when I train; I got all the way to 56,000 iterations :) Could there have been further changes in the Caffe backend?
```
I0616 00:24:30.437563 10711 solver.cpp:228] Iteration 55900, loss = 87.3365
I0616 00:24:30.437683 10711 solver.cpp:244] Train net output #0: loss = 87.3365 (* 1 = 87.3365 loss)
I0616 00:24:30.437700 10711 sgd_solver.cpp:106] Iteration 55900, lr = 0.00243129
I0616 00:26:52.689358 10711 solver.cpp:404] Test net output #0: accuracy = 0.1
I0616 00:26:52.689466 10711 solver.cpp:404] Test net output #1: loss = 87.3365 (* 1 = 87.3365 loss)
I0616 00:26:53.020918 10711 solver.cpp:228] Iteration 56000, loss = 87.3365
I0616 00:26:53.020978 10711 solver.cpp:244] Train net output #0: loss = 87.3365 (* 1 = 87.3365 loss)
I0616 00:26:53.020994 10711 sgd_solver.cpp:106] Iteration 56000, lr = 0.00242852
```
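For what it's worth, 87.3365 appears to be the value Caffe's SoftmaxWithLoss layer reports when the predicted probability of the true class underflows to zero, since the per-sample loss is clamped at -log(FLT_MIN); combined with the chance-level accuracy of 0.1 on ten classes, that would suggest the network has diverged rather than merely plateaued:

```
-log(FLT_MIN) = -log(1.17549435e-38) ≈ 87.3365
```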
When I run the train_quick.sh script that ships with Caffe for CIFAR10 (commands below), the loss does go down over iterations. I am not that familiar with Caffe, so I could not spot the error in your config files:

- lenet_dropout_solver.prototxt (in caffe/examples/cifar10)
- lenet_dropout_train_test.prototxt (in caffe/examples/cifar10/cifar10_uncertainty)
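For reference, this is the stock CIFAR10 recipe I ran for the comparison above, using the standard script paths from the BVLC repo:

```sh
cd caffe
./data/cifar10/get_cifar10.sh         # download the CIFAR10 binaries
./examples/cifar10/create_cifar10.sh  # convert them to LMDB
./examples/cifar10/train_quick.sh     # baseline training; the loss decreases here
```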
yaringal commented
Please use Caffe commit https://github.com/BVLC/caffe/tree/12475b9560ee44b65b79cfa547ad7e3d35e8d3de
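In case it helps anyone else hitting this, a minimal way to pin that commit (a sketch assuming the usual Makefile-based Caffe build; adjust the job count to your machine):

```sh
cd caffe
git checkout 12475b9560ee44b65b79cfa547ad7e3d35e8d3de
make clean
make all -j4   # rebuild after switching commits
```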