Luvata/CS330-Stanford-Deep-Multi-Task-and-Meta-Learning

Loss is not going down hw1.py

jmamath opened this issue · 3 comments

Hello,

I tried to implement the HW1 assignment by myself, and I was getting a flat loss and a flat accuracy around 20% in the default case, i.e., num_classes=5, num_samples=1, meta_batch_size=16.
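(For context, a flat ~20% accuracy is exactly chance level for 5-way classification, which suggests the model has not started learning yet rather than being broken. A quick sanity check:)

```python
# Chance-level accuracy for an N-way classification task is 1/N.
num_classes = 5
chance_accuracy = 1.0 / num_classes
print(f"Chance accuracy for {num_classes}-way classification: {chance_accuracy:.0%}")
```

So an accuracy stuck near 20% early in training is consistent with random guessing over 5 classes.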

I then ran your TensorFlow implementation and got the same result.

Is this expected?

Thanks.

It's expected! The loss actually starts to go down around epochs 3000–4000.
How many epochs did you train your model for?
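(Since the drop only shows up after a few thousand steps, it can help to log a moving average of the loss rather than raw per-step values, which are noisy. Below is a minimal, hypothetical helper for that; it is not part of the repo's code:)

```python
from collections import deque


class RunningAverage:
    """Moving average over the last `window` values, so a slow
    decline in the loss after ~3000 steps is easier to spot."""

    def __init__(self, window=100):
        self.buf = deque(maxlen=window)

    def update(self, value):
        # Append the newest value (oldest is dropped automatically
        # once the deque is full) and return the current average.
        self.buf.append(value)
        return sum(self.buf) / len(self.buf)


# Usage: inside the training loop, log `avg.update(loss)` every step.
avg = RunningAverage(window=3)
for loss in [1.0, 3.0, 5.0, 7.0]:
    print(avg.update(loss))
```

Smoothing like this makes it much clearer whether the loss is truly flat or just noisy around a slow downward trend.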

OK, I see. I trained it for just 3,200 epochs. When training for the full 50,000 epochs, I can see the network learning. Thanks.

Glad it works! Peace.