Loss suddenly increases and explodes after ~14k training steps
Why does the loss suddenly increase and explode after roughly 14,000 training steps? The loss value jumped from 1.2 to about 16,000.
I'm fairly sure there's a problem with the training procedure. As training goes on, the gradients explode and the loss becomes enormous. I trained twice: the loss exploded once at around 14,000 steps and once at around 18,000 steps.
Using exponential decay for the learning rate solves the problem above and also improves accuracy: the minimum loss value drops from 1.2 to about 0.4, and the PSNR rises from 45 to 50.
The parameters are set as follows:
import tensorflow as tf

LEARNING_RATE_BASE = 0.001   # initial learning rate
LEARNING_RATE_DECAY = 0.99   # decay rate of the learning rate
LEARNING_RATE_STEP = 100     # decay once after this many batches; usually total samples / BATCH_SIZE

# Counter recording how many batches have been run; starts at 0 and is not trainable.
global_step = tf.Variable(0, trainable=False)

learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE,
                                           global_step,
                                           LEARNING_RATE_STEP,
                                           LEARNING_RATE_DECAY,
                                           staircase=True)
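For the decay to actually take effect, the learning_rate tensor has to be passed to the optimizer, and global_step has to be passed to minimize() so the counter is incremented once per batch. A minimal sketch, assuming a TF1-style graph with an existing loss tensor (here named loss, and the optimizer choice and TOTAL_STEPS are illustrative, not from the original code):

# Hook the decayed learning rate into the optimizer; passing global_step
# to minimize() increments the counter each step, which drives the decay.
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(TOTAL_STEPS):          # TOTAL_STEPS is hypothetical
        sess.run(train_op)                   # feed your batches via feed_dict here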
Where in the code should I change the learning rate?