# AI
## Machine learning
### Backpropagation using gradient descent
- Increased learning rate → faster network convergence, since each weight update takes a larger step (see the sketch after this list)
- Can lead to undesired behavior when computing the gradient for the neurons in the output layer
- The error for each epoch may then remain outside the bounds of the performance goal
- Risk: training of the network can stop during the validation phase
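A minimal sketch of the gradient-descent update for a single output neuron, assuming a linear neuron with squared error; the function and variable names are illustrative, not taken from the original. It shows that the learning rate only scales the step taken along the gradient, so a larger rate means a larger (and possibly overshooting) update.

```python
import numpy as np

def output_layer_step(w, x, target, learning_rate):
    """One gradient-descent update for an output neuron y = w . x (squared error)."""
    y = w @ x                         # forward pass
    error = y - target                # dE/dy for E = 0.5 * (y - target)^2
    grad = error * x                  # dE/dw via the chain rule through the linear neuron
    return w - learning_rate * grad   # larger learning rate -> larger step

w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
for lr in (0.01, 0.5):
    print(lr, output_layer_step(w, x, target=1.0, learning_rate=lr))
```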
Network training is influenced not only by the learning rate, but also by the number of hidden layers, the chosen error rate, and the number of epochs.
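A minimal training-loop sketch of how these factors interact, assuming a one-parameter linear model on synthetic data; the names (`error_goal`, `max_epochs`, `patience`) and the validation-based stopping rule are illustrative assumptions, not taken from the original. Training can end because the error goal is met, because the epoch limit is reached, or because the validation error stops improving.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + rng.normal(0, 0.1, 200)
x_train, y_train = x[:150], y[:150]
x_val, y_val = x[150:], y[150:]

def mse(w, xs, ys):
    return np.mean((w * xs - ys) ** 2)

def train(learning_rate, error_goal=0.02, max_epochs=500, patience=10):
    w, best_val, stalled = 0.0, np.inf, 0
    for epoch in range(1, max_epochs + 1):
        grad = np.mean(2 * (w * x_train - y_train) * x_train)  # dE/dw
        w -= learning_rate * grad                               # gradient-descent step
        if mse(w, x_train, y_train) <= error_goal:              # performance goal reached
            return w, epoch, "error goal"
        val = mse(w, x_val, y_val)
        if val < best_val:
            best_val, stalled = val, 0
        else:
            stalled += 1
            if stalled >= patience:                             # validation stops improving
                return w, epoch, "validation stop"
    return w, max_epochs, "epoch limit"

print(train(learning_rate=0.1))
```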
However, since each weight update scales the gradient by the learning rate, it is easy to see that the choice of learning rate leads to different results in the network training process.
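To make this concrete, a small sketch assuming a one-dimensional quadratic error surface E(w) = (w - 3)^2; the setup is illustrative, not from the original. The same gradient, scaled by two different learning rates, produces one run that converges and one that overshoots further on every step and diverges.

```python
def train(learning_rate, steps=20, w=0.0):
    """Plain gradient descent on E(w) = (w - 3)^2."""
    for _ in range(steps):
        grad = 2 * (w - 3.0)          # dE/dw
        w -= learning_rate * grad     # the step is the gradient scaled by the learning rate
    return w

print(train(0.1))   # approaches the minimum at w = 3
print(train(1.1))   # overshoots more each step and diverges
```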