Variable Learning Rate (traingda, traingdx)
With standard steepest descent, the learning rate is held constant throughout training. The performance of the algorithm is very sensitive to the proper setting of the learning rate. If the learning rate is set too high, the algorithm may oscillate and become unstable. If the learning rate is too small, the algorithm will take too long to converge. It is not practical to determine the optimal setting for the learning rate before training, and, in fact, the optimal learning rate changes during the training process, as the algorithm moves across the performance surface.
The performance of the steepest descent algorithm can be improved if we allow the learning rate to change during the training process. An adaptive learning rate will attempt to keep the learning step size as large as possible while keeping learning stable. The learning rate is made responsive to the complexity of the local error surface.
An adaptive learning rate requires some changes in the training procedure used by traingd. First, the initial network output and error are calculated. At each epoch, new weights and biases are calculated using the current learning rate. New outputs and errors are then calculated.
As with momentum, if the new error exceeds the old error by more than a predefined ratio max_perf_inc (typically 1.04), the new weights and biases are discarded, and the learning rate is decreased (typically by multiplying by lr_dec = 0.7). Otherwise, the new weights and biases are kept. If the new error is less than the old error, the learning rate is increased (typically by multiplying by lr_inc = 1.05).
This procedure increases the learning rate, but only to the extent that the network can learn without large error increases. Thus, a near-optimal learning rate is obtained for the local terrain. When a larger learning rate could result in stable learning, the learning rate is increased. When the learning rate is too high to guarantee a decrease in error, it is decreased until stable learning resumes.
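The logic of this adaptation rule can be summarized in the following MATLAB-style sketch. This is a simplified illustration, not the actual implementation inside traingda; the variable names perf_old, perf_new, W, and W_old are introduced here only for the sketch.

% Simplified sketch of the variable learning rate rule (not the code of traingda).
% perf_old is the error from the previous epoch; perf_new is the error obtained
% after applying the candidate weight and bias update with the current rate lr.
if perf_new > max_perf_inc*perf_old
    % Error grew by more than the allowed ratio:
    % discard the new weights and reduce the learning rate
    W  = W_old;
    lr = lr_dec*lr;          % typically lr_dec = 0.7
else
    % Keep the new weights and biases
    W_old = W;
    if perf_new < perf_old
        % Error decreased: try a larger step on the next epoch
        lr = lr_inc*lr;      % typically lr_inc = 1.05
    end
end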
Try the Neural Network Design Demonstration nnd12vl [HDB96] for an illustration of the performance of the variable learning rate algorithm.
Backpropagation training with an adaptive learning rate is implemented with the function traingda, which is called just like traingd, except for the additional training parameters max_perf_inc, lr_dec, and lr_inc. Here is how it is called to train our previous two-layer network:
p = [-1 -1 2 2;0 5 0 5];
t = [-1 -1 1 1];
net=newff(minmax(p),[3,1],{'tansig','purelin'},'traingda');
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.lr_inc = 1.05;
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-5;
[net,tr]=train(net,p,t);
TRAINGDA, Epoch 0/300, MSE 1.71149/1e-05, Gradient 2.6397/1e-06
TRAINGDA, Epoch 44/300, MSE 7.47952e-06/1e-05, Gradient 0.00251265/1e-06
TRAINGDA, Performance goal met.
a = sim(net,p)
a =
   -1.0036   -0.9960    1.0008    0.9991
The function traingdx combines adaptive learning rate with momentum training. It is invoked in the same way as traingda, except that it has the momentum coefficient mc as an additional training parameter.
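As a sketch, assuming the same two-layer network as above, a call to traingdx might look like the following; the value 0.9 used for mc here is only illustrative:

p = [-1 -1 2 2;0 5 0 5];
t = [-1 -1 1 1];
net=newff(minmax(p),[3,1],{'tansig','purelin'},'traingdx');
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.lr_inc = 1.05;
net.trainParam.mc = 0.9;        % momentum constant (illustrative value)
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-5;
[net,tr]=train(net,p,t);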