Faster Training
The previous section presented two backpropagation training algorithms: gradient descent and gradient descent with momentum. These two methods are often too slow for practical problems. In this section we discuss several high-performance algorithms that can converge from ten to one hundred times faster than the algorithms discussed previously. All of the algorithms in this section operate in batch mode and are invoked using train.
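As an illustrative sketch (the data, network size, and parameter values below are invented for the example, and newff is the network-creation function of this toolbox version), any of the fast algorithms is selected by name when the network is created, or by setting net.trainFcn, and is then invoked with train:

% Toy input/target data (a small batch, for illustration only).
p = [-1 -0.5 0 0.5 1];
t = [-0.8 -0.2 0.1 0.4 0.9];

% Create a 1-5-1 feedforward network and select a fast batch algorithm
% by name; any training function listed in this section can be used here.
net = newff(minmax(p), [5 1], {'tansig','purelin'}, 'trainscg');

% Batch training: train processes the entire data set before each update.
net.trainParam.epochs = 300;   % maximum number of epochs (example value)
net.trainParam.goal   = 1e-4;  % stop when the mean squared error reaches this value
net = train(net, p, t);

% Simulate the trained network on the training inputs.
a = sim(net, p);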
These faster algorithms fall into two main categories. The first category uses heuristic techniques, which were developed from an analysis of the performance of the standard steepest descent algorithm. One heuristic modification is the momentum technique, which was presented in the previous section. This section discusses two more heuristic techniques: variable learning rate backpropagation (traingda) and resilient backpropagation (trainrp).
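Continuing the sketch above, the two heuristic algorithms are selected the same way. The parameter names shown (lr_inc and lr_dec for traingda, delt_inc and delt_dec for trainrp) follow the usual toolbox conventions, and the values are examples only; consult the reference pages of each function for the defaults.

% Variable learning rate backpropagation: the learning rate is adjusted
% during training by these (example) factors.
net.trainFcn = 'traingda';
net.trainParam.lr     = 0.05;  % initial learning rate
net.trainParam.lr_inc = 1.05;  % applied when the error decreases
net.trainParam.lr_dec = 0.7;   % applied when the error increases too much
[net, tr] = train(net, p, t);

% Resilient backpropagation: only the sign of each gradient element is
% used; the step size for each weight is adapted by these (example) factors.
net.trainFcn = 'trainrp';
net.trainParam.delt_inc = 1.2;   % step increase factor
net.trainParam.delt_dec = 0.5;   % step decrease factor
[net, tr] = train(net, p, t);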
The second category of fast algorithms uses standard numerical optimization techniques. (See Chapter 9 of [HDB96] for a review of basic numerical optimization.) Later in this section we present three types of numerical optimization techniques for neural network training: conjugate gradient (traincgf, traincgp, traincgb, trainscg), quasi-Newton (trainbfg, trainoss), and Levenberg-Marquardt (trainlm).
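The numerical optimization algorithms are selected in the same way as the heuristic ones. As a sketch (reusing the toy net, p, and t from the example above, with an algorithm list and parameter values chosen only for illustration), each candidate can be trained from the same initial weights so that their convergence behavior can be compared:

% Train several of the fast algorithms from a common starting point.
algs = {'traincgf', 'trainscg', 'trainbfg', 'trainlm'};
net0 = init(net);                       % common initial weights
for i = 1:length(algs)
    net = net0;
    net.trainFcn = algs{i};             % resets trainParam to that function's defaults
    net.trainParam.epochs = 300;        % example stopping criteria
    net.trainParam.goal   = 1e-4;
    [net, tr] = train(net, p, t);
    fprintf('%s stopped after %d epochs\n', algs{i}, tr.epoch(end));
end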