trainbr
Bayesian regularization backpropagation
Syntax
[net,TR,Ac,El] = trainbr(net,Pd,Tl,Ai,Q,TS,VV,TV)
Description
trainbr is a network training function that updates the weight and bias values according to Levenberg-Marquardt optimization. It minimizes a combination of squared errors and weights, and then determines the correct combination so as to produce a network that generalizes well. The process is called Bayesian regularization.
trainbr(net,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
net -- Neural network
Pd -- Delayed input vectors
Tl -- Layer target vectors
Ai -- Initial input delay conditions
Q -- Batch size
TS -- Time steps
VV -- Either empty matrix [] or structure of validation vectors
TV -- Either empty matrix [] or structure of test vectors
and returns,
net -- Trained network
TR -- Training record of various values over each epoch
Ac -- Collective layer outputs for last epoch
El -- Layer errors for last epoch
Training occurs according to trainbr's training parameters, shown here with their default values:
net.trainParam.epochs 100 -- Maximum number of epochs to train
net.trainParam.goal 0 -- Performance goal
net.trainParam.mu 0.005 -- Marquardt adjustment parameter
net.trainParam.mu_dec 0.1 -- Decrease factor for mu
net.trainParam.mu_inc 10 -- Increase factor for mu
net.trainParam.mu_max 1e10 -- Maximum value for mu
net.trainParam.max_fail 5 -- Maximum validation failures
net.trainParam.mem_reduc 1 -- Factor to use for memory/speed trade-off
net.trainParam.min_grad 1e-10 -- Minimum performance gradient
Dimensions for these variables are:
Pd -- No x Ni x TS cell array, each element Pd{i,j,ts} is a Dij x Q matrix
Tl -- Nl x TS cell array, each element Tl{i,ts} is a Vi x Q matrix
Ai -- Nl x LD cell array, each element Ai{i,k} is an Si x Q matrix
If VV is not [], it must be a structure of validation vectors,
VV.PD -- Validation delayed inputs
VV.Tl -- Validation layer targets
which is normally used to stop training early if the network performance on the validation vectors fails to improve or remains the same for max_fail epochs in a row.
If TV is not [], it must be a structure of test vectors,
TV.PD -- Test delayed inputs
TV.Tl -- Test layer targets
which is used to test the generalization capability of the trained network.
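In normal use these structures are assembled by train rather than built by hand. As a rough sketch (hypothetical data; train accepts validation and test structures with fields P and T and converts them into the VV and TV arguments of trainbr):
p = [-1:.05:1];  t = sin(2*pi*p) + 0.1*randn(size(p));   % hypothetical training data
val.P = [-0.975:.05:0.975];  val.T = sin(2*pi*val.P);     % validation set
tst.P = [-0.95:.05:0.95];    tst.T = sin(2*pi*tst.P);     % test set
net = newff([-1 1],[20 1],{'tansig','purelin'},'trainbr');
[net,tr] = train(net,p,t,[],[],val,tst);                  % train builds VV and TV from val and tst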
trainbr(code) returns useful information for each supported code string.
Examples
Here is a problem consisting of inputs p and targets t that we would like to solve with a network. It involves fitting a noisy sine wave.
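For instance, such a data set could be created as follows (the sampling and noise level are illustrative):
p = [-1:.05:1];
t = sin(2*pi*p) + 0.1*randn(size(p));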
Here a two-layer feed-forward network is created. The network's input ranges from -1 to 1. The first layer has 20 tansig neurons; the second layer has one purelin neuron. The trainbr network training function is to be used. The plot of the resulting network output should show a smooth response, without overfitting.
net = newff([-1 1],[20,1],{'tansig','purelin'},'trainbr');
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net = train(net,p,t);
a = sim(net,p)
plot(p,a,p,t,'+')
Network Use
You can create a standard network that uses trainbr with newff, newcf, or newelm.
To prepare a custom network to be trained with trainbr:
1. Set net.trainFcn to 'trainbr'. This will set net.trainParam to trainbr's default parameters.
2. Set net.trainParam properties to desired values.
In either case, calling train with the resulting network will train the network with trainbr.
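A brief sketch of these steps (the network architecture and parameter values are illustrative only, and training data p and t are assumed to exist):
net = newff([-1 1],[10 1],{'tansig','purelin'});  % any custom network works the same way
net.trainFcn = 'trainbr';                         % step 1: select trainbr; loads its default trainParam
net.trainParam.epochs = 200;                      % step 2: override defaults as desired
net.trainParam.show = 25;
net = train(net,p,t);                             % train now uses trainbr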
See newff, newcf, and newelm for examples.
Algorithm
trainbr can train any network as long as its weight, net input, and transfer functions have derivative functions.
Bayesian regularization minimizes a linear combination of squared errors and weights. It also modifies the linear combination so that at the end of training the resulting network has good generalization qualities. See MacKay (Neural Computation) and Foresee and Hagan (Proceedings of the International Joint Conference on Neural Networks) for more detailed discussions of Bayesian regularization.
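As a rough illustration of the objective being minimized (not the toolbox's internal code), the regularized performance can be written F = beta*Ed + alpha*Ew, where Ed is the sum of squared errors, Ew is the sum of squared weights, and alpha and beta are parameters that Bayesian regularization estimates during training:
alpha = 0.01;  beta = 1;    % placeholder values; trainbr estimates these automatically
e  = t - sim(net,p);        % network errors on training data (net, p, t assumed to exist)
w  = getx(net);             % all weights and biases as a single vector
Ed = sum(e(:).^2);          % sum of squared errors
Ew = sum(w.^2);             % sum of squared weights
F  = beta*Ed + alpha*Ew;    % regularized performance measure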
This Bayesian regularization takes place within the Levenberg-Marquardt algorithm. Backpropagation is used to calculate the Jacobian jX of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to Levenberg-Marquardt,
jj = jX * jX'
je = jX * E
dX = -(jj + I*mu) \ je
where E is all errors and I is the identity matrix.
The adaptive value mu is increased by mu_inc until the change shown above results in a reduced performance value. The change is then made to the network and mu is decreased by mu_dec.
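In pseudocode form, a hedged sketch of this adaptation (not the actual implementation; jX, E, X, and perf are assumed to be the current Jacobian, error vector, parameter vector, and performance, and evalperf is a hypothetical helper that evaluates performance for a candidate parameter vector):
mu = net.trainParam.mu;
while mu <= net.trainParam.mu_max
    dX = -(jX*jX' + mu*eye(length(X))) \ (jX*E);  % candidate Levenberg-Marquardt step
    if evalperf(X + dX) < perf                    % did the step reduce performance?
        X  = X + dX;                              % accept the change
        mu = mu * net.trainParam.mu_dec;          % and decrease mu
        break
    end
    mu = mu * net.trainParam.mu_inc;              % otherwise increase mu and retry
end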
The parameter mem_reduc indicates how to trade memory for speed when calculating the Jacobian jX. If mem_reduc is 1, then trainbr runs the fastest, but can require a lot of memory. Increasing mem_reduc to 2 cuts some of the memory required by a factor of two, but slows trainbr somewhat. Higher values continue to decrease the amount of memory needed and increase the training times.
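For example, to halve the memory used for the Jacobian at some cost in speed:
net.trainParam.mem_reduc = 2;   % compute the Jacobian in two pieces, using less memory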
Training stops when any one of these conditions occurs:
The maximum number of epochs (repetitions) is reached.
The maximum amount of time has been exceeded.
Performance has been minimized to the goal.
The performance gradient falls below min_grad.
mu exceeds mu_max.
Validation performance has increased more than max_fail times since the last time it decreased (when using validation).
See Also
newff, newcf, traingdm, traingda, traingdx, trainlm, trainrp, traincgf, traincgb, trainscg, traincgp, trainoss
References
Foresee, F. D., and M. T. Hagan, "Gauss-Newton approximation to Bayesian regularization," Proceedings of the 1997 International Joint Conference on Neural Networks, 1997.
MacKay, D. J. C., "Bayesian interpolation," Neural Computation, vol. 4, no. 3, pp. 415-447, 1992.