Neural Network Toolbox

trainr

Random order incremental training with learning functions.
Syntax
[net,TR,Ac,El] = trainr(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trainr(code)
Description
trainr is not called directly. Instead it is called by train for networks whose net.trainFcn property is set to 'trainr'.

trainr trains a network with weight and bias learning rules with incremental updates after each presentation of an input. Inputs are presented in random order.
trainr(net,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
net -- Neural network.
Pd -- Delayed inputs.
Tl -- Layer targets.
Ai -- Initial input conditions.
Q -- Batch size.
TS -- Time steps.
VV -- Ignored.
TV -- Ignored.
and returns, after training the network with its weight and bias learning functions,
net -- Updated network.
TR -- Training record of performance over each epoch.
Ac -- Collective layer outputs.
El -- Layer errors.
Training occurs according to trainr's training parameters, shown here with their default values:
net.trainParam.epochs 100 -- Maximum number of epochs to train
net.trainParam.goal 0 -- Performance goal
net.trainParam.show 25 -- Epochs between displays (NaN for no displays)
net.trainParam.time inf -- Maximum time to train in seconds
Dimensions for these variables are:
Pd -- No x Ni x TS cell array, each element Pd{i,j,ts} is a Dij x Q matrix.
Tl -- Nl x TS cell array, each element Tl{i,ts} is a Vi x Q matrix or [].
Ai -- Nl x LD cell array, each element Ai{i,k} is an Si x Q matrix.
where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Si = net.layers{i}.size
Vi = net.targets{i}.size
trainr does not implement validation or test vectors, so arguments VV and TV are ignored.
trainr(code) returns useful information for each code string:
'pnames' -- Names of training parameters
'pdefaults' -- Default training parameters
Network Use
You can create a standard network that uses trainr by calling newc or newsom.
To prepare a custom network to be trained with trainr:
1. Set net.trainFcn to 'trainr'. (This will set net.trainParam to trainr's default parameters.)
2. Set each net.inputWeights{i,j}.learnFcn to a learning function.
3. Set each net.layerWeights{i,j}.learnFcn to a learning function.
4. Set each net.biases{i}.learnFcn to a learning function. (Weight and bias learning parameters are automatically set to default values for the given learning function.)
To train the network:
1. Set net.trainParam properties to desired values.
2. Set weight and bias learning parameters to desired values.
3. Call train.
See newc and newsom for training examples.
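As a minimal sketch of the standard workflow, the following assumes the toolbox functions newc and train are available; newc sets net.trainFcn to 'trainr', so train dispatches to it automatically:

```matlab
% Sketch: competitive layer trained with trainr via train.
P = rand(2,50);                  % 50 random 2-D input vectors
net = newc([0 1; 0 1],4);        % competitive layer with 4 neurons
net.trainParam.epochs = 20;      % a trainr training parameter
net = train(net,P);              % train calls trainr internally
```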
Algorithm
For each epoch, all training vectors (or sequences) are presented once in a different random order, with the network's weight and bias values updated after each individual presentation.
Training stops when any of these conditions is met:
The maximum number of epochs (repetitions) is reached.
Performance has been minimized to the goal.
The maximum amount of time has been exceeded.
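The per-epoch loop described above can be sketched generically as follows. This is an illustration, not the toolbox implementation: the Kohonen-style winner update stands in for whatever learning function the weights are configured with.

```matlab
% Generic sketch of random order incremental training.
% W: weight matrix (one row per neuron), P: inputs (one column per vector),
% lr: learning rate. The update rule here is illustrative only.
function W = trainIncrementalSketch(W, P, epochs, lr)
    Q = size(P,2);                % number of input vectors
    for epoch = 1:epochs
        for q = randperm(Q)       % present inputs in random order
            p = P(:,q);
            [~,i] = max(W*p);     % pick the winning neuron
            % Update weights immediately after this presentation
            W(i,:) = W(i,:) + lr*(p' - W(i,:));
        end
    end
end
```

The key property trainr guarantees is visible in the loop structure: weights change after every single presentation (incremental training), and randperm reshuffles the presentation order each epoch.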
See Also
trainoss | trainrp
© 1994-2005 The MathWorks, Inc.