newfftd
Create a feed-forward input-delay backpropagation network
Syntax
net = newfftd
net = newfftd(PR,ID,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description
net = newfftd creates a new network with a dialog box.
newfftd(PR,ID,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,

PR  -- R x 2 matrix of min and max values for R input elements
ID  -- Input delay vector, default = [0]
Si  -- Size of ith layer, for Nl layers
TFi -- Transfer function of ith layer, default = 'tansig'
BTF -- Backprop network training function, default = 'traingdx'
BLF -- Backprop weight/bias learning function, default = 'learngdm'
PF  -- Performance function, default = 'mse'

and returns an Nl layer feed-forward backprop network.
The transfer functions TFi can be any differentiable transfer function, such as tansig, logsig, or purelin.
The training function BTF can be any of the backprop training functions, such as trainlm, trainbfg, trainrp, traingd, etc.

trainlm is very fast, but it requires a lot of memory to run. If you get an out-of-memory error when training, try one of the following:

1. Slow trainlm training, but reduce its memory requirements, by setting net.trainParam.mem_reduc to 2 or more, as shown below. (See help trainlm.)
2. Use trainbfg, which is slower but more memory-efficient than trainlm.
3. Use trainrp, which is slower but more memory-efficient than trainbfg.
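For example (the factor 2 here is illustrative; larger values trade more speed for less memory):

   net.trainParam.mem_reduc = 2;   % only meaningful when net.trainFcn is 'trainlm'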
The learning function BLF can be either of the backpropagation learning functions, learngd or learngdm.
The performance function can be any of the differentiable performance functions, such as mse or msereg.
Examples
Here is a problem consisting of an input sequence P and a target sequence T that can be solved by a network with one delay.
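For instance (illustrative sequences, not prescribed data), each target below is the current input minus the previous input, so one input delay is enough:

   P = {1  0 0 1 1  0 1  0 0 0 0 1 1  0 0 1};
   T = {1 -1 0 1 0 -1 1 -1 0 0 0 1 0 -1 0 1};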
Here a two-layer feed-forward network is created with input delays of 0 and 1. The network's input values range from 0 to 1. The first layer has five tansig neurons; the second layer has one purelin neuron. The trainlm network training function is to be used.
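One way to write this call, using the arguments described above:

   net = newfftd([0 1],[0 1],[5 1],{'tansig' 'purelin'},'trainlm');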
Here the network is simulated.
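For example, with the net and P defined above:

   Y = sim(net,P)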
Here the network is trained for 50 epochs. Again the network's output is calculated.
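A sketch of that step:

   net.trainParam.epochs = 50;
   net = train(net,P,T);
   Y = sim(net,P)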
Algorithm
Feed-forward networks consist of Nl layers using the dotprod weight function, the netsum net input function, and the specified transfer functions.
The first layer has weights coming from the input with the specified input delays. Each subsequent layer has a weight coming from the previous layer. All layers have biases. The last layer is the network output.
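This structure can be confirmed on a created network; a sketch, assuming the two-layer network from the Examples section:

   net = newfftd([0 1],[0 1],[5 1]);
   net.numLayers                  % 2
   net.inputWeights{1,1}.delays   % [0 1], the specified input delays
   net.biasConnect                % [1; 1], every layer has a bias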
Each layer's weights and biases are initialized with initnw.
Adaption is done with trains
, which updates weights with the specified learning function. Training is done with the specified training function. Performance is measured according to the specified performance function.
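These function choices are stored as properties of the network object and can be inspected or changed after creation; a sketch, assuming the defaults listed above:

   net.layers{1}.initFcn   % 'initnw'
   net.adaptFcn            % 'trains'
   net.trainFcn            % 'traingdx' (the default BTF)
   net.performFcn          % 'mse'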
See Also
newcf, newelm, sim, init, adapt, train, trains