newlin
Create a linear layer

Neural Network Toolbox
Syntax

net = newlin
net = newlin(PR,S,ID,LR)
net = newlin(PR,S,0,P)

Description

Linear layers are often used as adaptive filters for signal processing and prediction.

net = newlin creates a new network with a dialog box.
newlin(PR,S,ID,LR) takes these arguments,

PR -- R x 2 matrix of min and max values for R input elements
S  -- Number of elements in the output vector
ID -- Input delay vector, default = [0]
LR -- Learning rate, default = 0.01

and returns a new linear layer.
net = newlin(PR,S,0,P) takes an alternate argument,

P -- Matrix of input vectors

and returns a linear layer with the maximum stable learning rate for learning with inputs P.
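For example, this form can be sketched as follows (the values in P are illustrative assumptions, not from the source):

```matlab
P = [0 -1 1 1 0 -1];        % illustrative input vectors (assumed values)
net = newlin([-1 1],1,0,P); % learning rate set to the maximum stable value for P
```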
Examples
This code creates a single-input linear layer (input range [-1 1]) with one neuron, input delays of 0 and 1, and a learning rate of 0.01. It is simulated for an input sequence P1.
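The example code itself did not survive extraction; a sketch of this step, using the arguments stated in the text (the values in the sequence P1 are illustrative assumptions):

```matlab
% Single input in [-1 1], one neuron, input delays of 0 and 1,
% learning rate 0.01.
net = newlin([-1 1],1,[0 1],0.01);

% Illustrative input sequence (assumed values).
P1 = {0 -1 1 1 0 -1 1 0 0 1};

% Simulate the layer's response to the sequence.
Y = sim(net,P1)
```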
Here targets T1 are defined and the layer adapts to them. (Since this is the first call to adapt, the default input delay conditions are used.)
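A sketch of this step (the target values in T1 are illustrative assumptions; net and the sequence P1 are those of the previous step):

```matlab
% Illustrative targets for the sequence P1 (assumed values).
T1 = {0 -1 0 2 1 -1 0 1 0 1};

% Adapt the layer to the targets. The default input delay
% conditions are used on this first call; Pf returns the
% final input delay conditions.
[net,Y,E,Pf] = adapt(net,P1,T1); Y
```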
Here the linear layer continues to adapt for a new sequence using the previous final conditions Pf as initial conditions.
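A sketch of this step (the values in P2 and T2 are illustrative assumptions; Pf is the final-conditions output of the previous adapt call):

```matlab
% A new illustrative sequence and its targets (assumed values).
P2 = {1 0 -1 -1 1 1 1 0 -1};
T2 = {2 1 -1 -2 0 2 2 1 0};

% Adapt again, passing Pf from the previous call as the
% initial input delay conditions.
[net,Y,E,Pf] = adapt(net,P2,T2,Pf); Y
```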
Here we initialize the layer's weights and biases to new values.
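A sketch of this step:

```matlab
% Reinitialize the layer's weights and biases according to its
% initialization function (initzero by default for newlin).
net = init(net);
```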
Here we train the newly initialized layer on the entire sequence for 200 epochs to an error goal of 0.1.
P3 = [P1 P2];
T3 = [T1 T2];
net.trainParam.epochs = 200;
net.trainParam.goal = 0.1;
net = train(net,P3,T3);
Y = sim(net,[P1 P2])
Algorithm
Linear layers consist of a single layer with the dotprod weight function, netsum net input function, and purelin transfer function.

The layer has a weight from the input and a bias.

Weights and biases are initialized with initzero.

Adaptation and training are done with trains and trainb, which both update weight and bias values with learnwh. Performance is measured with mse.
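One pass of this computation can be sketched in scalar form as follows (the numeric values are illustrative assumptions; the toolbox functions perform these steps internally):

```matlab
W = [0.5 -0.2];  b = 0.1;  lr = 0.01;  % illustrative weights, bias, learning rate
p = [1; -1];     t = 0.4;              % one input vector and its target

a = W*p + b;      % dotprod weight function, netsum net input, purelin (identity) transfer
e = t - a;        % error; mse measures the mean of these squared errors

W = W + lr*e*p';  % learnwh (Widrow-Hoff) weight update
b = b + lr*e;     % learnwh bias update
```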
See Also
newlind, sim, init, adapt, train, trains, trainb
© 1994-2005 The MathWorks, Inc.