learnpn
Normalized perceptron weight and bias learning function
Syntax
[dW,LS] = learnpn(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnpn(code)
Description
learnpn is a weight and bias learning function. It can result in faster learning than learnp when input vectors have widely varying magnitudes.
learnpn(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W -- S x R weight matrix (or S x 1 bias vector)
P -- R x Q input vectors (or ones(1,Q))
Z -- S x Q weighted input vectors
N -- S x Q net input vectors
A -- S x Q output vectors
T -- S x Q layer target vectors
E -- S x Q layer error vectors
gW -- S x R weight gradient with respect to performance
gA -- S x Q output gradient with respect to performance
D -- S x S neuron distances
LP -- Learning parameters, none, LP = []
LS -- Learning state, initially should be = []
and returns,
dW -- S x R weight (or bias) change matrix
LS -- New learning state
learnpn(code) returns useful information for each code string:
'pnames' -- Names of learning parameters
'pdefaults' -- Default learning parameters
'needg' -- Returns 1 if this function uses gW or gA
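For example, a minimal sketch of such an information query, assuming the code strings listed above:

names = learnpn('pnames')        % names of learning parameters (none for learnpn)
defaults = learnpn('pdefaults')  % default learning parameters (empty)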
Examples
Here we define a random input P and error E for a layer with a two-element input and three neurons. Since learnpn only needs these values to calculate a weight change (see Algorithm below), we will use them to do so.
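A minimal sketch of such a call, following the argument order in the description above (arguments that learnpn does not use are passed as empty matrices):

p = rand(2,1);    % random two-element input vector
e = rand(3,1);    % random error for three neurons
dW = learnpn([],p,[],[],[],[],e,[],[],[],[],[])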
Network Use
You can create a standard network that uses learnpn with newp.
To prepare the weights and the bias of layer i of a custom network to learn with learnpn:
1. Set net.trainFcn to 'trainb'. (net.trainParam will automatically become trainb's default parameters.)
2. Set net.adaptFcn to 'trains'. (net.adaptParam will automatically become trains's default parameters.)
3. Set each net.inputWeights{i,j}.learnFcn to 'learnpn'. Set each net.layerWeights{i,j}.learnFcn to 'learnpn'. Set net.biases{i}.learnFcn to 'learnpn'. (Each weight and bias learning parameter property will automatically become the empty matrix, since learnpn has no learning parameters.)
To train the network (or enable it to adapt):
1. Set net.trainParam (or net.adaptParam) properties to desired values.
2. Call train (or adapt).
See newp for adaptation and training examples.
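As a minimal sketch, a standard perceptron can be created with learnpn as its learning function and then trained; the data here are illustrative assumptions, not from the original documentation:

P = [0 1 -40; 0 1 50];                       % inputs with widely varying magnitudes
T = [0 1 1];                                 % linearly separable targets
net = newp(minmax(P), 1, 'hardlim', 'learnpn');
net = train(net, P, T);
Y = sim(net, P)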
Algorithm
learnpn calculates the weight change dW for a given neuron from the neuron's input P and error E according to the normalized perceptron learning rule:
pn = p / sqrt(1 + p(1)^2 + p(2)^2 + ... + p(R)^2)
dw = 0,    if e = 0
   = pn',  if e = 1
   = -pn', if e = -1
The expression for dW can be summarized as
dw = e*pn'
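A minimal sketch of this rule in matrix form, assuming P is an R x Q matrix of input vectors and E an S x Q matrix of errors as described above (a sketch of the rule, not learnpn's actual source):

pn = P ./ (ones(size(P,1),1) * sqrt(1 + sum(P.^2,1)));  % normalize each input column
dW = E * pn';                                           % S x R weight change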
Limitations
Perceptrons do have one real limitation: the set of input vectors must be linearly separable if a solution is to be found. That is, if the input vectors with targets of 1 cannot be separated by a line or hyperplane from the input vectors with targets of 0, the perceptron will never be able to classify them correctly.
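As a minimal illustration (the data and settings here are hypothetical), a single perceptron cannot learn XOR, whose targets are not linearly separable:

P = [0 0 1 1; 0 1 0 1];       % four 2-element input vectors
T = [0 1 1 0];                % XOR targets -- not linearly separable
net = newp(minmax(P), 1);
net.trainParam.epochs = 50;
net = train(net, P, T);
Y = sim(net, P)               % at least one output remains incorrect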
See Also
learnp | learnsom
© 1994-2005 The MathWorks, Inc.