Neural Network Toolbox
learnp

Perceptron weight and bias learning function

Syntax

[dW,LS] = learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)

[db,LS] = learnp(b,ones(1,Q),Z,N,A,T,E,gW,gA,D,LP,LS)

info = learnp(code)

Description

learnp is the perceptron weight/bias learning function.

learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,

  W  -- S x R weight matrix (or S x 1 bias vector)
  P  -- R x Q input vectors (or ones(1,Q))
  Z  -- S x Q weighted input vectors
  N  -- S x Q net input vectors
  A  -- S x Q output vectors
  T  -- S x Q layer target vectors
  E  -- S x Q layer error vectors
  gW -- S x R gradient with respect to performance
  gA -- S x Q output gradient with respect to performance
  D  -- S x S neuron distances
  LP -- learning parameters, none, LP = []
  LS -- learning state, initially should be = []

and returns,

  dW -- S x R weight (or bias) change matrix
  LS -- new learning state

learnp(code) returns useful information for each code string:

  'pnames'    -- Names of learning parameters
  'pdefaults' -- Default learning parameters
  'needg'     -- Returns 1 if this function uses gW or gA
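
For instance, the information calls look like this (a minimal sketch; the return values are described in the list above):

  learnp('pnames')      % names of learnp's learning parameters
  learnp('pdefaults')   % default learning parameters (learnp has none)
  learnp('needg')       % whether learnp uses the gradients gW or gA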

Examples

Here we define a random input P and error E to a layer with a two-element input and three neurons.

Since learnp only needs these values to calculate a weight change (see algorithm below), we will use them to do so.
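
A minimal sketch of that calculation (unused arguments are passed as empty matrices; the variable names p and e are illustrative):

  p = rand(2,1);                                   % random two-element input
  e = rand(3,1);                                   % random error for three neurons
  dW = learnp([],p,[],[],[],[],e,[],[],[],[],[])   % resulting 3-by-2 weight change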

Network Use

You can create a standard network that uses learnp with newp.
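
For example, a perceptron with a two-element input and three neurons (the input range here is illustrative) uses learnp for its weights and bias by default:

  net = newp([-1 1; -1 1],3);        % two inputs in [-1,1], three neurons
  net.inputWeights{1,1}.learnFcn     % 'learnp'
  net.biases{1}.learnFcn             % 'learnp'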

To prepare the weights and the bias of layer i of a custom network to learn with learnp (see the sketch after these steps):

  1. Set net.trainFcn to 'trainb'. (net.trainParam will automatically become trainb's default parameters.)
  2. Set net.adaptFcn to 'trains'. (net.adaptParam will automatically become trains's default parameters.)
  3. Set each net.inputWeights{i,j}.learnFcn to 'learnp'. Set each net.layerWeights{i,j}.learnFcn to 'learnp'. Set net.biases{i}.learnFcn to 'learnp'. (Each weight and bias learning parameter property will automatically become the empty matrix since learnp has no learning parameters.)
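
As a sketch, with i and j standing for the relevant layer and weight indices of your network, these steps amount to assignments such as:

  net.trainFcn = 'trainb';                    % batch training function
  net.adaptFcn = 'trains';                    % sequential adaption function
  net.inputWeights{i,j}.learnFcn = 'learnp';  % input weights learn with learnp
  net.layerWeights{i,j}.learnFcn = 'learnp';  % layer weights learn with learnp
  net.biases{i}.learnFcn = 'learnp';          % biases learn with learnp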

To train the network (or enable it to adapt), as in the sketch after these steps:

  1. Set net.trainParam (net.adaptParam) properties to desired values.
  2. Call train (adapt).
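
For instance, with hypothetical input data P and targets T:

  net.trainParam.epochs = 20;   % illustrative training parameter
  net = train(net,P,T);         % batch training
  % or, for incremental adaption:
  net = adapt(net,P,T);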

See newp for adaptation and training examples.

Algorithm

learnp calculates the weight change dW for a given neuron from the neuron's input P and error E according to the perceptron learning rule:

  dw = 0,   if e = 0
     = p',  if e = 1
     = -p', if e = -1

This can be summarized as:

  dw = e*p'
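
A quick check of the rule (illustrative values for p and e; the unused arguments are empty matrices):

  p = [2; -1];                                     % input vector, R = 2
  e = [1; 0; -1];                                  % errors for S = 3 neurons
  dW = learnp([],p,[],[],[],[],e,[],[],[],[],[])   % same result as the outer product below
  e*p'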

See Also

learnpn, newp, adapt, train

References

Rosenblatt, F., Principles of Neurodynamics, Washington D.C.: Spartan Press, 1961.


