learnhd
Hebb with decay weight learning rule
Syntax
[dW,LS] = learnhd(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnhd(code)
Description
learnhd is the Hebb weight learning function.
learnhd(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W -- S x R weight matrix (or S x 1 bias vector)
P -- R x Q input vectors (or ones(1,Q))
Z -- S x Q weighted input vectors
N -- S x Q net input vectors
A -- S x Q output vectors
T -- S x Q layer target vectors
E -- S x Q layer error vectors
gW -- S x R gradient with respect to performance
gA -- S x Q output gradient with respect to performance
D -- S x S neuron distances
LP -- Learning parameters
LS -- Learning state, initially should be = []
and returns
dW -- S x R weight (or bias) change matrix
LS -- New learning state
Learning occurs according to learnhd's learning parameters, shown here by name:
LP.dr -- Decay rate
LP.lr -- Learning rate
learnhd(code) returns useful information for each code string:
'pnames' -- Names of learning parameters
'pdefaults' -- Default learning parameters
'needg' -- Returns 1 if this function uses gW or gA
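For example, the learning parameter names and their defaults can be queried directly; the exact values returned depend on the installed toolbox version:

    learnhd('pnames')       % names of learnhd's learning parameters
    learnhd('pdefaults')    % their default values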
Examples
Here we define a random input P, output A, and weights W for a layer with a two-element input and three neurons. We also define the decay and learning rates.
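A minimal sketch of that setup follows; the particular decay and learning rate values chosen here are purely illustrative:

    p = rand(2,1);      % random two-element input
    a = rand(3,1);      % random output of the three neurons
    w = rand(3,2);      % random 3x2 weight matrix
    lp.dr = 0.05;       % decay rate (illustrative value)
    lp.lr = 0.5;        % learning rate (illustrative value)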
Since learnhd only needs these values to calculate a weight change (see algorithm below), we will use them to do so.
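One way to make that call, passing empty matrices for the arguments learnhd does not use:

    dW = learnhd(w,p,[],[],a,[],[],[],[],[],lp,[])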
Network Use
To prepare the weights and the bias of layer i of a custom network to learn with learnhd (a code sketch follows this list):
1. Set net.trainFcn to 'trainr'. (net.trainParam will automatically become trainr's default parameters.)
2. Set net.adaptFcn to 'trains'. (net.adaptParam will automatically become trains's default parameters.)
3. Set each net.inputWeights{i,j}.learnFcn to 'learnhd'. Set each net.layerWeights{i,j}.learnFcn to 'learnhd'. (Each weight learning parameter property will automatically be set to learnhd's default parameters.)
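A minimal sketch of these settings, assuming net is an existing custom network and using weight indices {1,1} purely for illustration:

    net.trainFcn = 'trainr';                       % batch training function
    net.adaptFcn = 'trains';                       % adaption function
    net.inputWeights{1,1}.learnFcn = 'learnhd';    % illustrative indices
    net.layerWeights{1,1}.learnFcn = 'learnhd';    % illustrative indices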
To train the network (or enable it to adapt):
1. Set net.trainParam (or net.adaptParam) properties to desired values.
2. Call train (or adapt).
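For instance, assuming hypothetical input data P and targets T:

    net.trainParam.epochs = 100;   % illustrative value
    net = train(net,P,T);          % or: net = adapt(net,P,T);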
Algorithm
learnhd calculates the weight change dW for a given neuron from the neuron's input P, output A, decay rate DR, and learning rate LR according to the Hebb with decay learning rule:
dw = lr*a*p' - dr*w
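As a check, the same weight change can be computed directly from the rule, using the variables sketched in the Examples section above:

    dW_manual = lp.lr*a*p' - lp.dr*w;   % Hebb term minus weight decay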
See Also
learnh, learnis