learnsom
Neural Network Toolbox
Self-organizing map weight learning function
Syntax
[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnsom(code)
Description
learnsom is the self-organizing map weight learning function.
learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W  -- S x R weight matrix (or S x 1 bias vector)
P  -- R x Q input vectors (or ones(1,Q))
Z  -- S x Q weighted input vectors
N  -- S x Q net input vectors
A  -- S x Q output vectors
T  -- S x Q layer target vectors
E  -- S x Q layer error vectors
gW -- S x R weight gradient with respect to performance
gA -- S x Q output gradient with respect to performance
D  -- S x S neuron distances
LP -- Learning parameters
LS -- Learning state, initially should be = []
and returns
dW -- S x R weight (or bias) change matrix
LS -- New learning state
Learning occurs according to learnsom's learning parameters, shown here with their default values.
LP.order_lr       0.9   Ordering phase learning rate.
LP.order_steps   1000   Ordering phase steps.
LP.tune_lr       0.02   Tuning phase learning rate.
LP.tune_nd          1   Tuning phase neighborhood distance.
learnsom(code) returns useful information for each code string:
'pnames'    -- Names of learning parameters
'pdefaults' -- Default learning parameters
'needg'     -- Returns 1 if this function uses gW or gA
Examples
Here we define a random input P, output A, and weight matrix W, for a layer with a two-element input and six neurons. We also calculate positions and distances for the neurons, which are arranged in a 2-by-3 hexagonal pattern. Then we define the four learning parameters.
p = rand(2,1);
a = rand(6,1);
w = rand(6,2);
pos = hextop(2,3);
d = linkdist(pos);
lp.order_lr = 0.9;
lp.order_steps = 1000;
lp.tune_lr = 0.02;
lp.tune_nd = 1;
Since learnsom only needs these values to calculate a weight change (see Algorithm below), we will use them to do so.
ls = [];
[dw,ls] = learnsom(w,p,[],[],a,[],[],[],[],d,lp,ls)
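The same weight-change calculation can be sketched outside MATLAB. The following Python sketch is an illustration, not the toolbox implementation: the chain-topology distance matrix and the winner-take-all selection are assumptions standing in for hextop/linkdist and the layer's competitive transfer function.

```python
import numpy as np

def som_weight_change(w, p, d, lr, nd):
    """learnsom-style weight change (illustrative sketch).
    w: S x R weights, p: R-vector input, d: S x S neuron distances,
    lr: learning rate, nd: neighborhood size."""
    # Winner: the neuron whose weight vector is closest to the input.
    winner = np.argmin(np.linalg.norm(w - p, axis=1))
    # Activation a2: 1 for the winner, 0.5 for neurons within distance
    # nd of the winner, 0 elsewhere.
    a2 = np.where(d[winner] <= nd, 0.5, 0.0)
    a2[winner] = 1.0
    # Weight change dw = lr * a2 * (p' - w), applied row by row.
    return lr * a2[:, None] * (p - w)

# Hypothetical 6-neuron layer with 2-element inputs; a simple chain
# topology stands in for the 2-by-3 hexagonal grid of the example.
rng = np.random.default_rng(0)
w = rng.random((6, 2))
p = rng.random(2)
d = np.abs(np.arange(6)[:, None] - np.arange(6)[None, :])
dw = som_weight_change(w, p, d, lr=0.9, nd=1)
```

Only the winner and its neighbors within nd receive nonzero weight changes, each moving toward the input p.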
Network Use
You can create a standard network that uses learnsom with newsom.
To prepare the weights of layer i of a custom network so that it learns with learnsom:
1. Set net.trainFcn to 'trainr'. (net.trainParam automatically becomes trainr's default parameters.)
2. Set net.adaptFcn to 'trains'. (net.adaptParam automatically becomes trains's default parameters.)
3. Set each net.inputWeights{i,j}.learnFcn to 'learnsom'. Set each net.layerWeights{i,j}.learnFcn to 'learnsom'. Set net.biases{i}.learnFcn to 'learnsom'. (Each weight learning parameter property is automatically set to learnsom's default parameters.)
To train the network (or enable it to adapt):
1. Set net.trainParam (or net.adaptParam) properties as desired.
2. Call train (or adapt).
Algorithm
learnsom calculates the weight change dW for a given neuron from the neuron's input P, activation A2, and learning rate LR:
  dw = lr * a2 * (p' - w)
where the activation A2 is found from the layer output A, the neuron distances D, and the current neighborhood size ND:
  a2(i,q) = 1,   if a(i,q) = 1
          = 0.5, if a(j,q) = 1 and D(i,j) <= nd
          = 0,   otherwise
The learning rate LR and neighborhood size ND are altered through two phases: an ordering phase and a tuning phase.
The ordering phase lasts as many steps as LP.order_steps. During this phase, LR is adjusted from LP.order_lr down to LP.tune_lr, and ND is adjusted from the maximum neuron distance down to 1. It is during this phase that neuron weights are expected to order themselves in the input space consistently with the associated neuron positions.
During the tuning phase, LR decreases slowly from LP.tune_lr and ND is always set to LP.tune_nd. During this phase, the weights are expected to spread out relatively evenly over the input space while retaining the topological order found during the ordering phase.
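The two-phase schedule described above can be sketched in Python. This is an assumption-laden illustration: the linear interpolation during the ordering phase and the 1/step decay during the tuning phase are plausible choices, not necessarily the toolbox's exact decay laws.

```python
def som_schedule(step, max_dist, order_lr=0.9, order_steps=1000,
                 tune_lr=0.02, tune_nd=1):
    """Learning rate and neighborhood size at a given training step
    (illustrative sketch of the two-phase schedule)."""
    if step < order_steps:
        # Ordering phase: lr falls from order_lr toward tune_lr,
        # nd shrinks from the maximum neuron distance toward 1.
        frac = step / order_steps
        lr = order_lr - frac * (order_lr - tune_lr)
        nd = max_dist - frac * (max_dist - 1.0)
    else:
        # Tuning phase: lr decays slowly from tune_lr;
        # nd is fixed at tune_nd.
        lr = tune_lr * order_steps / step
        nd = tune_nd
    return lr, nd
```

For example, with a maximum neuron distance of 3, the schedule starts at (0.9, 3), reaches (0.02, 1) at step 1000, and thereafter only the learning rate continues to shrink.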
See Also
learnpn, learnwh
© 1994-2005 The MathWorks, Inc.