
LVQ1 Learning Rule (learnlv1)

LVQ learning in the competitive layer is based on a set of input/target pairs.

Each target vector has a single 1; the rest of its elements are 0. The 1 indicates the proper classification of the associated input. For instance, consider a training pair in which the input vector p has three elements and the target vector is

    t = [0; 0; 1; 0]

Here each input vector is to be assigned to one of four classes, and the network is to be trained so that it classifies this input vector into the third of the four classes.
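As a small plain-MATLAB sketch (the class index and class count are simply the values from this example, not toolbox code), such a target can be built by placing a 1 in the row given by the class index:

    numClasses = 4;                 % four possible classes
    classIndex = 3;                 % this input belongs to class 3
    t = zeros(numClasses, 1);
    t(classIndex) = 1               % t = [0; 0; 1; 0]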

To train the network, an input vector p is presented, and the distance from p to each row of the input weight matrix IW1,1 is computed with the function ndist. The hidden neurons of layer 1 compete. Suppose that the i*th element of n1 is the most positive; then neuron i* wins the competition, and the competitive transfer function produces a 1 as the i*th element of a1. All other elements of a1 are 0.
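As an illustrative sketch in plain MATLAB (not the toolbox code itself; the weight values and the input below are made up), the competition amounts to finding the row of IW1,1 that is closest to p:

    % Hypothetical input weights: 6 hidden (subclass) neurons, 3-element inputs
    IW11 = [ 1  0  0
             0  1  0
             0  0  1
             1  1  0
             0  1  1
             1  0  1];
    p = [0; 0.9; 0.2];                        % hypothetical input vector

    n1 = -sqrt(sum((IW11 - p').^2, 2));       % negative Euclidean distance to each row
    [~, iStar] = max(n1);                     % most positive element of n1 wins
    a1 = zeros(size(IW11,1), 1);
    a1(iStar) = 1;                            % competitive output: single 1 at i*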

When a1 is multiplied by the layer 2 weights LW2,1, the single 1 in a1 selects the class k* associated with the input. Thus the network has assigned the input vector p to class k*, and the k*th element of the layer 2 output a2 will be 1. Of course, this assignment may be a good one or a bad one, for the k*th element of the target, tk*, may be 1 or 0, depending on whether or not the input belonged to class k*.
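Continuing the same sketch, a hypothetical LW2,1 of 0s and 1s (here assigning subclass neurons 1-2 to class 1, neuron 3 to class 2, neurons 4-5 to class 3, and neuron 6 to class 4) turns the winning subclass into a class:

    LW21 = [1 1 0 0 0 0
            0 0 1 0 0 0
            0 0 0 1 1 0
            0 0 0 0 0 1];                     % fixed 0/1 map from subclasses to classes
    a2 = LW21 * a1;                           % single 1 in row k*
    kStar = find(a2 == 1)                     % class assigned to the input p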

We adjust the i*th row of IW1,1 so as to move this row closer to the input vector p if the assignment is correct, and to move the row away from p if the assignment is incorrect. So if p is classified correctly,

$$a^{2}_{k^*} = t_{k^*} = 1,$$

we compute the new value of the i*th row of IW1,1 as

$$\mathbf{IW}^{1,1}_{i^*}(q) = \mathbf{IW}^{1,1}_{i^*}(q-1) + \alpha\bigl(\mathbf{p}(q) - \mathbf{IW}^{1,1}_{i^*}(q-1)\bigr).$$

On the other hand, if p is classified incorrectly,

$$a^{2}_{k^*} = 1 \neq t_{k^*} = 0,$$

we compute the new value of the i*th row of IW1,1 as

$$\mathbf{IW}^{1,1}_{i^*}(q) = \mathbf{IW}^{1,1}_{i^*}(q-1) - \alpha\bigl(\mathbf{p}(q) - \mathbf{IW}^{1,1}_{i^*}(q-1)\bigr).$$

Here $\mathbf{IW}^{1,1}_{i^*}$ denotes the i*th row of IW1,1, q indexes the training step, and $\alpha$ is the learning rate.

These corrections to the i*th row of IW1,1 can be made automatically, without affecting the other rows of IW1,1, by backpropagating the output errors to layer 1.

Such corrections move the hidden neuron towards vectors that fall into the class for which it forms a subclass, and away from vectors that fall into other classes.
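A minimal sketch of the update itself, continuing the hypothetical variables above (t is a made-up target, and lr plays the role of the learning rate α):

    lr = 0.01;                                % learning rate (alpha)
    t  = [0; 0; 1; 0];                        % hypothetical target: true class is 3
    if t(kStar) == 1                          % winner's class matches the target
        IW11(iStar,:) = IW11(iStar,:) + lr*(p' - IW11(iStar,:));   % move row i* toward p
    else                                      % winner's class is wrong
        IW11(iStar,:) = IW11(iStar,:) - lr*(p' - IW11(iStar,:));   % move row i* away from p
    end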

The learning function that implements these changes in the layer 1 weights in LVQ networks is learnlv1. It can be applied during training.
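For example (a hedged sketch assuming the newlvq calling syntax described in the previous section; the data and layer sizes are made up), the learning function assigned to the layer 1 weights can be inspected after creating a network:

    P   = 2*rand(3,10) - 1;                                      % 10 hypothetical 3-element inputs
    net = newlvq(minmax(P), 8, [.25 .25 .25 .25], 0.01, 'learnlv1');
    net.inputWeights{1,1}.learnFcn                               % should return 'learnlv1'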


