
Creating an LVQ Network (newlvq)

An LVQ network can be created with the function newlvq.
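A minimal sketch of the calling syntax, with the argument list as given in the toolbox reference pages (check the argument details against your toolbox version):

    net = newlvq(PR,S1,PC,LR,LF)

where:

    PR is an R-by-2 matrix of minimum and maximum values for R input elements.
    S1 is the number of first-layer (competitive) neurons.
    PC is a vector of typical class percentages, one element per output class.
    LR is the learning rate.
    LF is the learning function.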

Suppose we have 10 input vectors. We create a network that assigns each of these input vectors to one of four subclasses. Thus, we have four neurons in the first competitive layer. These subclasses are then assigned to one of two output classes by the two neurons in layer 2. The input vectors and targets can be specified, for example, by
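    % Illustrative values: ten two-element vectors forming four clusters
    % over the range -3 to +3, matching the plot described below
    P = [-3 -2 -2  0  0  0  0 +2 +2 +3;
          0 +1 -1 +2 +1 -1 -2 +1 -1  0];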

and
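    Tc = [1 1 1 2 2 2 2 1 1 1];   % target class index for each column of P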

It may help to show the details of what we get from these two lines of code.

A plot of the input vectors follows. It can be reproduced, for example, with
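    plot(P(1,:),P(2,:),'o')   % one marker per input vector
    axis([-4 4 -3 3])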

As you can see, there are four subclasses of input vectors. We want a network that classifies p1, p2, p3, p8, p9, and p10 to produce an output of 1, and that classifies vectors p4, p5, p6, and p7 to produce an output of 2. Note that this problem is not linearly separable, so it cannot be solved by a perceptron, but an LVQ network has no difficulty.

Next we convert the Tc matrix to target vectors.
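The conversion can be done with the toolbox function ind2vec:

    T = ind2vec(Tc);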

This gives a sparse matrix T that can be displayed in full with
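    targets = full(T)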

which, for the illustrative Tc above, gives
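    targets =
         1     1     1     0     0     0     0     1     1     1
         0     0     0     1     1     1     1     0     0     0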

This looks right. It says, for instance, that if we have the first column of P as input, we should get the first column of targets as an output; and that output says the input falls in class 1, which is correct. Now we are ready to call newlvq.

We call newlvq with the proper arguments so that it creates a network with four neurons in the first layer and two neurons in the second layer. The first-layer weights are initialized to the center of the input ranges with the function midpoint. The second-layer weights reflect the class percentages passed to newlvq: 60% of the targets (6 of the 10 entries in Tc above) belong to class 1, and 40% belong to class 2.
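A call consistent with this description (using minmax to obtain the input ranges from the illustrative P above) is

    net = newlvq(minmax(P),4,[.6 .4]);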

We can check to see the initial values of the first-layer weight matrix.
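For the network sketched above,

    net.IW{1,1}

returns

    ans =
         0     0
         0     0
         0     0
         0     0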

These zero weights are indeed the values at the midpoint of the range (-3 to +3) of the inputs, as we would expect when using midpoint for initialization.

We can look at the second-layer weights with
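    net.LW{2,1}

which for this network returns

    ans =
         1     1     0     0
         0     0     1     1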

This makes sense too. It says that if the competitive layer produces a 1 as its first or second element, the input vector is classified as class 1; otherwise it is classified as class 2.

You may notice that the first two competitive neurons are connected to the first linear neuron (with weights of 1), while the second two competitive neurons are connected to the second linear neuron. All other weights between the competitive neurons and linear neurons have values of 0. Thus, each of the two target classes (the linear neurons) is, in fact, the union of two subclasses (the competitive neurons).

We can simulate the network with sim. We use the original P matrix as input just to see what we get.
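A sketch of the simulation, using vec2ind to convert the output vectors back to class indices:

    Y = sim(net,P);
    Yc = vec2ind(Y)

For the untrained network above, this returns

    Yc =
         1     1     1     1     1     1     1     1     1     1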

The network classifies all inputs into class 1. Since this is not what we want, we have to train the network (adjusting the weights of layer 1 only) before we can expect a good result. First we discuss two LVQ learning rules, and then we look at the training process.


