Creating a Self-Organizing Map Neural Network (newsom)
You can create a new SOFM network with the function newsom. This function defines variables used in the two phases of learning, the ordering phase and the tuning phase. These values are used for training and adapting.
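As a sketch (not taken from this page), the learning-phase variables can be supplied as optional trailing arguments to newsom; the argument order and default values below are assumptions about the toolbox version this page documents.

% Assumed syntax: net = newsom(PR,[D1 D2],TFCN,DFCN,OLR,OSTEPS,TLR,TND)
%   OLR    - ordering-phase learning rate        (assumed default 0.9)
%   OSTEPS - ordering-phase steps                (assumed default 1000)
%   TLR    - tuning-phase learning rate          (assumed default 0.02)
%   TND    - tuning-phase neighborhood distance  (assumed default 1)
net = newsom([0 1; 0 1],[3 4],'hextop','linkdist',0.9,1000,0.02,1);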
Consider the following example.
Suppose that we want to create a network having input vectors with two elements that fall in the range 0 to 2 and 0 to 1 respectively. Further suppose that we want to have six neurons in a hexagonal 2-by-3 network. The code to obtain this network is
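net = newsom([0 2; 0 1],[2 3]);   % input ranges [0 2] and [0 1]; 2-by-3 layer (hexagonal topology by default)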
Suppose also that the vectors to train on are
P = [.1 .3 1.2 1.1 1.8 1.7 .1 .3 1.2 1.1 1.8 1.7; ...
     0.2 0.1 0.3 0.1 0.3 0.2 1.8 1.8 1.9 1.9 1.7 1.8]
Plot these vectors, along with the initial weight vectors of the network, as follows.
plot(P(1,:),P(2,:),'.g','markersize',20)
hold on
plotsom(net.iw{1,1},net.layers{1}.distances)
hold off
The various training vectors are seen as fuzzy gray spots around the perimeter of this figure. The initialization for newsom is midpoint. Thus, the initial network neurons are all concentrated at the black spot at (1, 0.5).
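You can check this by looking at the initial weights directly; the value noted in the comment follows from midpoint initialization over these input ranges.

net.iw{1,1}    % each of the six rows is [1 0.5], the midpoint of [0 2] and [0 1]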
When simulating a network, the negative distances between each neuron's weight vector and the input vector are calculated (negdist) to get the weighted inputs. The weighted inputs are also the net inputs (netsum). The net inputs compete (compet) so that only the neuron with the most positive net input will output a 1.
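The following sketch traces that computation for a single input vector p (chosen here only for illustration); because the SOM layer has no biases, the weighted inputs are also the net inputs.

p = [0.5; 0.3];                % example input vector (illustrative)
z = negdist(net.iw{1,1},p)     % weighted inputs: negative Euclidean distances to each neuron's weights
a = compet(z)                  % competitive transfer: the closest neuron outputs 1, all others 0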