Neural Network Toolbox

Introduction

The linear networks discussed in this chapter are similar to the perceptron, but their transfer function is linear rather than hard-limiting. This allows their outputs to take on any value, whereas the perceptron output is limited to either 0 or 1. Linear networks, like the perceptron, can only solve linearly separable problems.
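
For example, the following minimal sketch contrasts the two transfer functions (the sample net-input values are invented for illustration):

    n = -3:0.5:3;          % sample net-input values
    a_linear = purelin(n)  % unchanged: a linear neuron can output any value
    a_hard   = hardlim(n)  % 0 for n < 0, 1 for n >= 0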

Here we will design a linear network that, when presented with a set of given input vectors, produces the corresponding target vectors as outputs. For each input vector, we can calculate the network's output vector. The difference between an output vector and its target vector is the error. We would like to find values for the network weights and biases such that the sum of the squares of the errors is minimized or falls below a specific value. This problem is manageable because linear systems have a single error minimum. In most cases, we can calculate a linear network directly, such that its error is a minimum for the given input vectors and target vectors. In other cases, numerical problems prohibit direct calculation. Fortunately, we can always train the network to have a minimum error by using the Least Mean Squares (Widrow-Hoff) algorithm.
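
For example, here is a minimal sketch of the direct-design case using newlind (the input and target values are invented for illustration):

    P = [1 2 3];            % input vectors, one element each
    T = [2.0 4.1 5.9];      % corresponding target vectors
    net = newlind(P,T);     % solve directly for weights and biases
    A = sim(net,P);         % network outputs for the given inputs
    E = T - A;              % errors
    perf = sse(E)           % sum of squared errors (minimized by design)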

Note that the use of linear filters in adaptive systems is discussed in Chapter 10.

This chapter introduces newlin, a function that creates a linear layer, and newlind, a function that designs a linear layer directly from a set of input vectors and target vectors.
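
As a sketch of newlin in use, the following creates a linear layer and trains it with the Widrow-Hoff rule (the input range, learning rate, and data below are invented for illustration):

    net = newlin([-2 2],1,0,0.01);  % one input in [-2,2], one neuron,
                                    % no input delays, learning rate 0.01
    P = [-1 0 1 2];                 % example inputs
    T = [-3 -1 1 3];                % example targets (T = 2*P - 1)
    net.trainParam.epochs = 100;    % limit the number of training epochs
    net = train(net,P,T);           % iterative least mean square training
    A = sim(net,P)                  % outputs should approach T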

You can type help linnet to see a list of linear network functions, demonstrations, and applications.


