Neural Network Toolbox

Transfer Functions

Many transfer functions are included in this toolbox. A complete list of them can be found in Transfer Function Graphs. Three of the most commonly used functions are shown below.

The hard-limit transfer function shown above limits the output of the neuron to either 0, if the net input argument n is less than 0, or 1, if n is greater than or equal to 0. We will use this function in Chapter 3, "Perceptrons," to create neurons that make classification decisions.

The toolbox has a function, hardlim, that realizes the mathematical hard-limit transfer function shown above. Plotting hardlim over the range -5 to +5 shows the step from 0 to 1 at n = 0.
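The original MATLAB listing is not reproduced here. As a minimal sketch of the same idea, assuming Python with NumPy as a stand-in (the function name below mirrors the toolbox's hardlim but is not the toolbox API):

```python
import numpy as np

def hardlim(n):
    """Hard-limit transfer function: 0 where n < 0, 1 where n >= 0."""
    return np.where(n >= 0, 1.0, 0.0)

# Evaluate over the range -5 to +5, as in the manual's plotting example.
n = np.linspace(-5, 5, 101)
a = hardlim(n)

print(hardlim(np.array([-2.0, 0.0, 3.0])))  # prints [0. 1. 1.]
```

Passing the arrays n and a to any plotting routine (e.g. matplotlib's plot) reproduces the step-shaped graph the manual describes.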

All of the mathematical transfer functions in the toolbox can be realized with a function having the same name.

The linear transfer function is shown below. It simply passes its net input through unchanged: a = n.

Neurons of this type are used as linear approximators in Linear Filters.
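The linear function can be sketched the same way, again in Python/NumPy as an illustrative stand-in for the toolbox's purelin function:

```python
import numpy as np

def purelin(n):
    """Linear transfer function: a = n (the identity on the net input)."""
    return np.asarray(n, dtype=float)

n = np.linspace(-5, 5, 11)
a = purelin(n)
print(a[0], a[-1])  # prints -5.0 5.0
```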

The log-sigmoid transfer function shown below takes the input, which may have any value between plus and minus infinity, and squashes the output into the range 0 to 1 according to a = 1/(1 + e^-n).

This transfer function is commonly used in backpropagation networks, in part because it is differentiable.
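A minimal Python/NumPy sketch of this function and its derivative (illustrative stand-ins for the toolbox's logsig; the derivative a(1 - a) is the property backpropagation exploits):

```python
import numpy as np

def logsig(n):
    """Log-sigmoid transfer function: a = 1 / (1 + exp(-n)), output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-n))

def logsig_deriv(n):
    """Derivative of logsig: a * (1 - a), used in backpropagation."""
    a = logsig(n)
    return a * (1.0 - a)

print(logsig(0.0))        # prints 0.5
print(logsig_deriv(0.0))  # prints 0.25
```

Note how large positive inputs saturate near 1 and large negative inputs near 0, which is the "squashing" behavior described above.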

The symbol in the square to the right of each transfer function graph shown above represents the associated transfer function. These icons will replace the general f in the boxes of network diagrams to show the particular transfer function being used.

For a complete listing of transfer functions and their icons, see Transfer Function Graphs. You can also specify your own transfer functions; you are not limited to the transfer functions listed in Reference.

You can experiment with a simple neuron and various transfer functions by running the demonstration program nnd2n1.



© 1994-2005 The MathWorks, Inc.