Simple Neuron
A neuron with a single scalar input and no bias appears on the left below.
The scalar input p is transmitted through a connection that multiplies it by the scalar weight w to form the product wp, again a scalar. Here the weighted input wp is the only argument of the transfer function f, which produces the scalar output a. The neuron on the right has a scalar bias, b. You can view the bias either as simply being added to the product wp, as shown by the summing junction, or as shifting the function f to the left by an amount b. The bias is much like a weight, except that it has a constant input of 1.
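As a concrete illustration, both neurons can be written directly in MATLAB. The following is a minimal sketch, not toolbox code: the values of p, w, and b and the choice of a log-sigmoid transfer function are assumptions for the example.

    % Log-sigmoid transfer function, chosen here for illustration
    f = @(n) 1 ./ (1 + exp(-n));

    p = 2.0;               % scalar input
    w = 0.5;               % scalar weight
    b = -1.0;              % scalar bias

    a_nobias = f(w*p);     % neuron on the left:  a = f(wp)
    a_bias   = f(w*p + b); % neuron on the right: a = f(wp + b)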
The net input n to the transfer function, again a scalar, is the sum of the weighted input wp and the bias b. This sum is the argument of the transfer function f. (Radial Basis Networks discusses a different way to form the net input n.) Here f is a transfer function, typically a step function or a sigmoid function, which takes the argument n and produces the output a. Examples of various transfer functions are given in the next section. Note that w and b are both adjustable scalar parameters of the neuron. The central idea of neural networks is that such parameters can be adjusted so that the network exhibits some desired or interesting behavior. Thus, we can train the network to do a particular job by adjusting the weight or bias parameters, or perhaps the network itself will adjust these parameters to achieve some desired end.
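To make the idea of adjusting w and b concrete, here is a minimal sketch of one step of the classic perceptron learning rule applied to a single hard-limit neuron. The rule itself (w = w + e*p, b = b + e, with error e = t - a) is standard; the particular starting values and the inline hardlim definition are assumptions for the example.

    hardlim = @(n) double(n >= 0);   % step transfer function

    p = 1.5;  t = 1;                 % one training pair: input and target
    w = -0.8; b = 0;                 % initial parameters

    a = hardlim(w*p + b);            % current output (0 here)
    e = t - a;                       % error (1 here)

    w = w + e*p;                     % adjust the weight to reduce the error
    b = b + e;                       % adjust the bias, whose input is the constant 1

    a_new = hardlim(w*p + b);        % output after one adjustment (now 1)

Repeating such updates over a set of training pairs is the essence of the training procedures discussed in later chapters.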
All of the neurons in this toolbox have provision for a bias; a bias is used in many of our examples and is assumed in most of this toolbox. However, you may omit the bias in a neuron if you want.
As previously noted, the bias b is an adjustable (scalar) parameter of the neuron. It is not an input. However, the constant 1 that drives the bias is an input and must be treated as such when considering the linear dependence of input vectors in Linear Filters.
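This can be seen by treating the bias as a weight on a constant input of 1 and augmenting the input vector accordingly. A minimal sketch, again with assumed values:

    f = @(n) 1 ./ (1 + exp(-n));     % log-sigmoid, as above

    p = 2.0;  w = 0.5;  b = -1.0;

    a1 = f(w*p + b);                 % bias written explicitly
    a2 = f([w b] * [p; 1]);          % bias as a weight on a constant input of 1
    % a1 and a2 are identical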