Neural Network Toolbox
Introduction
Recurrent networks are a topic of considerable interest. This chapter covers two recurrent networks: Elman networks and Hopfield networks.
Elman networks are two-layer backpropagation networks, with the addition of a feedback connection from the output of the hidden layer to its input. This feedback path allows Elman networks to learn to recognize and generate temporal patterns, as well as spatial patterns. The seminal paper on the Elman network is:
Elman, J. L., "Finding structure in time," Cognitive Science, vol. 14, 1990, pp. 179-211.
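To make the feedback path concrete, here is a minimal sketch of an Elman cell's forward pass. It is written in plain NumPy rather than with the toolbox's MATLAB functions, and the layer sizes, tanh hidden activation, and linear output layer are illustrative assumptions, not the toolbox's defaults:

```python
import numpy as np

def elman_forward(inputs, Wx, Wh, Wy, bh, by):
    """Run a sequence through a minimal Elman cell.

    The hidden state h is copied back each step ("context units")
    and combined with the next input; this feedback is what lets
    the network respond to temporal as well as spatial patterns.
    """
    h = np.zeros(Wh.shape[0])                # context units start at zero
    outputs = []
    for x in inputs:
        h = np.tanh(Wx @ x + Wh @ h + bh)    # hidden layer with feedback
        outputs.append(Wy @ h + by)          # linear output layer
    return np.array(outputs), h

# Illustrative dimensions: 2 inputs, 3 hidden units, 1 output.
rng = np.random.default_rng(0)
Wx = rng.normal(size=(3, 2))
Wh = rng.normal(size=(3, 3))
Wy = rng.normal(size=(1, 3))
bh, by = np.zeros(3), np.zeros(1)

seq = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 0.0])]
ys, h_final = elman_forward(seq, Wx, Wh, Wy, bh, by)
```

Note that the first and third inputs are identical, yet the cell's outputs at those steps differ, because the context units carry a different hidden state into the third step. A purely feedforward two-layer network would produce the same output for both.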
The Hopfield network is used to store one or more stable target vectors. These stable vectors can be viewed as memories that the network recalls when presented with similar vectors that act as cues. You may want to consult a basic paper in this field:
Li, J., A. N. Michel, and W. Porod, "Analysis and synthesis of a class of neural networks: linear systems operating on a closed hypercube," IEEE Transactions on Circuits and Systems, vol. 36, no. 11, November 1989, pp. 1405-1422.
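As an illustration of this recall behavior, here is a small NumPy sketch of a discrete Hopfield network, not the toolbox functions themselves. The Hebbian weight rule, the ±1 pattern coding, and the synchronous sign update are standard textbook choices made here for illustration:

```python
import numpy as np

def train_hopfield(patterns):
    """Build a weight matrix storing the given +/-1 patterns (Hebbian rule)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)             # no self-connections
    return W

def recall(W, cue, steps=10):
    """Iterate the network from a cue until it settles on a stored memory."""
    s = cue.copy().astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                  # break ties deterministically
    return s.astype(int)

# Two stable target vectors act as the network's memories.
memories = np.array([[ 1,  1, 1, -1, -1, -1],
                     [-1, -1, 1,  1,  1, -1]])
W = train_hopfield(memories)

# A noisy cue (the first memory with one bit flipped) is attracted
# back to the stored vector it most resembles.
cue = np.array([1, -1, 1, -1, -1, -1])
print(recall(W, cue))                  # recovers the first memory
```

The stored vectors behave as attractors: starting from the corrupted cue, one update step already restores the first memory, and further iterations leave it unchanged, which is the stability property the chapter refers to.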
© 1994-2005 The MathWorks, Inc.