30 years of adaptive neural networks: Perceptron, Madaline, and backpropagation
Article Abstract:
Fundamental developments in feedforward artificial neural networks from the past thirty years are reviewed. The history, origination, operating characteristics, and basic theory of several supervised neural network training algorithms are presented, including the Perceptron rule, the LMS algorithm, three Madaline rules, and the backpropagation technique. These methods were developed independently, but with the perspective of history they can all be related to each other. The concept underlying these algorithms is the 'minimal disturbance principle,' which suggests that during training it is advisable to inject new information into a network in a manner that disturbs stored information to the smallest extent possible. (Reprinted by permission of the publisher.)
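The minimal disturbance principle is easiest to see in the alpha-LMS (Widrow-Hoff) rule the abstract mentions: each update removes a fixed fraction of the current error while changing the weights as little as possible. A minimal sketch, assuming a single linear neuron and hypothetical reference weights `w_true`:

```python
import numpy as np

# Sketch of the alpha-LMS rule on a single linear neuron.  The update
#   w <- w + alpha * e * x / ||x||^2
# removes the fraction alpha of the current error e = d - w.x while
# making the smallest possible change to w ("minimal disturbance").
rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 2.0])    # hypothetical target weights
w = np.zeros(3)
alpha = 0.5                            # correction fraction, 0 < alpha < 2

for _ in range(200):
    x = rng.normal(size=3)             # training input
    d = w_true @ x                     # desired response
    e = d - w @ x                      # error before the update
    w += alpha * e * x / (x @ x)       # alpha-LMS step

print(np.round(w, 4))
```

Because each step corrects only along the current input direction, previously stored responses are disturbed as little as possible, yet the weights still converge to the target on this noiseless problem.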
Publication Name: Proceedings of the IEEE
Subject: Electronics
ISSN: 0018-9219
Year: 1990
CMAC: an associative neural network alternative to backpropagation
Article Abstract:
The Cerebellar Model Arithmetic Computer (CMAC) neural network is an alternative to backpropagated multilayer networks: it trains quickly, is easily implemented in hardware, can learn many nonlinear functions, and generalizes locally. Because CMAC does not use global generalization, it is fast and not significantly affected by learning interference, but it requires care in design to avoid interference from hash coding. If improperly designed, CMAC can also converge to solutions whose error levels are unacceptable for a particular application. CMAC can be used in applications such as robot control, pattern recognition and signal processing.
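The local generalization and hash coding described above can be sketched with a toy one-dimensional CMAC. Everything here (`n_tilings`, `tile_width`, the hashed weight table size) is an assumed design choice for illustration, not the paper's implementation:

```python
import numpy as np

# Toy CMAC: several offset tilings of the input range, each input
# activating one cell per tiling.  Cells are mapped into a fixed weight
# table by hashing; collisions in that table are the "hash-coding
# interference" a careful design must keep small.
class CMAC:
    def __init__(self, n_tilings=8, tile_width=0.2, table_size=2048):
        self.n_tilings = n_tilings
        self.tile_width = tile_width
        self.table = np.zeros(table_size)

    def _active_cells(self, x):
        # One active cell per tiling; offsetting each tiling gives
        # local generalization at resolution tile_width / n_tilings.
        cells = []
        for t in range(self.n_tilings):
            offset = t * self.tile_width / self.n_tilings
            idx = int((x + offset) // self.tile_width)
            cells.append(hash((t, idx)) % len(self.table))
        return cells

    def predict(self, x):
        return sum(self.table[c] for c in self._active_cells(x))

    def train(self, x, target, lr=0.5):
        # Spread the correction over the active cells only, so the
        # update is fast and barely disturbs distant inputs.
        e = target - self.predict(x)
        for c in self._active_cells(x):
            self.table[c] += lr * e / self.n_tilings

cmac = CMAC()
xs = np.linspace(0.0, 1.0, 50)
for _ in range(100):
    for x in xs:
        cmac.train(x, np.sin(2 * np.pi * x))
print(max(abs(cmac.predict(x) - np.sin(2 * np.pi * x)) for x in xs))
```

After training, the approximation error on the sampled sine wave is small; shrinking the weight table makes hash collisions, and hence interference, more likely.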
Publication Name: Proceedings of the IEEE
Subject: Electronics
ISSN: 0018-9219
Year: 1990
Convergence properties and stationary points of a Perceptron learning algorithm
Article Abstract:
When a Gaussian random vector is the input to a Perceptron, the algorithm's stationary points are not unique, and the step size mu and the momentum constant alpha determine the algorithm's behavior near convergence. A Perceptron is an adaptive linear neuron that produces one of two discrete values and is used as the basis for multilayer, feedforward neural networks. The least-mean-square adaptive algorithm is used to train a single-layer Perceptron by adjusting its internal weights.
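As a rough illustration of the setup in this abstract, here is a minimal sketch of LMS training of a single linear unit with step size mu and momentum constant alpha on Gaussian random input vectors; the reference weights `w_true` and the parameter values are hypothetical:

```python
import numpy as np

# LMS with momentum on Gaussian random inputs.  The linear unit's
# weights are adjusted toward a hypothetical reference solution; the
# discrete Perceptron output is the sign of the linear response.
rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])   # hypothetical reference weights
w = np.zeros(3)
velocity = np.zeros(3)
mu, alpha = 0.02, 0.5                  # step size and momentum constant

for _ in range(2000):
    x = rng.normal(size=3)             # Gaussian random input vector
    e = w_true @ x - w @ x             # error of the linear output
    velocity = alpha * velocity + mu * e * x   # momentum-smoothed step
    w += velocity

# Thresholding the linear response gives one of two discrete values.
x = rng.normal(size=3)
print(int(np.sign(w @ x)))
```

Near convergence, how closely the weights settle (and whether they oscillate) depends on the interplay of mu and alpha, which is the behavior the paper analyzes.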
Publication Name: Proceedings of the IEEE
Subject: Electronics
ISSN: 0018-9219
Year: 1990