Neural computation of arithmetic functions
Article Abstract:
Neural networks can compute common functions such as sorting and multiplication and offer advantages over traditional logic circuits. For these computations, a restricted class of neurons is preferable to the classical neuron model. A shallow feedforward network of such restricted neurons can compute not only these basic functions but also more complex ones such as division, rational functions, and multiple products. Neural networks are advantageous for these operations because they operate with constant delay, whereas logic circuits require delay that grows without bound as the input size increases. The networks also improve on previous results: they need only four unit delays to sort n n-bit numbers and five to produce the product of two n-bit numbers. In addition, each threshold element in these shallow networks requires only weights of limited accuracy.
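As an illustration of what a single threshold element can do in one unit delay, the sketch below (illustrative only, not the construction from the article; the function name, bit ordering, and power-of-two weights are assumptions) compares two n-bit numbers with one linear-threshold gate, a task that needs depth growing with n when built from bounded fan-in AND/OR gates.

    # Illustrative sketch: a single linear-threshold gate deciding X >= Y for
    # two n-bit numbers, using weights +/- 2^i. Note that these particular
    # weights grow exponentially with n; the article's point is that its
    # shallow networks need threshold elements with only limited weight accuracy.

    def threshold_compare(x_bits, y_bits):
        """Return 1 if X >= Y, where x_bits and y_bits list the bits MSB first."""
        n = len(x_bits)
        s = sum((2 ** (n - 1 - i)) * (x_bits[i] - y_bits[i]) for i in range(n))
        return 1 if s >= 0 else 0

    # Example: compare 13 (1101) with 11 (1011).
    print(threshold_compare([1, 1, 0, 1], [1, 0, 1, 1]))  # -> 1, since 13 >= 11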
Publication Name: Proceedings of the IEEE
Subject: Electronics
ISSN: 0018-9219
Year: 1990
Construction of the Voronoi Diagram for "one million" generators in single precision arithmetic
Article Abstract:
A sophisticated implementation of the incremental-type algorithm is the most practical approach to Voronoi diagram construction because it runs in O(n) time for n generators. With this approach, the constructed diagram shares some topological properties with the true Voronoi diagram. The algorithm's basic structure is designed purely in terms of combinatorial computation; numerical results are used only to select the more probable structure of the Voronoi diagram. Because topological consistency is treated as more fundamental than numerical results, the algorithm never encounters a topological inconsistency: it always completes its job and produces some output. The Voronoi diagram and its incremental construction are discussed.
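The following sketch illustrates the "topology first" idea described in the abstract: a numerical predicate is used only to judge which combinatorial alternative is more probable, never to guarantee correctness. The function names and the two-way choice below are assumptions made for illustration, not code from the article.

    # The standard incircle determinant is the kind of numerical judgment an
    # incremental Voronoi/Delaunay algorithm relies on. In the topology-first
    # approach, its (possibly unreliable, single-precision) sign only ranks the
    # alternatives; structural checks on the diagram's data structure keep the
    # result topologically consistent so the algorithm always produces output.

    def incircle_determinant(p, q, r, s):
        """Positive if s lies inside the circle through p, q, r (counter-clockwise)."""
        ax, ay = p[0] - s[0], p[1] - s[1]
        bx, by = q[0] - s[0], q[1] - s[1]
        cx, cy = r[0] - s[0], r[1] - s[1]
        return (
            (ax * ax + ay * ay) * (bx * cy - cx * by)
            - (bx * bx + by * by) * (ax * cy - cx * ay)
            + (cx * cx + cy * cy) * (ax * by - bx * ay)
        )

    def more_probable_alternative(p, q, r, s):
        """Pick the combinatorial alternative the noisy value suggests is more probable."""
        return "flip" if incircle_determinant(p, q, r, s) > 0.0 else "keep"

    print(more_probable_alternative((0, 0), (1, 0), (0, 1), (0.9, 0.9)))  # -> 'flip'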
Publication Name: Proceedings of the IEEE
Subject: Electronics
ISSN: 0018-9219
Year: 1992
On the convergence properties of the Hopfield model
Article Abstract:
The convergence properties of the Hopfield neural network model can be reduced to a simple case that admits an elementary proof. Three convergence results are known for the Hopfield model with interconnection matrix W: the model converges to a stable state when W is symmetric and the nodes are updated serially, to a cycle of length at most two when W is symmetric and the nodes are updated in parallel, and to a cycle of length four when W is antisymmetric and the nodes are updated in parallel. Because the parallel mode of updating is a special case of the serial mode, the convergence of the Hopfield model can be proved without using an energy function.
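A minimal simulation of the first case (symmetric W, serial updates converging to a stable state) is sketched below. It is not taken from the article; the random weights, zero thresholds, and fixed update order are assumptions made for illustration.

    import numpy as np

    # Minimal sketch: Hopfield model with a symmetric interconnection matrix W,
    # updated serially (one node at a time). With symmetric W this reaches a
    # stable state, i.e. a state that no single-node update can change.

    rng = np.random.default_rng(0)
    n = 8
    W = rng.standard_normal((n, n))
    W = (W + W.T) / 2           # make W symmetric
    np.fill_diagonal(W, 0.0)    # no self-connections

    state = rng.choice([-1, 1], size=n)

    def serial_update(state, W, sweeps=20):
        """Update one node at a time until no node changes (stable state)."""
        state = state.copy()
        for _ in range(sweeps):
            changed = False
            for i in range(len(state)):
                new_si = 1 if W[i] @ state >= 0 else -1
                if new_si != state[i]:
                    state[i] = new_si
                    changed = True
            if not changed:      # no node wants to change: stable state reached
                break
        return state

    print(serial_update(state, W))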
Publication Name: Proceedings of the IEEE
Subject: Electronics
ISSN: 0018-9219
Year: 1990