Abstracts - faqs.org


Mapping neural nets onto a massively parallel architecture: a defect-tolerance solution

Article Abstract:

There are a number of approaches to highly parallel computer architectures, and they share one characteristic: an algorithm is the backbone of the solution to any given problem. A different approach is to train the machine rather than rely on a prescribed algorithm. Neural nets can be mapped onto massively parallel architectures using a solution based on regular array structures. Digital implementations of neural nets face three main problems: the connectivity requirements, the size of the weight storage, and the structure of the processing units that evaluate sigmoid-like activation functions. A modified mesh-connected array can be adapted to emulate any neural net; it permits a relevant degree of operational parallelism, allows the neural net to be mapped onto the array easily, supports the design of a general-purpose, uncommitted neural architecture that can be customized, and can tolerate defects.

Authors: Distante, Fausto; Sami, Mariagiovanna; Stefanelli, Renato; Storti-Gajani, Giancarlo
Publisher: Institute of Electrical and Electronics Engineers, Inc.
Publication Name: Proceedings of the IEEE
Subject: Electronics
ISSN: 0018-9219
Year: 1991
Neural networks, Architecture, Computer Learning, Arrays, Neural Network
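
The abstract describes the mapping only at a high level; the paper's actual array and weight-storage design is not reproduced here. The following Python sketch is a loose, hypothetical illustration of the idea: each row of a mesh of processing elements computes one neuron, each PE holds a single weight and accumulates a partial sum, a sigmoid-like unit closes the row, and a spare PE per row stands in for one defective PE. The names mesh_layer, sigmoid, and defects are illustrative, not taken from the paper.

    import numpy as np


    def sigmoid(s):
        """Sigmoid-like activation evaluated at the output stage of each row."""
        return 1.0 / (1.0 + np.exp(-s))


    def mesh_layer(x, W, defects=None):
        """Emulate one fully connected layer on a mesh of processing elements.

        Row i of the mesh computes output neuron i: each PE (i, j) stores the
        single weight W[i, j], multiplies it by the input x[j] arriving on its
        column, and passes the running partial sum to its right-hand neighbour.
        A spare PE at the end of each row takes over the weight of at most one
        defective PE in that row, so a defect changes the mapping but not the
        result (the sketch assumes no more than one defect per row).

        defects: optional set of (i, j) pairs marking broken PEs.
        """
        n_out, n_in = W.shape
        defects = defects or set()
        y = np.empty(n_out)
        for i in range(n_out):
            acc = 0.0                    # partial sum travelling along row i
            spare_j = None               # input index remapped to the spare PE
            for j in range(n_in):
                if (i, j) in defects:
                    spare_j = j          # the healthy spare PE will do this product
                    continue
                acc += W[i, j] * x[j]    # normal PE: multiply-accumulate
            if spare_j is not None:
                acc += W[i, spare_j] * x[spare_j]   # spare PE's contribution
            y[i] = sigmoid(acc)          # activation unit at the row's output
        return y


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        W = rng.normal(size=(4, 6))
        x = rng.normal(size=6)
        # Same result with and without a single defective PE per row.
        print(mesh_layer(x, W))
        print(mesh_layer(x, W, defects={(0, 2), (3, 5)}))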


The characterization and representation of massively parallel computing structures

Article Abstract:

It is difficult to represent graphically a computer that contains a million small computers, yet it is necessary to do so in order to provide details of individual processing elements, describe interconnections, and delineate the function of control signals. A massively parallel operator should be thought of as an operator that transforms thousands of inputs at one time, not as a machine formed of thousands of individual operators. Schema representation can describe digital computers using a pair of schematic-type diagrams: one shows the paths available for data flow, and the other shows the order of operations needed to obtain correct results. A Petri net, which is a model of information flow, can be used to specify the operation of massively parallel computers.

Author: Schaefer, David H.
Publisher: Institute of Electrical and Electronics Engineers, Inc.
Publication Name: Proceedings of the IEEE
Subject: Electronics
ISSN: 0018-9219
Year: 1991
Petri nets, Rendering
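
The abstract mentions Petri nets as a model of information flow but gives no formalism. The sketch below assumes nothing beyond the textbook definition of a place/transition net and shows how a marking and a set of transitions can specify when a combining stage in a parallel machine may fire; the class name PetriNet and the place names in_a, in_b, and combined are illustrative, not from the paper.

    from dataclasses import dataclass, field


    @dataclass
    class PetriNet:
        """Minimal Petri net: places hold token counts, transitions move tokens.

        A transition is enabled when every one of its input places holds at
        least one token; firing it consumes one token from each input place
        and deposits one token in each output place, modelling information
        flow between processing stages.
        """
        marking: dict = field(default_factory=dict)        # place -> token count
        transitions: dict = field(default_factory=dict)    # name -> (inputs, outputs)

        def enabled(self, t):
            ins, _ = self.transitions[t]
            return all(self.marking.get(p, 0) >= 1 for p in ins)

        def fire(self, t):
            if not self.enabled(t):
                raise ValueError(f"transition {t!r} is not enabled")
            ins, outs = self.transitions[t]
            for p in ins:
                self.marking[p] -= 1
            for p in outs:
                self.marking[p] = self.marking.get(p, 0) + 1


    if __name__ == "__main__":
        # Two parallel input streams must both deliver data before the
        # combining stage may fire: a toy model of synchronised operations
        # in a parallel machine.
        net = PetriNet(
            marking={"in_a": 1, "in_b": 1, "combined": 0},
            transitions={"combine": (["in_a", "in_b"], ["combined"])},
        )
        print(net.enabled("combine"))   # True: both inputs hold a token
        net.fire("combine")
        print(net.marking)              # {'in_a': 0, 'in_b': 0, 'combined': 1}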



Routing techniques for massively parallel communication

Article Abstract:

There are a number of packet-switching routing techniques for parallel computers. Each technique should be evaluated in terms of how it handles network delay, deadlock freedom, livelock freedom, starvation freedom, injection competition, private-buffer recirculation, injection-token recirculation, and packet-injection control. The techniques are: random routing; routing based on combining messages; adaptive, minimal routing; multibutterflies; fully adaptive routing; the chaos router; the exchange model; and routing by sorting.

Authors: Sanz, Jorge L.C.; Felperin, Sergio A.; Gravano, Luis; Pifarre, Gustavo D.
Publisher: Institute of Electrical and Electronics Engineers, Inc.
Publication Name: Proceedings of the IEEE
Subject: Electronics
ISSN: 0018-9219
Year: 1991
Packet switches, Packet Switch, Routing
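
The abstract lists the routing techniques without describing any of them in detail. As a loose illustration of the first one, random routing, the sketch below implements Valiant-style two-phase routing on a hypercube: each packet first travels to a randomly chosen intermediate node and then on to its destination, with dimension-order routing used for each phase. The hypercube topology and the names ecube_path and random_two_phase are assumptions for the example, not taken from the paper.

    import random


    def ecube_path(src, dst, dim):
        """Dimension-order (e-cube) route on a hypercube of 2**dim nodes.

        Correct each address bit in a fixed order; every hop flips one bit,
        so the path length equals the Hamming distance between src and dst.
        """
        path, node = [src], src
        for b in range(dim):
            if (node ^ dst) & (1 << b):
                node ^= 1 << b
                path.append(node)
        return path


    def random_two_phase(src, dst, dim, rng=random):
        """Valiant-style random routing: visit a random intermediate node first.

        Phase 1 routes src -> intermediate, phase 2 routes intermediate -> dst;
        the random detour spreads traffic so that no fixed permutation can
        concentrate load on a few links.
        """
        mid = rng.randrange(2 ** dim)
        return ecube_path(src, mid, dim) + ecube_path(mid, dst, dim)[1:]


    if __name__ == "__main__":
        dim = 4

        # A permutation that stresses deterministic routing: every node sends
        # to its bit-reversed address.
        def bit_reverse(x):
            return int(format(x, f"0{dim}b")[::-1], 2)

        for src in range(2 ** dim):
            dst = bit_reverse(src)
            print(src, "->", dst, random_two_phase(src, dst, dim))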



Subjects list: Industrial research, Algorithms, Research and Development, Computer Design, Technical, Massive Parallelism
Similar abstracts:
  • Abstracts: Making GaAs integrated circuits. Nanostructure patterning
  • Abstracts: Using the multistage cube network topology in parallel supercomputers. Scattering from a perfectly conducting cube
  • Abstracts: CMAC: an associative neural network alternative to backpropagation. Backpropagation through time: what it does and how to do it
  • Abstracts: Neural computation of arithmetic functions. On the convergence properties of the Hopfield model. Construction of the Voronoi Diagram for "one million" generators in single precision arithmetic
  • Abstracts: Therapeutic and diagnostic application of lasers in ophthalmology. Tunable solid-state lasers