Abstracts » Electronics

Scientific modeling with massively parallel SIMD computers

Article Abstract:

Simulation is frequently the only way scientists can investigate certain phenomena. The Active Memory Technology distributed array processor (DAP) is particularly well suited to simulations that can be expressed as bit-manipulation problems, but it also handles real-number calculations. A parallel computer is either a single instruction stream, multiple data stream (SIMD) machine, in which all processors perform the same task synchronously but on different data, or a multiple instruction stream, multiple data stream (MIMD) machine, in which both the tasks and the data may differ. SIMD computers suit many problems in scientific programming that repeat the same operation many times on different data. The operation of the DAP, a massively parallel SIMD supercomputer with either 32 x 32 or 64 x 64 processing elements, is very straightforward. DAP performance is best when the state of subsystems can be represented by logical variables. Several cellular automaton models have been implemented on the DAP: sand on a table, a forest fire, and evolutionary modeling with CAfE (Cellular Automata for Evolution).
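
As a rough illustration of the data-parallel style described above, the sketch below updates a forest-fire cellular automaton with whole-array Boolean operations on a 64 x 64 grid, in the spirit of the DAP's synchronous logical-plane operations. The rules, probabilities, and names used here are illustrative assumptions, not details taken from the article.

  import numpy as np

  # Forest-fire cellular automaton written as whole-array Boolean operations,
  # mimicking the SIMD style in which every processing element applies the
  # same rule to its own cell at the same time. (Illustrative sketch only.)
  rng = np.random.default_rng(0)

  N = 64                    # grid edge, matching a 64 x 64 processor array
  P_GROW = 0.01             # assumed probability that an empty cell grows a tree
  P_LIGHTNING = 0.0001      # assumed probability of spontaneous ignition

  tree = rng.random((N, N)) < 0.5         # logical plane: cell holds a tree
  burning = np.zeros((N, N), dtype=bool)  # logical plane: cell is on fire

  def step(tree, burning):
      # A burning neighbour in any of the four mesh directions ignites a tree.
      neighbour_fire = (np.roll(burning, 1, axis=0) | np.roll(burning, -1, axis=0) |
                        np.roll(burning, 1, axis=1) | np.roll(burning, -1, axis=1))
      ignite = tree & (neighbour_fire | (rng.random((N, N)) < P_LIGHTNING))
      grow = ~tree & ~burning & (rng.random((N, N)) < P_GROW)
      new_tree = (tree & ~ignite) | grow   # unburnt trees persist; some empty cells grow
      return new_tree, ignite              # burning cells become empty on the next step

  for _ in range(100):
      tree, burning = step(tree, burning)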

Author: Wilding, Nigel B., Trew, Arthur S., Hawick, Ken A., Pawley, G. Stuart
Publisher: Institute of Electrical and Electronics Engineers, Inc.
Publication Name: Proceedings of the IEEE
Subject: Electronics
ISSN: 0018-9219
Year: 1991
Cellular automata, Modeling, Data modeling software, Simulation, Distributed Processing, Scientific Research, MIMD


Reconfigurable SIMD massively parallel computers

Article Abstract:

Reconfigurable massively parallel computers form a new class of machines. Their distinguishing characteristic is an interconnection network whose reconfigurability can establish a topology well matched to an algorithm's communication graph, which yields higher efficiency. Most computers in this class have a primarily SIMD architecture. Connection autonomy permits a SIMD system to control the network's reconfiguration individually and locally. Architectures in this class include the Polymorphic-torus, the Gated Connection Network, the CLIP series from University College London, reconfigurable bus architectures, and PAPIA2, a pyramid architecture. The time needed to propagate a signal is a critical factor in analyzing reconfigurable algorithms. Fault tolerance must be treated as part of the design in massively parallel processing, since there is a high probability that a fault will occur. Reconfiguration schemes for fault tolerance in massively parallel computers include row/column replacement for mesh networks, localized diagonal replacement, and multitrack schemes.
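
The row/column replacement scheme mentioned for mesh fault tolerance can be sketched as a simple remapping: the logical mesh uses one fewer row than the physical array, and when a processing element fails its whole physical row is retired. The function below is a hypothetical illustration of that bookkeeping, not the scheme analysed in the article.

  # Row replacement on a mesh with one spare row: when a processing element
  # fails, its entire physical row is retired and the logical rows are remapped
  # onto the remaining physical rows. (Hypothetical sketch of the bookkeeping.)
  def remap_rows(physical_rows, faulty_row=None):
      """Map logical rows 0..physical_rows-2 onto healthy physical rows."""
      healthy = [r for r in range(physical_rows) if r != faulty_row]
      logical_rows = physical_rows - 1          # one row is reserved as the spare
      return {logical: healthy[logical] for logical in range(logical_rows)}

  # A 5-row physical array with a fault in row 2: logical rows 0..3 map to
  # physical rows 0, 1, 3, 4, so the mesh keeps its full logical size.
  print(remap_rows(5, faulty_row=2))   # {0: 0, 1: 1, 2: 3, 3: 4}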

Author: Stout, Quentin F., Li, Hungwen
Publisher: Institute of Electrical and Electronics Engineers, Inc.
Publication Name: Proceedings of the IEEE
Subject: Electronics
ISSN: 0018-9219
Year: 1991
Research, Industrial research, Buses, University of London. University College, Research and Development, Computer Design, Fault Tolerance, Fault tolerant computer systems, Reconfiguration


Mapping vision algorithms to parallel architectures

Article Abstract:

Programmers mapping vision algorithms onto parallel machines must design those algorithms for specific parallel architectures, because the architectures vary so widely. There are several major parallel topologies, including mesh, hypercube, mesh-of-trees, pyramid, and the class of parallel random access machines (PRAMs), with either globally shared or distributed memories. Several strategies for mapping algorithms from one parallel environment to another are evaluated and compared, but none succeeds in general across all architectures or applications. The main obstacles to a generally optimal mapping are the need for data reduction in vision processing and the varied communication capabilities of the different parallel architectures. Details of various programming and simulation strategies are described.
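
Data reduction, one of the obstacles cited above, is typically organised as a log-step combine on hypercube-style machines: at step k each node exchanges with the node whose index differs in bit k. The sketch below simulates that pattern for a global maximum over per-node feature values; it is a generic illustration under assumed names, not one of the specific mappings evaluated in the article.

  # Log-step reduction in the style of a d-dimensional hypercube: at step k,
  # each node combines its value with the partner whose index differs in bit k.
  # After log2(n) steps every node holds the global result. (Generic sketch.)
  def hypercube_reduce(values, combine=max):
      n = len(values)
      assert n > 0 and n & (n - 1) == 0, "node count must be a power of two"
      vals = list(values)
      steps = n.bit_length() - 1
      for k in range(steps):
          snapshot = list(vals)                 # exchanges within a step are simultaneous
          for node in range(n):
              partner = node ^ (1 << k)         # neighbour across dimension k
              vals[node] = combine(snapshot[node], snapshot[partner])
      return vals[0]

  # Example: reduce 16 per-node feature values (e.g. local brightness maxima)
  # to a single global maximum in log2(16) = 4 exchange steps.
  features = [37, 12, 250, 8, 91, 45, 200, 3, 17, 66, 128, 5, 74, 33, 9, 180]
  print(hypercube_reduce(features))   # 250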

Author: Stout, Quentin F.
Publisher: Institute of Electrical and Electronics Engineers, Inc.
Publication Name: Proceedings of the IEEE
Subject: Electronics
ISSN: 0018-9219
Year: 1988
Computer vision, Processor architectures, Systems analysis, Image processing, Computer Systems, System Design, Processor Architecture, Machine Vision


Subjects list: Arrays, Massive Parallelism, SIMD, Algorithms, Parallel processing, Network architectures
Similar abstracts:
  • Abstracts: National Science Foundation/Engineering Research Center for Emerging Cardiovascular Technologies. Measurement of defibrillation shock potential distribution and activation sequences of the heart in three dimensions
  • Abstracts: The evolution of electrical and electronics engineering and the Proceedings of the IRE: 1913-1937. The Faraday Bicentennial
  • Abstracts: Thin film deposition and microelectronic and optoelectronic device fabrication and characterization in monocrystalline alpha and beta silicon carbide