As computer technology has evolved, and as the cost of computer hardware has
dropped, computer designers have sought more and more opportunities for parallelism, usually to enhance performance and, in some cases, to increase availability. After an
overview, this chapter looks at some of the most prominent approaches to parallel organization.
First, we examine symmetric multiprocessors (SMPs), one of the earliest
and still the most common example of parallel organization. In an SMP organization,
multiple processors share a common memory. This organization raises the issue of
cache coherence, to which a separate section is devoted. Then we describe clusters,
which consist of multiple independent computers organized in a cooperative fashion.
Clusters have become increasingly common to support workloads that are beyond the
capacity of a single SMP. Next, the chapter examines multithreaded processors and
chip multiprocessors. Another approach to the use of multiple processors that we
examine is that of nonuniform memory access (NUMA) machines. The NUMA
approach is relatively new and not yet proven in the marketplace, but is often considered
an alternative to the SMP or cluster approach. Finally, this chapter looks at
hardware organizational approaches to vector computation. These approaches optimize
the ALU for processing vectors or arrays of floating-point numbers. They are
common on the class of systems known as supercomputers.