2.1 An Abstraction
Imagine an ether-like medium in N dimensions (assume N = 3 for the sake of discussion), the physical properties of which sustain computing.
Admittedly bizarre, such a “continuum” would have
attributes of 1) data state, 2) information propagation,
and 3) the ability to modify the local state. The
degree (or amount) of any one of these characteristics
at a local site within this continuum is a product of its
density function and the bounded contiguous volume
over which it is integrated.
This is easy to understand for state capacity, but
is more difficult for the other two properties. The
amount of state that can be stored at a site in the
continuum is proportional to the local contiguous
volume of the storage. A property of the medium, the
storage density, determines the actual bit content of
the space being considered.
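This density-capacity relation can be written compactly. Using hypothetical symbols (the text does not fix a notation), with V a bounded contiguous volume and ρ_s the storage density of the medium, the state capacity would be:

```latex
C_{\mathrm{state}} = \int_{V} \rho_{s} \, dV
```

For a uniform medium this reduces to C_state = ρ_s · V, the proportionality to volume stated above.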
Ordinarily, the rate of movement of data is
described as bandwidth. In a continuum, data
movement must be treated as a vector. It has
direction but what of its magnitude? In mechanics it
would be its rate of traversal or perhaps its
momentum. For the computing continuum it will be
asserted that the distance covered in unit time along
the direction of the vector orientation is a constant
property of the computing medium. Instead, the
magnitude of the communication vector is the
product of the integral of the communication density
that is a property of the computing medium and the
normal area upon which the vector is incident. The time for a communication vector to transit such a cut is the ratio of the vector's capacity, which may be equated in conventional terms to its total information content (measured in bits), to this bandwidth.
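These two quantities can be expressed in the same integral notation, again with hypothetical symbols: σ_c for the communication density of the medium, A for the normal area (the cut) on which the vector is incident, and I for the vector's total information content in bits:

```latex
B = \int_{A} \sigma_{c} \, dA, \qquad t_{\mathrm{transit}} = \frac{I}{B}
```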
The maximum rate of state modification, or peak performance, is similarly derived. This is the maximum number of operations that can be accomplished in unit time within a bounded contiguous volume of the computing continuum. It is the product of this volume and the performance density coefficient, a property of the medium.
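In the same hedged notation, with π_p a hypothetical performance density coefficient (operations per unit time per unit volume), the peak performance of a bounded contiguous volume V would be:

```latex
P_{\mathrm{peak}} = \int_{V} \pi_{p} \, dV
```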
Sustained performance, even over a short time window and a relatively small volume, is much more complicated, as it is driven by the parallel algorithm being executed.
Actual computational actions, or operations, that
contribute to the sustained performance occur at any
single point (actually tiny volume) when a key
criterion is satisfied: we will call this the “condition
of coincidence”. Coincidence is the state of locality
in time and space of the logical (or abstract)
arguments, the physical resources, and the task
description information. All of these elements,
logical and physical, need to be at the same place at
the same time in order to enable a designated
operation to take place. As self-evident as such an observation may appear, achieving a coincidence event is the foundation of all computer architecture; architectures can be distinguished by the methods and mechanisms they employ to accomplish it.
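The condition of coincidence can be sketched as a simple predicate: an operation at a site may fire only when its logical arguments, physical resources, and task description are all present there at the same time. The names and interface below are illustrative assumptions, not anything defined by the text:

```python
# Hypothetical sketch of the "condition of coincidence". A Site models a
# tiny volume of the continuum and records which elements have arrived;
# an operation is enabled only when everything it needs is co-located.
from dataclasses import dataclass, field


@dataclass
class Site:
    """A tiny volume of the continuum holding whatever has arrived there."""
    arguments: set = field(default_factory=set)   # logical operands present
    resources: set = field(default_factory=set)   # physical units present
    tasks: set = field(default_factory=set)       # task descriptors present


def coincident(site: Site, needed_args: set, needed_res: set, task: str) -> bool:
    """True when every required element is local to the site: coincidence."""
    return (needed_args <= site.arguments
            and needed_res <= site.resources
            and task in site.tasks)


# Example: an add operation needing operands a and b, an ALU, and its task code.
site = Site(arguments={"a", "b"}, resources={"alu"}, tasks={"add"})
print(coincident(site, {"a", "b"}, {"alu"}, "add"))  # all elements co-located
print(coincident(site, {"a", "c"}, {"alu"}, "add"))  # operand c not yet local
```

The predicate deliberately treats logical and physical elements uniformly, mirroring the claim above that both must be at the same place at the same time.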
CCA employs a message-driven strategy of
split-transaction execution. While the final semantic
definition of CCA will depend on specific details of a
given architecture, key aspects of the logical
operation can already be established. The semantics
of CCA differ somewhat from conventional
microprocessor architectures as they embody both the