The data flow paradigm of computation was popularized in the 1960s and 1970s and describes non-von Neumann
architectures capable of fine-grained parallelism in the computation process.
In a data flow architecture, computation is not driven by the flow of instructions; there is no concept of a program counter. Instead, computation is controlled by the flow of data. An instruction is executed as soon as all of its operands are present. When executed, an instruction produces output operands, which serve as input operands for other instructions.
The data flow paradigm of computing uses a directed graph G = (V, E), called a Data Flow Graph (DFG), to describe the behavior of a data-driven computer. A vertex v ∈ V is an actor, and a directed edge e ∈ E describes the precedence relationship between a source actor and a target actor, guaranteeing correct execution of the data flow program. This ensures the proper order of
instruction execution while allowing instructions to execute in parallel. Tokens are used to indicate the presence of data in the DFG.
An actor in a data flow program can execute only when the requisite number of data values (tokens) is present on its input edges. When an actor fires, a defined number of tokens is consumed from the input edges and a defined number of tokens is produced on the output edges. An important characteristic of a data flow program is its ability to expose parallelism of computation, and this detection is possible at the lowest level: the level of machine instructions.
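The firing rule described above can be illustrated with a minimal sketch. All names and data structures here are hypothetical, intended only to show the consume/produce behavior of an actor, not any real data flow machine:

```python
# Hypothetical sketch of data-flow firing: an actor fires as soon as
# tokens are present on all of its input edges.

# Edges hold queues of tokens; each actor names its input and output edges.
edges = {"a": [2], "b": [3], "c": []}            # tokens currently on each edge
actors = [
    # (name, input edges, output edges, operation)
    ("add", ["a", "b"], ["c"], lambda x, y: x + y),
]

def step(edges, actors):
    """Fire every actor whose input edges all carry at least one token."""
    fired = []
    for name, ins, outs, op in actors:
        if all(edges[e] for e in ins):             # firing rule: all operands present
            args = [edges[e].pop(0) for e in ins]  # consume one token per input edge
            result = op(*args)
            for e in outs:                         # produce a token on each output edge
                edges[e].append(result)
            fired.append(name)
    return fired

step(edges, actors)
print(edges["c"])   # [5]: the "add" actor consumed its operands and produced a token
```

Note that `step` scans all actors in one pass; in a real data flow machine, every actor whose operands are ready could fire concurrently, which is the source of the fine-grained parallelism.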
There are static, dynamic, and hybrid data flow computing models.
In the static model, at most one token may be placed on an edge at a time. Consequently, an actor may fire only when no token is present on its output edges.
The disadvantage of the static model is that it cannot exploit dynamic forms of parallelism, such as loop and recursive parallelism. A computer with a static data flow architecture was first introduced by Dennis and Misunas in 1974 [8].
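The static firing rule can be stated as a simple predicate. The following sketch (with illustrative names) checks both conditions: every input edge holds a token, and every output edge is empty, since the static model permits at most one token per edge:

```python
# Static-model firing rule: all inputs occupied AND all outputs empty.
def can_fire_static(edges, ins, outs):
    inputs_ready = all(len(edges[e]) == 1 for e in ins)    # one token on each input edge
    outputs_clear = all(len(edges[e]) == 0 for e in outs)  # no token on any output edge
    return inputs_ready and outputs_clear

edges = {"a": ["t"], "b": ["t"], "c": ["t"]}       # output edge "c" still occupied
print(can_fire_static(edges, ["a", "b"], ["c"]))   # False: previous result not yet consumed
edges["c"] = []
print(can_fire_static(edges, ["a", "b"], ["c"]))   # True
```

The output-edge condition is exactly what blocks loop parallelism: a second loop iteration cannot place its token on an edge until the first iteration's token has been consumed.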
The dynamic model of data flow computer architecture allows more than one token to be placed on an edge at the same time. To implement this feature, token tagging was introduced:
each token carries a tag that identifies its conceptual position in the token flow.
An actor fires only when a token with the same tag is present on each of its input edges. When the actor fires, those tokens are consumed and a predefined number of tokens is produced on its output edges.
Unlike the static model, there is no requirement that the output edges of an actor be empty before it fires. The architecture of the dynamic data flow computer was first introduced at the Massachusetts Institute of Technology (MIT) as the Tagged Token Dataflow Architecture [9].
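Tag matching can be sketched as follows. Tokens are modeled as (tag, value) pairs, and the actor fires only for a tag that appears on every input edge; the names and structures are again illustrative, not a description of the MIT machine:

```python
# Sketch of tagged-token matching in the dynamic model:
# an actor fires only when every input edge holds a token with the same tag.

def find_matching_tag(edges, ins):
    """Return a tag present on every input edge, or None."""
    tags = None
    for e in ins:
        edge_tags = {tag for tag, _ in edges[e]}
        tags = edge_tags if tags is None else (tags & edge_tags)
    return next(iter(tags)) if tags else None

def fire_tagged(edges, ins, outs, op):
    tag = find_matching_tag(edges, ins)
    if tag is None:
        return None                               # no complete tag set: actor cannot fire
    args = []
    for e in ins:                                 # consume the matching token on each input
        tok = next(t for t in edges[e] if t[0] == tag)
        edges[e].remove(tok)
        args.append(tok[1])
    result = (tag, op(*args))                     # the result inherits its operands' tag
    for e in outs:
        edges[e].append(result)
    return tag

# Tokens from two loop iterations (tags 0 and 1) coexist on edge "a".
edges = {"a": [(0, 2), (1, 10)], "b": [(1, 20)], "c": []}
fire_tagged(edges, ["a", "b"], ["c"], lambda x, y: x + y)
print(edges["c"])   # [(1, 30)]: only tag 1 had tokens on both input edges
```

Because tokens from different iterations are distinguished by tag, several iterations of a loop can be in flight on the same edges simultaneously, which is precisely the dynamic parallelism the static model cannot express. The cost of this matching step is one of the deficiencies noted below.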
A hybrid data flow architecture combines control flow and data flow computation control
mechanisms.
Data flow computing has remained predominantly the domain of research laboratories and scientific institutions and has had limited impact on commercial computing, owing to the cost of communication, the organization of computation, the manipulation of structured data, and the cost of token matching [10][11].
The paradigm of tile computing, in combination with data flow computing, brings new possibilities for overcoming some of these deficiencies of data flow architectures.