CHAPTER I
INTRODUCTION
Communication is ubiquitous and vital to our lives. It is also difficult to manage
correctly. Protocols are established and, when followed, permit some subset of possible
messages to be transmitted and, hopefully, received. When introduced to someone we
don't know, we smile, possibly shake hands, and say how glad we are to know that person.
A simple protocol, but what set of messages can it convey? Certainly not a large one. As
the world leans more and more on electronic communication, the set of messages that
need to be communicated across computer networks is growing, and the protocols will need
to grow as well. But what electronic protocols need to be created? What will they
accomplish? Development of communication protocols is complicated and risky.
One way to mitigate the risk is to run simulations of the proposed candidate
protocols. This eliminates the need for an actual physical system; rather, a model of the
system is employed. For the simulations to be meaningful, the model must be precise,
even mathematical [1]. Once a precise model is specified, however, simulation provides
tangible benefits over working with real-world systems [2]:
• Time can be modeled. Working with real systems means that real time limits the
investigation. In a simulation, time itself is a modeled quantity that can be expanded or compressed.
• Sources of variation can be controlled. These sources must be explicitly
defined, which makes it possible to limit them to the sources of interest.
• Measurement error is eliminated. In real-world systems measurement error is inevitable;
in a simulation it is not a concern, since no real-world measurements are being made.
• A simulation can be stopped and reviewed. All components in the simulation are frozen,
and a snapshot of the global state can be obtained and restored at any time, as the sketch below illustrates.
The value of simulation is indisputable when developing new
network protocols.
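To make these properties concrete, consider a minimal sketch of a discrete-event simulator, written here in Python. It is illustrative only and not drawn from any particular tool; the class and method names are assumptions made for this example. Simulated time is just a number that the scheduler advances, and the entire global state is ordinary data that can be snapshotted and restored.

import copy
import heapq

class Simulation:
    """Minimal discrete-event simulator: time is modeled, not real."""

    def __init__(self):
        self.now = 0.0     # the simulated clock; it advances in jumps
        self.seq = 0       # tie-breaker for events scheduled at the same time
        self.queue = []    # pending (time, seq, action) events
        self.state = {}    # global model state, open to inspection

    def schedule(self, delay, action):
        """Schedule action(sim) to run 'delay' simulated time units from now."""
        heapq.heappush(self.queue, (self.now + delay, self.seq, action))
        self.seq += 1

    def snapshot(self):
        """Freeze the whole simulation; it is just data."""
        return copy.deepcopy((self.now, self.seq, self.queue, self.state))

    def restore(self, snap):
        """Return to a previously captured global state."""
        self.now, self.seq, self.queue, self.state = copy.deepcopy(snap)

    def run_until(self, t_end):
        """Advance modeled time by executing events in timestamp order."""
        while self.queue and self.queue[0][0] <= t_end:
            self.now, _, action = heapq.heappop(self.queue)
            action(self)   # handlers mutate self.state and schedule more events

Because the clock is a variable rather than a wall clock, an hour of modeled traffic can execute in milliseconds, and a snapshot taken before a suspect event can be restored to replay it while holding every other source of variation fixed.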
General-purpose protocols need to be layered. The OSI model [3] defines seven layers,
while TCP/IP [4] defines five. Where would the internet be today if TCP were the only socket
interface available to network programmers? It is certainly the transport most used by programs
that communicate over the internet, but the expense of connection setup and teardown sometimes
makes it a sub-optimal choice, especially when a reliable transport isn't necessary. UDP serves a
different crowd, as the contrast sketched below shows.
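The trade-off is visible at the socket interface itself. The following Python sketch uses only standard-library calls; the host and port are made-up values, and the TCP connect assumes a listener is present at that address.

import socket

HOST, PORT = "127.0.0.1", 9999   # illustrative address, not from the text

# TCP: a connection must be set up (three-way handshake) and torn down,
# in exchange for a reliable, ordered byte stream.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.connect((HOST, PORT))    # setup cost is paid here
    tcp.sendall(b"hello over a reliable stream")
    # teardown (the FIN exchange) happens when the socket closes

# UDP: no connection at all; each datagram stands alone and delivery
# is not guaranteed -- cheap when reliability isn't necessary.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"hello, best effort", (HOST, PORT))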
So not only is it necessary to simulate; it is necessary to simulate at different layers,
which means reasoning about those layers independently and then being able to
combine them afterwards. More generally, it is advantageous to reason about a system at various
levels of abstraction in a way that permits conclusions to be drawn at each level. Even more
advantageous is being able to follow that up with a composition of those components, drawing
conclusions about the composed system from the conclusions about its components, as the sketch below illustrates.
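One simple way to picture that kind of composition is to model each layer as a component whose behavior (here, a header-framing round trip) can be checked in isolation, and then to compose the layers into a stack whose end-to-end behavior follows from the per-layer checks. This Python sketch illustrates the idea only; the layer names and framing are invented for the example.

class Layer:
    """A protocol layer whose behavior is specified in isolation."""

    def __init__(self, name):
        self.name = name

    def encode(self, payload: bytes) -> bytes:
        # illustrative framing: prepend a header naming the layer
        return self.name.encode() + b"|" + payload

    def decode(self, frame: bytes) -> bytes:
        header, _, payload = frame.partition(b"|")
        assert header == self.name.encode(), "frame reached the wrong layer"
        return payload

class Stack:
    """A composition of layers, ordered from the top of the stack down."""

    def __init__(self, layers):
        self.layers = layers

    def send(self, payload: bytes) -> bytes:
        for layer in self.layers:            # down the stack: each layer wraps
            payload = layer.encode(payload)
        return payload

    def receive(self, frame: bytes) -> bytes:
        for layer in reversed(self.layers):  # up the stack: each layer unwraps
            frame = layer.decode(frame)
        return frame

# Each layer round-trips on its own; the composed stack therefore round-trips too.
stack = Stack([Layer("transport"), Layer("network"), Layer("link")])
assert stack.receive(stack.send(b"payload")) == b"payload"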
Motivation for this work grew out of an experience using a proprietary product
to develop a simulation of wireless networks. The product was fine, well suited to the
task at hand. Issues arose, however, with license renewals and with the inability to share the work
being done with anyone who did not have a license for the product. The product also required a
“try it and see” approach: there was no discernible way to specify ahead of time, within the tool
set, what the system was supposed to do. Rather, it was necessary to implement the system first and see
what it did.
The concepts of readily accessible, end-to-end tools permeate this effort. The
approach being advocated is that the problem be understood and rigorously specified, a solution
attempted, and then the results verified against expectations. The challenge, then, is to find the right
set of tools for the specification, implementation, and verification phases; a set of tools that meets
the readily accessible, end-to-end goal.
The specification tools, while rigorous, need to be understandable without special
training. They need to be simple, yet powerful enough to specify complex systems. They must
provide facilities for composition, so that the behavior of each network layer can be specified individually
and the layers then connected together to specify a complete protocol stack. The specification tools
should generate output that can be consumed and utilized by the implementation phase of the
development; that is, there should be a link between the specification and implementation
phases.
The implementation tools must likewise be simple yet powerful. They must be capable of
implementing complex, composed system specifications while enforcing a separation of
concerns, so that a specification is expressed more simply as an executable composition of its
components. The implementations must generate output that can feed the verification phase.
Thus, the tools must be amenable to capturing runtime behavior that can be determined