When one reads the literature on software-based system
safety, e.g., Leveson's book [3] and other papers
[5, 4], one is struck by the level of detail of the models that must be used to do the hazard analysis. Even
though these models are called blackbox models because
they do not show so-called implementation details, they
do show, for each stimulus from the environment thought
to be relevant, a transition from every state of the state
model. This state model is intended to capture the user's
mental model of the external behavior of the system [4].
Note that only the stimuli thought to be relevant are
handled by the blackbox model. One reason for modifying
the model is the discovery of another stimulus, or
another group of simultaneous stimuli, that is relevant to
the safety of the system, even if the stimuli themselves
were previously known.
Such detail is necessary to carry out any
useful state machine hazard analysis (SMHA), whether by
forward search from possible initial states and stimuli or
by backward search from known hazards [5].
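The forward-search variant of SMHA can be sketched as a reachability search over the blackbox state model: starting from the initial states, follow transitions triggered by each relevant stimulus and flag any reachable state marked hazardous. The tiny tank-control model below (its states, stimuli, and hazard set) is hypothetical, chosen only to make the search concrete; it is not a model from the cited papers.

```python
from collections import deque

# Hypothetical blackbox model: state -> {stimulus -> next state}.
TRANSITIONS = {
    "idle":        {"start": "filling"},
    "filling":     {"full_sensor": "idle", "sensor_fault": "overfilling"},
    "overfilling": {},  # hazardous: the tank may overflow
}
HAZARDS = {"overfilling"}

def forward_search(initial, transitions, hazards):
    """Return hazardous states reachable from `initial`, each paired
    with one stimulus sequence (a counterexample trace) leading to it."""
    found = {}
    seen = set(initial)
    queue = deque((state, []) for state in initial)
    while queue:
        state, trace = queue.popleft()
        if state in hazards:
            found.setdefault(state, trace)
            continue
        # Expand every transition enabled by a relevant stimulus.
        for stimulus, nxt in transitions.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [stimulus]))
    return found

print(forward_search(["idle"], TRANSITIONS, HAZARDS))
# {'overfilling': ['start', 'sensor_fault']}
```

A backward search would run the same traversal over the inverted transition relation, starting from the known hazards instead of the initial states. Note that the analysis is only as good as the model: a stimulus omitted as irrelevant simply produces no transition to explore, which is exactly why discovering a new relevant stimulus forces a model revision.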
The software-based system safety community has
come to regard such blackbox models as requirement-level
models simply because it has no choice. Without
this level of detail, the hazard conditions are simply
invisible, having been abstracted away into states and
transitions in which the conditions and sequences of
events that lead to accidents are not expressible. In a normal
non-safety-critical system, such a model would be
called a high-level design or simply a design. Put in terms
of Leveson’s intent specifications [4], most would consider
as requirements only Section 1, System Purpose;
most would consider as design Sections 2, Design Principles,
3, Blackbox Behavior, and 4, Physical and Logical Function; and most would consider as implementation
documentation Section 5, Physical Realization.