The difficulty in making decisions under uncertainty is that the bulk of the information
we have about the possible outcomes, about the value of new information, about how
the conditions change with time (dynamics), about the utility of each outcome–action
pair, and about our preferences for each action is typically vague, ambiguous, or otherwise
fuzzy. In some situations, the information may be robust enough that we can
characterize it with probability theory.
In making informed and rational decisions, we must remember that individuals
are essentially risk averse. When the consequences of an action might result in serious injury, death, economic ruin, or some other dreaded event, humans do not make decisions
consistent with rational utility theory (Maes and Faber, 2004). In fact, studies in cognitive
psychology show that rationality is a rather weak hypothesis in decision making, easily
refuted and therefore not always useful as an axiomatic basis for a theory of
decision making. Human risk preference in the face of high uncertainty is not easily
modeled by these rational methods. In the narrow context of decision theory, rational behavior
is defined as behavior that maximizes expected utility (von Neumann
and Morgenstern, 1944). Of course, this utility is a function of the personal preferences of
the decision maker. While much of the literature addresses decision making in the face
of economic and financial risks, engineers are primarily concerned with two types of
decisions (Maes and Faber, 2004): (1) operational decisions, where for certain available
resources an optimal action is sought to avoid a specific set of hazards; and (2) strategic
decisions, which involve decisions regarding one’s level of preparedness or anticipation
of events in the future. Human preference reversal, reliance on incomplete
information, bias toward one's own experience, and difficulty handling epistemic uncertainty
(e.g., ambiguity, vagueness, and fuzziness) are among the issues cited by Maes
and Faber (2004) as reasons why the independence axiom of an expected utility analysis
(used in rational decision making) is violated by human behavior. Although we do not
address these matters in this text, it is nonetheless important to keep them in mind when
using any of the methods developed here.
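The expected-utility criterion underlying rational decision making can be sketched briefly. For each action, the expected utility is the probability-weighted sum of the utilities of its possible outcomes, and the rational choice is the action that maximizes this sum. The actions, probabilities, and utility values below are invented purely for illustration, not drawn from the text:

```python
def expected_utility(action):
    """Probability-weighted sum of utilities over an action's possible outcomes."""
    return sum(p * u for p, u in action["outcomes"])

# Hypothetical operational decision: each outcome is a pair
# (probability of that outcome, utility of that outcome-action pair).
actions = {
    "retrofit":   {"outcomes": [(0.95, 80), (0.05, -100)]},   # modest, reliable payoff
    "do_nothing": {"outcomes": [(0.80, 100), (0.20, -500)]},  # higher payoff, dreaded loss
}

# Rational (expected-utility-maximizing) choice.
best = max(actions, key=lambda name: expected_utility(actions[name]))

for name, act in actions.items():
    print(name, expected_utility(act))
print("rational choice:", best)
```

Note that a risk-averse decision maker might still reject "do_nothing" even if its expected utility were higher, precisely because of the dreaded-event behavior discussed above; the utilities assigned to outcomes are where such personal preferences enter the model.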