accepting some hypotheses and rejecting others. Hypotheses and evidence are related to each other by
both positive constraints that concern how they fit together and negative constraints between
representations that resist fitting together. The most important kind of positive constraint is that when
a hypothesis explains a piece of evidence, they cohere with each other. For example, the hypothesis
that Simpson killed Nicole fits with the evidence that Nicole is dead because the killing causally
explains the death. The most important kind of negative constraint is between hypotheses that
contradict each other or that compete more loosely to explain some piece of evidence. For example,
the hypothesis that Simpson killed Nicole competes with the defense's hypothesis that she was killed
by drug dealers.
To see how this might work in the brain, begin with a highly simplified view of elements such as
hypotheses and evidence as represented by single neuronlike units rather than by patterns of activation
in neural populations. We can then build an artificial neural network that represents constraints among
elements by links between the units that stand for them. Figure 4.2 shows a simple network that has
units representing competing hypotheses in the Simpson case. Positive constraints based on what
explains what are captured by excitatory links between units, roughly analogous to the synaptic
connections that enable one neuron to excite another. Note that figure 4.2 includes multiple levels of
explanatory hypotheses: the hypothesis that Simpson was angry at his ex-wife explains why he killed
her, which in turn explains why she is dead. Negative constraints are captured by inhibitory links between units.
Another positive constraint that affects the network is that we should tend to accept what we have
observed with our senses, in this case that Nicole is dead.
To identify the best explanation of the evidence, we need to maximize satisfaction of the positive and
negative constraints, where a positive constraint between elements is satisfied by accepting both of
them, and a negative constraint by accepting one element and rejecting the other.
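One way to make "maximizing constraint satisfaction" precise is with a Hopfield-style harmony measure; the notation below is a standard formalization, not taken from the text:

```latex
% Harmony of an assignment of activations a_i in [-1, 1], where
% a_i > 0 means unit i is accepted and a_i < 0 means it is rejected;
% w_ij > 0 encodes a positive (explanatory) constraint between i and j,
% w_ij < 0 a negative (competitive) constraint.
H(a) = \sum_{(i,j)} w_{ij} \, a_i \, a_j
```

Accepting both ends of a positive constraint (w_ij > 0 with a_i, a_j > 0) raises H, while accepting both ends of a negative constraint (w_ij < 0) lowers it, so assignments with high harmony are exactly those that satisfy many constraints.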
Fortunately, there are various computational algorithms available for maximizing constraint
satisfaction. The most psychologically natural one uses a number called activation to represent the
high or low acceptability of a unit, where activation is roughly analogous to the firing rate of a
neuron. Then we can use simple algorithms to spread activation in parallel among the units in a
network until some are accepted and others are rejected. For example, when activation is spread
among all the units in the network in figure 4.2, the result is that the unit for the hypothesis that
Simpson is a murderer gets activated, and the competing unit about drug dealers gets deactivated. In
this way, a highly simplified neural network can make a complex coherence judgment using parallel
constraint satisfaction. This method of maximizing explanatory coherence has been used to model a
great many examples from law and science, including the theory revisions that occurred in the major
scientific revolutions wrought by Copernicus and Einstein.
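As a concrete illustration, the parallel constraint-satisfaction process can be sketched in a few lines of code. This is a toy network in the spirit of Thagard's ECHO model, not the actual program behind figure 4.2; the unit names, link weights, and decay rate are all illustrative assumptions.

```python
# Units: one clamped source of activation for observed evidence, plus an
# evidence unit and three hypothesis units from the Simpson example.
UNITS = ["EVIDENCE", "dead", "murder", "drugs", "angry"]

# Symmetric links; positive weight = excitatory, negative = inhibitory.
# These weights are illustrative choices, not values from the text.
LINKS = {
    ("EVIDENCE", "dead"): 0.10,  # we tend to accept what we have observed
    ("dead", "murder"):   0.10,  # "Simpson killed Nicole" explains the death
    ("dead", "drugs"):    0.10,  # "drug dealers killed her" also explains it
    ("murder", "angry"):  0.10,  # anger at his ex-wife explains the killing
    ("murder", "drugs"): -0.20,  # competing hypotheses inhibit each other
}

def weight(i, j):
    """Look up the symmetric link weight between two units (0 if unlinked)."""
    return LINKS.get((i, j), LINKS.get((j, i), 0.0))

def settle(steps=200, decay=0.05):
    """Spread activation in parallel until the network settles."""
    acts = {u: 0.01 for u in UNITS}
    acts["EVIDENCE"] = 1.0  # clamped: observed evidence keeps receiving support
    for _ in range(steps):
        new = {"EVIDENCE": 1.0}
        for u in UNITS:
            if u == "EVIDENCE":
                continue
            net = sum(weight(u, v) * acts[v] for v in UNITS if v != u)
            # Positive net input pushes activation toward +1 (acceptance),
            # negative net input toward -1 (rejection); decay pulls toward 0.
            if net > 0:
                new[u] = acts[u] * (1 - decay) + net * (1 - acts[u])
            else:
                new[u] = acts[u] * (1 - decay) + net * (acts[u] + 1)
        acts = new
    return acts

final = settle()
# The murder unit settles at a high positive activation (accepted) while the
# competing drug-dealer unit settles at a negative activation (rejected).
```

Running this, the extra support the murder hypothesis receives from the anger hypothesis breaks the initial symmetry between the two explanations of the death, and the inhibitory link between them then amplifies the gap until one hypothesis is accepted and the other rejected.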