Coherence in the Brain
In discussing perception, I contrasted the step-by-step, language-based inferences that occur in
speaking or writing with the parallel, often non-linguistic kinds of inference performed by brains.
Because people argue about the best explanation of crimes or scientific experiments, it seems at first
that inferences to hypotheses are made serially and linguistically. But the brain does it differently,
with multimodal representations of hypotheses and causality, and with parallel assessments of
coherence.
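To make the idea of a parallel assessment of coherence concrete, here is a minimal sketch in Python of a toy constraint-satisfaction network of the kind often used to model coherence: hypotheses and evidence are units linked by positive and negative constraints, and all the units adjust their activations together until the network settles. The elements, weights, and update rule are illustrative assumptions, not a claim about actual neural circuitry.

```python
# Elements: two incompatible hypotheses about the murder plus two pieces of evidence.
ELEMENTS = [
    "simpson_killed_nicole",
    "someone_else_killed_nicole",
    "nicole_is_dead",
    "blood_evidence",
]

# Symmetric constraints: positive links tie a hypothesis to evidence it would
# explain; the negative link marks the two hypotheses as incompatible.
# All weights are invented for the illustration.
CONSTRAINTS = {
    frozenset(["simpson_killed_nicole", "nicole_is_dead"]): 0.4,
    frozenset(["simpson_killed_nicole", "blood_evidence"]): 0.4,
    frozenset(["someone_else_killed_nicole", "nicole_is_dead"]): 0.4,
    frozenset(["simpson_killed_nicole", "someone_else_killed_nicole"]): -0.6,
}

def weight(a, b):
    return CONSTRAINTS.get(frozenset([a, b]), 0.0)

def settle(cycles=100, decay=0.05):
    # Evidence units are clamped fully on; hypothesis units start near zero.
    activation = {e: 0.01 for e in ELEMENTS}
    clamped = {"nicole_is_dead", "blood_evidence"}
    for e in clamped:
        activation[e] = 1.0

    for _ in range(cycles):
        # Every unit sums its input from all the others at the same time:
        # the assessment is parallel, not a serial chain of inferences.
        net = {e: sum(weight(e, o) * activation[o] for o in ELEMENTS if o != e)
               for e in ELEMENTS}
        for e in ELEMENTS:
            if e in clamped:
                continue
            a = activation[e] * (1.0 - decay)
            a += net[e] * (1.0 - a) if net[e] > 0 else net[e] * (a + 1.0)
            activation[e] = max(-1.0, min(1.0, a))
    return activation

if __name__ == "__main__":
    for element, a in sorted(settle().items(), key=lambda kv: -kv[1]):
        print(f"{element:28s} {a:+.2f}")
```

When the network settles, the hypothesis that explains more of the clamped evidence ends up with high activation while its incompatible rival is suppressed, even though no unit ever performs a step-by-step verbal argument.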
As common sense suggests, language is an important part of how minds represent hypotheses, but sensory modalities can also contribute. You can represent your conjecture that O. J. Simpson
killed his ex-wife by the sentence “O. J. killed Nicole.” But if you have seen pictures of Simpson, you
can also represent this hypothesis by the dynamic mental image of him slashing Nicole with a knife, a
kind of moving picture in your head. Similarly, scientists can represent the simplified structure of the
atom with the words “The electron revolves around the proton,” but they can also use diagrams or
mental pictures to represent this hypothesis visually. Other sensory images can also help constitute
hypotheses—for example, if you imagine the sound of Nicole screaming.
Whether hypotheses are expressed in words or in sensory images, in the brain they still amount to
the same thing: patterns of activation in neural populations. I already gave an idea of how this might
work when I discussed how concepts like drunk can contribute to explanations. If your brain has
neural populations for representing Simpson, Nicole, and killed, then it can also have patterns of
neural activity that represent the hypothesis that Simpson killed Nicole. Your mental representation of
Simpson can include both verbal information, such as that he used to be a football player, and visual
information, such as your memory of his face; so the patterns of neural activity that represent the
murder hypothesis can combine verbal and visual aspects. How the brain combines patterns of
activity in this way is still poorly understood. But the Eliasmith method of translating concepts and
relations into vectors, and vectors into activities of neural populations, shows the computational
feasibility of representing explanatory hypotheses using the behavior of large numbers of neurons.
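As a rough illustration of how concepts and relations might be translated into vectors, the following Python sketch uses random high-dimensional vectors and circular-convolution binding, one standard way of encoding relational structure in vectors, to form a single pattern for the hypothesis that Simpson killed Nicole and then query it. The dimensionality, role names, and binding scheme are assumptions made for the example, and the further step of mapping such vectors onto the activity of spiking neural populations is omitted.

```python
import numpy as np

D = 512                                # dimensionality chosen for the example
rng = np.random.default_rng(0)

def vec():
    """A random unit vector standing in for a learned concept pattern."""
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution: combines two vectors into one of the same size."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    """Approximate inverse of bind: recovers b from bind(a, b), given a."""
    a_inv = np.concatenate(([a[0]], a[:0:-1]))   # involution of a
    return bind(c, a_inv)

# Concept vectors. A "multimodal" concept is just a blend of feature vectors
# (here, stand-ins for verbal facts and a remembered face) in the same space.
verbal_simpson, visual_simpson = vec(), vec()
simpson = (verbal_simpson + visual_simpson)
simpson = simpson / np.linalg.norm(simpson)
nicole, killed = vec(), vec()
relation, agent, patient = vec(), vec(), vec()   # role vectors

# The hypothesis "Simpson killed Nicole" as one distributed pattern.
hypothesis = bind(relation, killed) + bind(agent, simpson) + bind(patient, nicole)

# Query the pattern: who fills the agent role? The unbound result should
# match the Simpson vector far better than the others.
probe = unbind(hypothesis, agent)
for name, v in [("simpson", simpson), ("nicole", nicole), ("killed", killed)]:
    print(f"{name:8s} {np.dot(probe, v):+.2f}")
```

The point of the sketch is only feasibility: a structured, partly verbal and partly visual hypothesis can be held as a single high-dimensional pattern and still be decomposed when needed.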
With hypotheses and evidence represented by patterns of neural activity, we can build up even
higher-level relational assertions, such as: That Simpson killed Nicole explains why Nicole is dead.
But how shall we understand explains? Much philosophical discussion of explanation tries to
elucidate it in terms of logical relations such as deduction or mathematical ones such as probability,
but there are good philosophical and psychological reasons to maintain that explanations are causal.
That Nicole is dead is explained by Simpson's having killed her in that her death was (hypothetically)
caused by his actions. That combustion is oxidation explains why burning matter gains weight in that
oxidation causes weight gain. But now we have the problem of trying to understand causality.
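Extending the previous sketch (and reusing its vec, bind, and unbind helpers together with the hypothesis vector), the same binding trick can nest structures, giving one pattern for the higher-level assertion that Simpson's killing Nicole explains, by causing, Nicole's death. The role names explains, explanans, and explanandum are invented for the illustration.

```python
# Reuses vec, bind, unbind, relation, patient, nicole, and hypothesis from
# the previous sketch. Role vectors for the higher-level assertion:
explains, explanans, explanandum = vec(), vec(), vec()
dead = vec()
nicole_is_dead = bind(relation, dead) + bind(patient, nicole)   # "Nicole is dead"

# One pattern for: that Simpson killed Nicole explains (causes) Nicole's death.
explanation = (bind(relation, explains)
               + bind(explanans, hypothesis)
               + bind(explanandum, nicole_is_dead))

# Unbinding the explanans role recovers a noisy version of the hypothesis.
recovered = unbind(explanation, explanans)
similarity = np.dot(recovered, hypothesis) / (
    np.linalg.norm(recovered) * np.linalg.norm(hypothesis))
print(round(float(similarity), 2))   # well above chance similarity
```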
I propose the hypothesis that much of our appreciation of causal relations is preverbal and
multimodal, shared with infants and nonhuman animals that lack language. Even at 2½ months, human
babies act surprised when colliding objects do not behave in normal ways, which suggests that they
already possess some elementary understanding of causality. The linguistic and mathematical
limitations of infants require us to look elsewhere for ideas about how they represent causality, which
I conjecture is mainly based on sensory-motor patterns. Babies have patterns of neural activation for
sensory experiences such as seeing a toy or hearing a bell, and they also have neural patterns
corresponding to sequences of motor behaviors such as reaching out and grabbing the toy. It would be
fascinating to work out an account of how neural populations can combine sensory and motor
patterns. For example, when a baby sees a rattle, grabs it, moves it, and then sees the toy in a different
place while hearing it make a sound, there is a repeated pattern of experience that is sensory-motor-sensory.