statement to be true, it has to be necessarily true, true in all possible worlds, like any identity
statement, such as A = A. Hence the possibility of zombies shows that it is not necessarily true that
minds are brains, so it is not true at all. There must be more to consciousness than brain states.
There are several things wrong with the zombie argument. First, it is obviously too strong, for it
rules out many theoretical identifications that have been highly successful in the history of science.
Examples mentioned earlier in this chapter included that water is H2O, that combustion is rapid oxidation,
that heat is motion of molecules, that light is electromagnetic energy, and that lightning is atmospheric electrical
discharge. I can easily imagine that lightning is not electrical—maybe the ancient Greeks were right
that it's just the god Zeus showing his powers. But the conceivability of lightning's not being
electrical does nothing to undermine the mass of evidence, accumulated since the eighteenth century,
that it is. By far the best explanation of this evidence includes the identity hypothesis that lightning
actually is electrical discharge in the atmosphere. As I argued in Chapter 2, thought experiments are
fine for suggesting and clarifying hypotheses, but it is folly to use them to try to justify the acceptance
of beliefs.
Second, the zombie argument assumes the philosophical idea of necessary truths as statements that
must be true, in contrast to ones that are true only of our world. As I argued in Chapter 2, the concept
of necessity is inherently problematic. We cannot simply say that necessity is truth in all possible
worlds, since necessity and possibility are interdefinable: something is possible if its negation is not
necessarily false. Nor can we define possibility in terms of conceivability, since what is conceivable
at any time is not absolute, but merely a contingent function of the available concepts and beliefs. It is
also not effective to say that something is possible if it is consistent with the laws of logic, since there
is much debate concerning what the laws of logic are. I described in Chapter 2 how even the
principle of noncontradiction, that no statement can be both true and false, has been disputed. Hence
the claim that such identity statements as “minds are brains” must be necessarily true is ill specified
and should not be used to challenge a claim for which there is substantial evidence. Chapter 5 will
provide more specific evidence that emotional conscious experiences are brain processes, along with
a theory that makes it clear why philosophical intuitions should not be mistaken for evidence.
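The interdefinability of necessity and possibility noted above can be put in standard modal notation, which makes the circularity explicit: each operator is defined only in terms of the other.

```latex
\Diamond p \;\equiv\; \lnot \Box \lnot p
  \qquad \text{(possible = not necessarily false)}
\Box p \;\equiv\; \lnot \Diamond \lnot p
  \qquad \text{(necessary = not possibly false)}
```

Neither definition breaks out of the modal circle, which is why appealing to "truth in all possible worlds" does not by itself explain what necessity is.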
Some philosophers think that ascription to the brain of psychological properties such as
consciousness is incoherent—it simply makes no sense. Well, it may not make sense if your
conceptual scheme is mired in dualism, but understanding the mind requires willingness to develop
and consider the evidence for new conceptual schemes. Just as the Copernican, Darwinian, and other
scientific revolutions required gradual appreciation of the explanatory force of new conceptual
schemes, so the Brain Revolution requires recognition of the explanatory gains that become available
when the neural mechanisms for mental processes such as perception are identified. The best
response to people who say that they just can't imagine how the mind could be the brain is: try harder.
Overcoming the compelling illusion that the mind is nonmaterial is not easy, but one can succeed in
doing so by acquiring sufficient understanding of neural mechanisms for thought and behavior.
Mind-brain identity is also challenged by nondualists who think that the development of computers
reveals the hypothesis that minds are brains to be much too narrow. The possibility of artificial
intelligence, which is the construction of computers capable of reasoning and learning, suggests that
we should identify mental processes more generally with computational processes that can occur, not
just in brains, but also in machines made out of silicon chips or other kinds of hardware. This view is
called functionalism, because it says that mental states are inherently functional, providing causal
connections between inputs and outputs in ways that produce intelligent behaviors. Computers and
other machines, or maybe even extraterrestrial organisms, can have such functional states without
having brains, so identification of mind and brain is a mistake. It is mental software that makes minds
work, and the particular hardware on which it runs is not very important. I found this computational
view appealing when I first got interested in cognitive science in 1978, but came to doubt it in the late
1980s when I began to work on neural network models, and even more in the 1990s when I started
research on emotion.
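The functionalist claim that "mental software" matters and hardware does not can be loosely sketched in programming terms: two implementations with different internal mechanisms can realize exactly the same input-output role. The class and method names below are hypothetical illustrations of multiple realizability, not anything from the text.

```python
from abc import ABC, abstractmethod

class Mind(ABC):
    """Functionalist stance: a mind is characterized by its causal
    input-output role, not by the substrate that realizes it.
    (Hypothetical illustration; the names are mine, not the author's.)"""

    @abstractmethod
    def add(self, a: int, b: int) -> int:
        """Stand-in for some cognitive capacity, here trivially: addition."""

class BrainMind(Mind):
    """One physical realization: slow, step-by-step accumulation."""
    def add(self, a: int, b: int) -> int:
        total = a
        for _ in range(b):  # increment one step at a time
            total += 1
        return total

class SiliconMind(Mind):
    """A different physical realization of the same functional role."""
    def add(self, a: int, b: int) -> int:
        return a + b  # direct hardware addition

def same_function(m1: Mind, m2: Mind, a: int, b: int) -> bool:
    # Functionally identical despite different internal mechanisms.
    return m1.add(a, b) == m2.add(a, b)
```

On the functionalist view, `BrainMind` and `SiliconMind` have the same mental state type because they occupy the same causal role; the author's objection is not that this picture is incoherent, but that so far only the brain-based realization is known to exist.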
My first response to functionalism is that the mere possibility of intelligence supported by physical
systems other than brains is not sufficient to undercut the mind-brain identity hypothesis. Despite
decades of search for extraterrestrial intelligence, we have no evidence that there are minds
anywhere in the universe except on our meager planet. If such evidence arises and we can discern
anything at all about the nature of intelligent beings other than humans, I will be eager to see what can
be learned about their thinking processes. If their intelligence derives from physical systems very
different from our brains, I will be happy to retreat to the more modest hypothesis that human minds
are brains.
Similarly, if artificial intelligence substantially surpasses its rather modest accomplishments of the
past five decades, I will be willing to consider the possibility that there are multiple kinds of
minds, including the human variant that we can identify with brains and whatever machine mentalities
arise. Computer intelligence has had some remarkable successes in areas such as game playing,
robotics, and planning, but still falls far short of full human-level intelligence. Hence the idea that a
full range of mental processes can be implemented in many different kinds of physical processes is
still more a thought experiment than a piece of evidence that undermines the identification of mind and
brain.
In the first few decades of modern research in cognitive science, from the 1950s to the 1970s, it
seemed that progress in explaining the mind would come primarily from describing thought in terms
of computational processes independent of their neural underpinnings. But as I sketched earlier in this
chapter and will show in more detail in Chapters 4–8, much of the most exciting current progress in
cognitive science combines experimental studies of the brain with computational models of how it
works. This research suggests that mental processes are both neural and computational, combining the
basic insight of functionalism with the mind-brain identity theory.
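The idea that mental processes are both neural and computational is the working assumption behind the neural network models mentioned above. As a minimal, generic sketch (a textbook artificial neuron, not any specific model from this book), a single unit computes a weighted sum of its inputs and passes it through a nonlinear activation:

```python
import math

def neuron(inputs, weights, bias):
    """A minimal artificial neuron: weighted sum of inputs plus bias,
    squashed by a sigmoid activation into the range (0, 1).
    Generic textbook sketch; the numbers used with it are arbitrary."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))
```

Networks of such units, with weights adjusted by learning, are computational models that are nonetheless constrained by and compared against experimental data about real neurons, which is the combination of functionalism and mind-brain identity the paragraph above describes.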
Some current critics of mainstream cognitive science argue that its computational understanding of
mental processes has been fundamentally wrong because it ignores the nature of mind as embodied,
extended, and situated. Minds are embodied in that our thinking depends heavily on the ways our
bodies enable us to perceive and act in particular ways, not on abstract information-processing
capabilities. Thinking is extended and situated in that it occurs in ways heavily dependent on
interactions with our physical and social environments. Minds are part of the physical and social
worlds, not disembodied entities like desktop digital computers that just sit and crunch numbers. I
agree that minds are embodied, extended, and situated, but these claims pose no problem for mind-brain
identity, as brains are obviously embodied, extended, and situated too, in ways that will be
made clear in the chapters that follow. Particular ways that our bodies enable our brains to know
reality and to use emotion to appreciate its significance and relevance will be discussed in Chapters
4 and 5. We will see that the embodied and situated aspects of brains are compatible with an
understanding of their processes as representational and computational.