Concepts
There is much more to knowledge of reality than sensory experience. Human discourse is full of
concepts, including knowledge and reality, that are not directly tied to what we can see, touch, taste,
smell, or hear. Philosophers, psychologists, and now neuroscientists attempt to figure out the nature of
such concepts. For Plato, concepts were abstract entities he called the forms, existing in some
heavenly realm graspable by souls. In contemporary cognitive science, concepts are mental
representations, which the previous chapter implies are brain representations. A major current
research problem is to figure out how patterns of neural firing play all the roles needed to explain the
many cognitive uses of concepts.
Greg Murphy's Big Book of Concepts provides a thorough review of current psychological
theories. According to the classical theory, still assumed by many philosophers and nonacademics,
we can strictly define concepts by giving their necessary and sufficient conditions. For example, the
concept of a triangle consists of the definition that a figure is a triangle if and only if it has exactly
three sides. Unfortunately, few concepts outside mathematics are amenable to such strict definitions.
This difficulty applies not only to abstract concepts like reality, but also to many everyday concepts
such as chair and cat. If you don't believe this, try to give a rigorous definition of a chair that includes
everything you want without arbitrary exclusions: must it have a back, legs, or what? Nevertheless, a
full theory of concepts would need to make room for those rare concepts, such as mathematical
ones, that actually are definable.
In the 1970s, some philosophers, psychologists, and computer scientists advocated a more relaxed
view of concepts as prototypes, which are mental representations that specify typical rather than
defining properties. Whereas a definition attempts to list those properties possessed by all and only
chairs, a prototype just includes features that are typical of chairs. Prototypes are more flexible than
definitions, and there are experimental reasons to think that they give a better account of the
psychology of concepts. However, they may not be flexible enough: some psychologists have
claimed that people actually store concepts not as prototypes but as sets of examples, so that your
concept of a chair consists of stored representations of many different chairs. This claim is called the
exemplar theory of concepts.
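The contrast between the two theories can be made concrete with a toy sketch. Everything below is invented for illustration: the feature lists and the crude matching measure do not correspond to any particular psychological model.

```python
# Toy contrast between prototype and exemplar classification.
# Hypothetical binary features: [has_back, has_legs, is_soft, seats_one]
chairs = [
    [1, 1, 0, 1],  # kitchen chair
    [1, 1, 1, 1],  # armchair
    [1, 0, 1, 1],  # beanbag-style chair (no legs)
]

def similarity(a, b):
    """Count of matching features (higher = more similar)."""
    return sum(1 for x, y in zip(a, b) if x == y)

# Prototype theory: compare a new item to one averaged summary.
prototype = [round(sum(col) / len(chairs)) for col in zip(*chairs)]

# Exemplar theory: compare a new item to every stored example.
def exemplar_score(item):
    return max(similarity(item, c) for c in chairs)

stool = [0, 1, 0, 1]  # no back
print(similarity(stool, prototype))  # match against the single summary -> 2
print(exemplar_score(stool))         # match against the best stored example -> 3
```

The stool scores higher under the exemplar measure because it resembles one particular stored chair more than it resembles the averaged summary, which is the kind of effect that motivates exemplar theorists.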
The other major account of concepts currently discussed by psychologists is called the knowledge
view or sometimes the theory-theory. This view points to the large role that concepts play in
providing explanations. For example, your concept drunk helps in explaining the behavior of people
who have had too much alcohol, as when you say that Fred crashed his car because he was drunk.
On this view, a major part of a concept is not just its defining characteristics, its typical conditions, or its set
of associated examples, but the causal relations it identifies between things. Another complication in
recent experimental work on concepts is the suggestion that many concepts are inherently multimodal,
having a large sensory component such as visual, tactile, or auditory, not just a verbal one. For
example, your concept of a chair may be highly visual if it involves pictorial representations derived
from previous perceptions of chairs. Your concept of a drunk may be partly olfactory if it includes the
smell of alcohol on a person's breath.
Although psychological evidence counts against the classical account of concepts as strictly
definable, it does not suffice to enable us to choose definitively among prototype, exemplar,
knowledge, and multimodal theories. But I see no reason to take these as competing views; rather, I
prefer to interpret them as capturing various aspects of how concepts are represented in the brain.
Some concepts like mathematical ones may even be definable. In chapter 3 I suggested that concepts
and other mental representations are patterns of neural activity. What I need to show now is that the
brain-based view of concepts can support all these diverse aspects of concepts.
It is not hard to see how multimodal, exemplar, and prototype characteristics of concepts can be
supported by neural populations. A concept does not have to involve activation in just one neural
population in an isolated area of the brain restricted to language processing. A concept can be
multimodal in that it involves multiple neural populations, including ones dedicated to sensory
representations. The psychologist Larry Barsalou reviews evidence that your concept of a car, for
example, is distributed across areas of the brain that include ones primarily concerned with visual
representations. Hence the mental pictures that you can make of cars may be part of your concept, as
may be the sounds and smells that you associate with cars. Concepts are patterns of activation in
neural populations that can include ones that are produced by, and maintain some of the structure of,
perceptual inputs.
Simulations with artificial neural networks enable us to see how concepts can have properties
associated with sets of exemplars and prototypes. When a neural network is trained with multiple
examples, it forms connections between its neurons that enable it to store the features of those
examples implicitly. These same connections also enable the population of connected neurons to
behave like a prototype, recognizing instances of a concept in accord with their ability to match
various typical features rather than having to satisfy a strict set of conditions. Thus even simulated
populations of artificial neurons much simpler than real ones in the brain can capture the exemplar
and prototype aspects of concepts.
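A minimal simulation can illustrate this point. The sketch below is far simpler than both real neural populations and the networks used in actual simulations: the feature patterns and the simple Hebbian learning rule are my own illustrative choices, not any published model.

```python
# Toy Hebbian autoassociator: training on exemplars yields a network
# that completes partial cues toward the exemplars' typical features.
exemplars = [
    [1, 1, 1, -1],   # hypothetical feature patterns (+1 present, -1 absent)
    [1, 1, -1, 1],
    [1, 1, 1, 1],
]
n = len(exemplars[0])

# Hebbian learning: strengthen connections between co-active features.
W = [[0.0] * n for _ in range(n)]
for p in exemplars:
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i][j] += p[i] * p[j] / len(exemplars)

def recall(cue):
    """One update step: each unit sums its weighted input and thresholds."""
    return [1 if sum(W[i][j] * cue[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

# A partial cue (last two features unknown) is completed toward the
# pattern the exemplars mostly share -- the network behaves like a prototype.
print(recall([1, 1, 0, 0]))  # -> [1, 1, 1, 1]
```

No prototype was ever stored explicitly; the typical pattern emerges from connections trained only on individual examples, which is the point made in the paragraph above.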
It is much harder to understand how concepts as patterns of neural activation can play the
explanatory role required by the view that a crucial role of concepts like drunk is their contribution
to causal explanations. Perhaps the brain manages to use concepts in explanations by embedding them
in rules, such as: If X is drunk, then X stumbles. But what is the neural representation of the
connection between the concepts drunk and stumbles? This structure requires also some kind of
neural representation of if-then, which in this explanatory context involves some understanding of
causality: drunkenness causes stumbling. I will deal with the representation of causality later in this
chapter, but for now the main concern is to try to see how the brain could use neural populations to
represent that there is a relation between the concepts of drunk and stumbles.
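Before asking how neurons could encode such a rule, it helps to see what the rule looks like in bare symbolic form. The dictionary-based representation below is purely illustrative; no claim is made that the brain stores anything like it.

```python
# Hypothetical symbolic rendering of an explanatory rule:
# if X is drunk, then X stumbles.
rules = {"drunk": "stumbles"}

# A fact base: (individual, property) pairs.
facts = {("Fred", "drunk")}

def infer(facts, rules):
    """Forward-chain one step: apply every rule whose antecedent holds."""
    derived = {(x, rules[prop]) for (x, prop) in facts if prop in rules}
    return facts | derived

print(infer(facts, rules))  # Fred is drunk, so Fred stumbles
```

The symbolic version makes the challenge vivid: the rule has internal structure (an antecedent, a consequent, and a directed connection between them), and it is exactly this structure that a neural account must somehow capture.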
The philosopher and theoretical neuroscientist Chris Eliasmith has been developing interesting
ideas about how brains can deal with such relations. I will omit the technical details, but will try to
give you the flavor of how this works in his computer simulations and how it might work in the brain.
Eliasmith has developed a general method for representing vectors, which are strings of numbers, in
neural populations. We can associate a concept with such a string—for example, in a simple way by
thinking of the numbers as the firing rates (number of electrical discharges per second) of the many
neurons the brain uses for the concept. (Eliasmith's method is more complicated.) Similarly, relations
such as cause and if-then can also have associated vectors. Now for the neat trick: there are
techniques for building vectors out of vectors, so that drunk causes stumbles can get a vector built
out of the vectors for drunk, causes, and stumbles. Crucially, the new vector retains structural
information, maintaining the distinction between “drunk causes stumbles” and “stumbles causes
drunk.” Once this whole relational structure is captured by a vector, we can use Eliasmith's method to
represent it in a population of thousands of neurons. Such neural representations can be transformed
in ways that support complex inferences such as if-then reasoning.
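Eliasmith's actual method is more sophisticated, but the flavor of the vector-building trick can be conveyed with circular convolution binding, the operation from Tony Plate's holographic reduced representations that Eliasmith's approach draws on. Everything below is a stand-in: the vectors are random, and nothing here models real neural firing rates.

```python
import random

random.seed(0)
D = 256  # dimensionality of each concept vector

def rand_vec():
    """Random vector standing in for a concept's activity pattern."""
    return [random.gauss(0, 1 / D ** 0.5) for _ in range(D)]

def cconv(a, b):
    """Circular convolution: binds two vectors into one of the same size."""
    return [sum(a[j] * b[(i - j) % D] for j in range(D)) for i in range(D)]

def ccorr(a, b):
    """Circular correlation: approximately unbinds what cconv bound."""
    return [sum(a[j] * b[(i + j) % D] for j in range(D)) for i in range(D)]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# Hypothetical concept and role vectors.
drunk, stumbles, agent, effect = (rand_vec() for _ in range(4))

# "drunk causes stumbles": bind each concept to its role, then superpose.
s1 = add(cconv(agent, drunk), cconv(effect, stumbles))
# "stumbles causes drunk": same ingredients, roles swapped.
s2 = add(cconv(agent, stumbles), cconv(effect, drunk))

# The two structures get clearly different vectors despite sharing parts,
print(round(cosine(s1, s2), 2))
# and unbinding a role from s1 recovers roughly the right concept:
decoded = ccorr(agent, s1)
print(round(cosine(decoded, drunk), 2), round(cosine(decoded, stumbles), 2))
```

The key properties promised in the text are both visible: the combined vector distinguishes "drunk causes stumbles" from "stumbles causes drunk", and the structure remains decodable, so the binding is not a mere blend of its parts.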
It is too early to say whether the brain uses anything like Eliasmith's mathematical technique to
build structure into vectors and then translate them into neural activity. But his work suggests one
possible mechanism whereby the brain might combine concepts into more complicated kinds of
relational representations. Hence we have a start at seeing how concepts can function in the
explanatory way suggested by the knowledge view: explanations are built out of complexes of
relations that can be represented in brain patterns. Moreover, because concepts on this view have the
same underlying nature as patterns of activation in neural populations, the knowledge view remains
compatible with prototype, exemplar, and multimodal views of concepts. In those rare cases where
strict definitions of concepts are available, as in a triangle is a figure with exactly three sides, the necessary
and sufficient conditions can be represented by relations between concepts that can be captured by
vectors of vectors and then modeled as neural activity.
Given my simplified presentation, one might worry that the account of concepts as patterns of neural
activation is so vague and general that it would be compatible with any view of concepts at all, thus
lacking any content. We can overcome this worry first by looking at the detailed mathematical
analyses and computational simulations that are already available to show how artificial neural
populations can have the desired properties required for modeling exemplars, prototypes, and
relations among concepts. Second, the account I have been offering is strongly incompatible with at
least one currently prominent view of concepts, the conceptual atomism of Jerry Fodor. According to
atomism, lexical concepts (ones for which we have words) have no semantic structure at all and get
their meaning only from their relation with the world.