multimodal in that it involves multiple neural populations, including ones dedicated to sensory
representations. The psychologist Larry Barsalou reviews evidence that your concept of a car, for
example, is distributed across areas of the brain that include ones primarily concerned with visual
representations. Hence the mental pictures that you can make of cars may be part of your concept, as
may be the sounds and smells that you associate with cars. Concepts are patterns of activation in
neural populations that can include ones that are produced by, and maintain some of the structure of,
perceptual inputs.
Simulations with artificial neural networks enable us to see how concepts can have properties
associated with sets of exemplars and prototypes. When a neural network is trained with multiple
examples, it forms connections between its neurons that enable it to store the features of those
examples implicitly. These same connections also enable the population of connected neurons to
behave like a prototype, recognizing instances of a concept by how well they match various typical
features rather than by whether they satisfy a strict set of conditions. Thus even simulated
populations of artificial neurons much simpler than real ones in the brain can capture the exemplar
and prototype aspects of concepts.
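A toy simulation makes the point concrete. The sketch below is a deliberately minimal illustration, not any particular published model: the concept, its features, and the numbers are all invented. A single layer of Hebbian connection weights is trained on noisy exemplars of a made-up concept, and the never-presented prototype then evokes the strongest response:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary prototype for a concept such as "car": 20 features.
prototype = rng.choice([1.0, -1.0], size=20)

# Training exemplars are noisy copies of the prototype (each feature
# flipped with probability 0.2), so no exemplar matches it exactly.
def make_exemplar():
    flips = rng.random(20) < 0.2
    return np.where(flips, -prototype, prototype)

exemplars = np.array([make_exemplar() for _ in range(50)])

# Hebbian-style learning: the weights accumulate feature statistics
# across exemplars; the prototype itself is never stored explicitly.
weights = exemplars.mean(axis=0)

def match(pattern):
    # Response of the trained population to an input pattern.
    return float(weights @ pattern)

print("unseen prototype:", match(prototype))        # strongest response
print("typical exemplar:", match(make_exemplar()))  # weaker
print("random pattern:  ", match(rng.choice([1.0, -1.0], size=20)))  # near zero
```

The prototype is stored nowhere in particular; it emerges from the statistics accumulated in the connection weights, which is just what the exemplar and prototype views each capture a piece of.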
It is much harder to understand how concepts as patterns of neural activation can play the
explanatory role required by the view that a crucial role of concepts like drunk is their contribution
to causal explanations. Perhaps the brain manages to use concepts in explanations by embedding them
in rules, such as: If X is drunk, then X stumbles. But what is the neural representation of the
connection between the concepts drunk and stumbles? This structure requires also some kind of
neural representation of if-then, which in this explanatory context involves some understanding of
causality: drunkenness causes stumbling. I will deal with the representation of causality later in this
chapter, but for now the main concern is to try to see how the brain could use neural populations to
represent that there is a relation between the concepts of drunk and stumbles.
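To appreciate the puzzle, notice how trivial the rule is to state symbolically, as in the purely illustrative fragment below; the hard question is what plays the role of if-then once drunk and stumbles are merely patterns of activation:

```python
# Symbolically, the rule is easy to write down; the puzzle is what plays
# the role of "if-then" when the concepts are activation patterns.
# (Names and structure here are illustrative only.)
rule = ("if-then", "drunk", "stumbles")   # antecedent, then consequent

# Order carries the explanatory content: these are different claims.
assert rule != ("if-then", "stumbles", "drunk")

def apply_rule(rule, fact):
    _, antecedent, consequent = rule
    return consequent if fact == antecedent else None

print(apply_rule(rule, "drunk"))   # stumbles
```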
The philosopher and theoretical neuroscientist Chris Eliasmith has been developing interesting
ideas about how brains can deal with such relations. I will omit the technical details, but will try to
give you the flavor of how this works in his computer simulations and how it might work in the brain.
Eliasmith has developed a general method for representing vectors, which are strings of numbers, in
neural populations. We can associate a concept with such a string—for example, in a simple way by
thinking of the numbers as the firing rates (number of electrical discharges per second) of the many
neurons the brain uses for the concept. (Eliasmith's method is more complicated.) Similarly, relations
such as cause and if-then can also have associated vectors. Now for the neat trick: there are
techniques for building vectors out of vectors, so that drunk causes stumbles can get a vector built
out of the vectors for drunk, causes, and stumbles. Crucially, the new vector retains structural
information, maintaining the distinction between “drunk causes stumbles” and “stumbles causes
drunk.” Once this whole relational structure is captured by a vector, we can use Eliasmith's method to
represent it in a population of thousands of neurons. Such neural representations can be transformed
in ways that support complex inferences such as if-then reasoning.
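One binding technique of this kind, used in Eliasmith's semantic pointer framework, is circular convolution, which Tony Plate developed for his holographic reduced representations. The sketch below is a bare numerical illustration: the role names and the dimensionality are my own choices, and it omits entirely Eliasmith's method for realizing such vectors in spiking neurons. It builds vectors for "drunk causes stumbles" and "stumbles causes drunk", shows that they come out quite different, and approximately recovers the cause from the first:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 512  # vector dimensionality (an arbitrary choice)

def bind(a, b):
    # Circular convolution: combines two vectors into one of the same
    # length while preserving which vector filled which role.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(ab, a):
    # Circular correlation: approximate inverse of binding with a.
    return np.real(np.fft.ifft(np.fft.fft(ab) * np.conj(np.fft.fft(a))))

def vec():
    v = rng.normal(0.0, 1.0, D)   # a random vector stands in for a concept
    return v / np.linalg.norm(v)

def sim(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

drunk, stumbles = vec(), vec()    # concept vectors (hypothetical)
cause_role, effect_role = vec(), vec()  # role vectors (hypothetical)

# "drunk causes stumbles": bind each filler to its role, then add.
s1 = bind(cause_role, drunk) + bind(effect_role, stumbles)
# "stumbles causes drunk" swaps the fillers: a very different vector.
s2 = bind(cause_role, stumbles) + bind(effect_role, drunk)

print("s1 vs s2:", round(sim(s1, s2), 3))   # near zero: distinct claims
# Unbinding the cause role from s1 yields something close to drunk.
print("cause ~ drunk:   ", round(sim(unbind(s1, cause_role), drunk), 3))     # clearly higher
print("cause ~ stumbles:", round(sim(unbind(s1, cause_role), stumbles), 3))  # near zero
```

The recovery is noisy rather than exact, which is part of the appeal: cleanup from a noisy result is just the kind of pattern completion that neural populations are good at.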
It is too early to say whether the brain uses anything like Eliasmith's mathematical technique to
build structure into vectors and then translate them into neural activity. But his work suggests one
possible mechanism whereby the brain might combine concepts into more complicated kinds of
relational representations. Hence we have a start at seeing how concepts can function in the
explanatory way suggested by the knowledge view: explanations are built out of complexes of
relations that can be represented in brain patterns. Moreover, because concepts on this view have the
same underlying nature as patterns of activation in neural populations, the knowledge view remains