The philosopher and theoretical neuroscientist Chris Eliasmith has been developing interesting ideas about how brains can deal with such relations. I will omit the technical details, but will try to give you the flavor of how this works in his computer simulations and how it might work in the brain. Eliasmith has developed a general method for representing vectors, which are strings of numbers, in neural populations. We can associate a concept with such a string—for example, in a simple way by thinking of the numbers as the firing rates (number of electrical discharges per second) of the many neurons the brain uses for the concept. (Eliasmith's method is more complicated.) Similarly, relations such as cause and if-then can also have associated vectors. Now for the neat trick: there are techniques for building vectors out of vectors, so that drunk causes stumbles can get a vector built out of the vectors for drunk, causes, and stumbles. Crucially, the new vector retains structural information, maintaining the distinction between “drunk causes stumbles” and “stumbles causes drunk.” Once this whole relational structure is captured by a vector, we can use Eliasmith's method to represent it in a population of thousands of neurons. Such neural representations can be transformed in ways that support complex inferences such as if-then reasoning.
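The vector-building trick described above can be made concrete with circular convolution, the binding operation used in holographic reduced representations and in Eliasmith's semantic pointer framework. The sketch below is illustrative, not Eliasmith's actual method: the dimensionality, the random choice of concept vectors, and the role names (`agent`, `relation`, `patient`) are all assumptions made for the demo. It shows the two properties the text emphasizes: "drunk causes stumbles" and "stumbles causes drunk" get distinct vectors of the same size, and the parts can be approximately recovered from the whole.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # dimensionality of every vector (illustrative choice)

def normalize(v):
    return v / np.linalg.norm(v)

# Random unit vectors stand in for concepts and for the "slots" of a relation.
drunk, causes, stumbles = (normalize(rng.standard_normal(D)) for _ in range(3))
agent, relation, patient = (normalize(rng.standard_normal(D)) for _ in range(3))

def bind(a, b):
    """Circular convolution: combines two D-vectors into one D-vector."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(s, a):
    """Approximately recover b from bind(a, b), via a's involution a_inv."""
    a_inv = np.roll(a[::-1], 1)  # a_inv[i] = a[-i mod D]
    return bind(s, a_inv)

# Build one vector per proposition by binding each filler to its role
# and summing; the sum has the same dimensionality as its parts.
s1 = bind(agent, drunk) + bind(relation, causes) + bind(patient, stumbles)
s2 = bind(agent, stumbles) + bind(relation, causes) + bind(patient, drunk)

# Structure survives: the two propositions are far from identical...
print("similarity of the two propositions:", np.dot(normalize(s1), normalize(s2)))

# ...and unbinding the agent role from s1 yields something much closer
# to "drunk" than to "stumbles".
recovered = unbind(s1, agent)
print("vs drunk:   ", np.dot(normalize(recovered), drunk))
print("vs stumbles:", np.dot(normalize(recovered), stumbles))
```

Binding by circular convolution is what keeps the word-order information: because `bind(agent, drunk)` and `bind(agent, stumbles)` are nearly uncorrelated vectors, swapping who does what to whom changes the resulting sum, which is exactly the distinction the text says must be maintained.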