Our studies show that people are often unable to escape
the pull of their prior attitudes and beliefs, which guide the
processing of new information in predictable and sometimes
insidious ways. But what does this mean for citizens
in a democracy? From one perspective, the average citizen
would appear to be both cognitively and motivationally
incapable of fulfilling the requirements of rational behavior
in a democracy. Far from the rational calculator
portrayed in Enlightenment prose and spatial equations,
homo politicus would seem to be a creature of simple likes
and prejudices that are quite resistant to change. Can this
possibly be rational? The normative question, it seems,
turns on whether the processing of new information and
the updating of one's attitudes need to be independent of
one's priors.
From one point of view, with which we are sympathetic,
it can be argued that the attitude strength effect and
disconfirmation bias are rational responses to attitude-relevant
information; it is perfectly reasonable to give
heavy weight to one's own carefully constructed attitudes.
This perspective, which would substitute the word “skepticism”
wherever “bias” appears in this article, suggests
that beliefs and attitudes may be thought of metaphorically
as possessions to be protected (Abelson and Prentice
1989). This belief, this feeling, is mine! Like other possessions,
it was acquired at a price, paid in the time and
cognitive resources spent forming and updating our impressions.
Many political attitudes, especially those linked
to identity (Conover 1988), are worthy of such defense in their own right. To the extent one’s attitude reflects considerable
prior thought, it may well be more trustworthy
than new information, especially if—as is so often the
case in the political realm—that new information reflects
the strategic behavior of political opponents. Simply put,
if one thinks (more pointedly, feels) that the veracity of
the evidence is dubious, the opposition is wrong, or the
media hostile, then why pay them heed?
From another perspective, with which we also have
sympathy, Bayesian updating requires independence between
priors and new evidence (Evans and Over 1996;
Green and Shapiro 1994; but see Gerber and Green 1998).
In the extreme, if one distorts new information so that it
always supports one’s priors, one cannot be rationally responsive
to the environment; similarly, manipulating the
information stream to avoid any threat to one’s priors is
no more rational than the proverbial ostrich.
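The independence requirement can be stated concretely with Bayes' rule (a standard formulation; the notation here is ours, not drawn from the works cited). For a hypothesis $H$ and a new piece of evidence $E$:

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}
```

Rational updating requires that the likelihood term $P(E \mid H)$, which captures how diagnostic one takes the evidence to be, not itself be a function of the prior $P(H)$. If the perceived likelihood is inflated for congenial evidence and deflated for uncongenial evidence, the posterior is pulled back toward the prior regardless of what the environment supplies.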
For many citizens, perhaps, the bias may be less extreme,
but there are certainly ideologues and bigots who
fit both of these descriptions. Luker (1984), for example,
found that attitudes among abortion activists are so
linked to their beliefs and feelings about sexuality, gender,
religion, and family that these activists have become completely incapable
of entertaining points of view that challenge their
own. Sears and Whitney (1973) found similarly stubborn
adherence to prior attitudes among those watching
a political debate. Our own evidence, presented above,
presents a compelling case that motivated biases come to
the fore in the processing of political arguments even for
nonzealots.
On the other hand, and contrary to the intuitions of
normative theory (but consistent with the predictions of
cognitive psychology), we do find that those with weak
and uninformed attitudes show less bias in processing political
arguments. This finding may tempt the conclusion
that objectivity and tolerance rest more on ignorance and
apathy than on the elite skills of ideal citizens. Perhaps we
have been looking for rational citizenship in all the wrong
places, and it is the great unwashed who save democracy!
Provocative though it may be, this interpretation does not
stand up to normative, theoretical, or empirical scrutiny.
First, we find no empirical evidence of principled moderation
among the bottom or middle thirds of our sample,
whose extremity scores were statistically indistinguishable
from those of the most sophisticated participants.
Second, our theory predicts less bias for unsophisticated
and uncommitted respondents not because they possess a
greater sense of evenhandedness, but rather because they
lack the motivation and ability to engage in attitude defense.
Finally, this same lack of motivation and knowledge
undermines the ability to apply individual preferences
to public policy that underlies a normatively secure democracy, so it would be a dysfunctional objectivity at
best.
If we push either side of the rationality argument too
strongly, we end up playing the clown. So how do we reconcile
these positions? Skepticism is valuable, and attitudes
should have inertia. But skepticism becomes bias when it
grows unreasonably resistant to change, and especially
when it leads one to avoid information altogether, as with the confirmation
bias. And polarization seems to us difficult to
square with a normatively acceptable model (especially
since the supporters and opponents in the policy debate
will diverge after processing exactly the same information).
Moreover, up to some tipping point for persuasion,
our model predicts polarization even from unbalanced
and counterattitudinal streams of information (see also
Rahn, Aldrich, and Borgida 1993; Redlawsk 2001).
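The divergence mechanism can be illustrated with a toy simulation of our own construction (it is not the authors' actual model; the update rule, weights, and `bias` parameter are illustrative assumptions). Two agents with mildly opposed priors read an identical, perfectly balanced stream of arguments, but each discounts uncongenial arguments, and both end up more extreme than they began:

```python
# Toy sketch of biased assimilation producing polarization from a
# balanced information stream. All parameters are illustrative.

def update(attitude, argument, bias=0.8):
    """Shift attitude toward an argument, discounting uncongenial ones.

    attitude, argument: signed strengths in [-1, 1].
    Congenial arguments (same sign as the attitude) receive full
    weight; uncongenial ones are discounted by `bias`.
    """
    weight = 0.1 if attitude * argument >= 0 else 0.1 * (1 - bias)
    return attitude + weight * (argument - attitude)

# A perfectly balanced stream: alternating pro (+1) and con (-1) arguments.
stream = [+1.0, -1.0] * 20

pro, con = 0.2, -0.2  # mildly opposed priors
for arg in stream:
    pro = update(pro, arg)
    con = update(con, arg)

# Despite identical, balanced input, both agents drift toward the
# extremes in opposite directions.
print(round(pro, 3), round(con, 3))
```

The driver is the asymmetric weighting alone: because each agent moves readily toward congenial arguments and barely at all toward uncongenial ones, the same alternating stream pushes the two attitudes apart rather than together.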
How we determine the boundary line between rational
skepticism and irrational bias is a critical normative
question, but one that empirical research may not be
able to address. Research can explore the conditions under
which persuasion occurs (as social psychologists have
for decades), but it cannot establish the conditions under
which it should occur. It is, of course, the latter question
that needs answering if we are to resolve the controversy
over the rationality of motivated reasoning.