We begin with a grim example. In the 2002 Colombian presidential election campaign, the leading candidate was Alvaro Uribe. Uribe attracted much attention in the campaign by adopting a hawkish stance vis-à-vis leftist insurgents, and he was alleged to have ties to right-wing paramilitary groups.
Some leaders of these paramilitary groups threatened to “take reprisals against entire communities if Uribe fails to carry them” (Guillermoprieto 2002: 54).
(Uribe went on to win the elections decisively, in the first round of voting.) Hearing about the threat, a voter in a community where paramilitaries were active might well use a thumbnail version of expected-value reasoning to think about how to vote. She might say to herself: if Uribe doesn’t win in my community, the paramilitaries might kill me or a family member. To avoid such an outcome, a majority of people in this community must vote for Uribe.
The probability of his carrying my community goes up (by a very small amount) if I vote for Uribe. Even though the increment by which my vote raises that probability is tiny, when I multiply it by the utility to me of avoiding being killed or having a family member killed, voting for Uribe becomes worth it to me.
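The voter's thumbnail calculation can be sketched as follows. All numbers here are purely illustrative assumptions, not drawn from the text: the point is only that a minuscule probability increment multiplied by an enormous stake can still exceed the cost of voting.

```python
def expected_benefit(prob_increment, utility_of_outcome):
    """Expected gain from voting: how much my single vote raises the
    probability of the outcome, times that outcome's utility to me."""
    return prob_increment * utility_of_outcome

# Hypothetical values (in arbitrary utility units):
prob_increment = 1e-6          # my vote's tiny effect on Uribe carrying my community
utility_avoid_reprisal = 1e9   # avoiding a reprisal killing is worth an enormous amount
cost_of_voting = 1.0           # time, effort, risk of going to the polls

benefit = expected_benefit(prob_increment, utility_avoid_reprisal)
print(benefit)                   # 1000.0 under these assumed numbers
print(benefit > cost_of_voting)  # True: the huge stake outweighs the tiny increment
```

Under these assumed magnitudes the expected benefit (1,000 units) dwarfs the cost of voting, which is the structure of the reasoning the passage attributes to the voter.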