Causal Inference and Decision Making
Decisions inevitably require considering how actions are causally related to outcomes. The reason I choose to drink coffee is that I prefer the causal effects of drinking to those of not drinking. Those effects might be physiological or social, they might have to do with my thought process, or they might be some combination of these. What I choose to do is a cause, as is the environment in which I choose to do it, and the benefits and costs I obtain as a result are effects. So the central ability required to make a decision is the ability to reason from causes to effects. Surprisingly, the canonical model of decision making, expected utility theory, is not concerned with causal relations. It tells us to make choices that maximize the probability of achieving the most valued outcomes, but it is silent about how to determine those probabilities. The counsel to maximize expected utility suggests, on the face of it, that all you need to know to make rational decisions is the likelihood of events and your own preferences. But the probabilities that are relevant to decision making must reflect the likelihood of outcomes given that the relevant options are actively chosen, not merely observed. Determining these probabilities in general requires a causal model and the ability to reason about it.
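The gap between observing a choice and actively making it can be sketched numerically. In the toy Python model below (all variables and probabilities are hypothetical illustrations, not taken from this text), a confounder — a craving — raises both the chance of drinking coffee and the chance of feeling alert. Merely conditioning on the observation "this person drank coffee" then overstates coffee's benefit relative to the interventional probability obtained by severing the craving-to-coffee link and averaging over the confounder (the standard back-door adjustment):

```python
# Hypothetical numbers for a toy causal model:
#   craving -> coffee,  craving -> alert,  coffee -> alert

# P(craving)
p_craving = 0.5
# P(coffee | craving): craving makes drinking much more likely
p_coffee_given = {True: 0.9, False: 0.2}
# P(alert | coffee, craving): alertness depends on both
p_alert = {(True, True): 0.9, (True, False): 0.7,
           (False, True): 0.5, (False, False): 0.3}

def p_alert_observed(coffee):
    """P(alert | coffee): condition on merely observing the choice."""
    num = den = 0.0
    for craving in (True, False):
        pc = p_craving if craving else 1 - p_craving
        joint = pc * (p_coffee_given[craving] if coffee
                      else 1 - p_coffee_given[craving])
        num += joint * p_alert[(coffee, craving)]
        den += joint
    return num / den

def p_alert_do(coffee):
    """P(alert | do(coffee)): intervene on coffee, cutting the
    craving -> coffee arrow, then average over the confounder."""
    return sum((p_craving if craving else 1 - p_craving)
               * p_alert[(coffee, craving)]
               for craving in (True, False))

# With these numbers, the observational estimate exceeds the
# interventional one, because craving inflates the association.
print(p_alert_observed(True), p_alert_do(True))
```

A decision maker who plugs the observational probability into an expected-utility calculation is crediting the choice with effects that actually flow from the confounder; only the interventional quantity answers "what happens if I choose this?".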
Causation is at the heart of decision making in another sense: the choice itself must be caused. An understanding of the causes of choice is exactly what those of us who study how people make decisions are pursuing. But the issue doesn't loom large only for scientists; it is critical for decision makers too. For instance, it bears on social choices (have I really chosen to be in this relationship?), on decisions about how to expend effort (is this task intrinsically interesting, or am I doing it just because somebody is watching me?), and it is central to issues of addiction and control (is my choice to drink coffee a product of my will, my craving, or something else?).
In general, a key problem for decision makers is to know whether their choices should be conceived as interventions that are produced by their own independent agency (free will) or whether they should be conceived as observations produced by forces outside of the decision maker's control, be they external – e.g., parental dictates – or internal – e.g., an inexorable craving. Observations differ from interventions in that the decision maker has no privileged access to his or her choice; he or she is like someone else observing the choice and making corresponding inferences. Of course, both types of cause could contribute to choice simultaneously. Our decisions might be – and presumably often are – jointly determined by our own will and the constraints offered by our unconscious selves and by the environment.
This problem of how to conceive of choice has far-reaching consequences. Beyond what has already been discussed, it emerges when one tries to assign moral responsibility for an action, because one is presumably not responsible for an action over which one has no control. It emerges in the study of hypnosis and dissociative disorders because they presumably reduce the role of the agent's free will. It emerges in cases of self-handicapping, as the decision maker tries to minimize responsibility for an outcome by reducing the ability to control actions. And illusory conceptions of control have been created in the laboratory, for example by Wegner (2002; Wegner & Wheatley, 1999), who put people in situations in which they believed they had control over an outcome but in fact did not. The problems caused by the ambiguity of the causes of choice have also plagued philosophy because they provide the basis for many paradoxes, especially those that concern free will. How to understand the determinants of our choices is a deep and fundamental issue.
The issue comes to a head in the study of self-deception. There are rampant debates in philosophy about whether in fact people ever really do deceive themselves (e.g., Mele, 1997; Walker, 2010). But what is at least a weak form of self-deception has been demonstrated in the laboratory (Quattrone & Tversky, 1984; Sloman, Hagmayer, & Fernbach, 2010), namely cases where people's beliefs contradict their actions (see Paulhus, 2008). The logic of the demonstrations is illustrated by Sloman et al. (2010), who had participants play a video game. One group was told that more intelligent people play faster, and a second group was told that more intelligent people play more slowly (elaborate justifications were offered). Participants' speed of play supported their desired inference – that they were intelligent – in that the first group played faster than the second. Such changes in behavior due to a conceptual belief imply that people willfully changed their performance; they intervened. However, by doing so they rendered their actions non-diagnostic (to the extent their speed was determined by willful action, it was not determined by their intelligence). Sloman et al. (2010) refer to cases like this, in which people believe they have less control over their actions than they do, as "diagnostic self-deception." They also posit a different kind of self-deception that underlies addiction, which they call "interventional self-deception." In such cases, people believe they have more control over their actions than in fact they do, just as teenagers who smoke tend to believe that they can quit at any time (Slovic, 2001). We are currently investigating the cognitive and neural bases of self-deception in decision making, conducting a series of experiments that manipulate (i) the agent's perceived level of control and (ii) the causal determinants of choice behavior.