Abstract
Clark Glymour, together with his students Peter Spirtes and Richard Scheines, did pioneering work on graphical causal models (e.g., Spirtes et al. 2000). One of the central advances provided by these models is a simple representation of the effects of interventions. In an elegant paper, Glymour and his student Christopher Meek applied these methods to problems in decision theory (Meek and Glymour 1994). One of the morals they drew was that causal decision theory should be understood in terms of interventions. I revisit their proposal and extend the analysis by showing how graphical causal models might be used to address decision problems that arise in “exotic” situations, such as those involving crystal balls or time travelers.
Notes
This is true specifically of hard interventions. There may also be soft interventions that influence the value of one or more variables without severing their dependence on their causal parents. We will briefly return to this point in the next section.
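To fix ideas, here is a minimal sketch of the distinction in a toy structural model. The code is mine, not the paper’s, and the particular mechanisms and probabilities are illustrative assumptions only.

```python
import random

def sample(a_mechanism):
    """Draw one joint value (X, A) from a toy model with a single edge X -> A."""
    x = random.random() < 0.5   # exogenous cause X
    a = a_mechanism(x)          # A's structural equation
    return x, a

# Observational mechanism: A simply copies its causal parent X.
observational = lambda x: x

# Hard intervention: A is set to a fixed value; its dependence on X is severed.
hard_intervention = lambda x: True

# Soft intervention: A's value is influenced, but it still depends on X.
soft_intervention = lambda x: x or (random.random() < 0.3)
```

Under `hard_intervention`, the sampled value of `A` carries no information about `X`; under `soft_intervention`, the arrow from `X` to `A` is preserved, so `A` remains probabilistically dependent on `X`.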
In all of the cases I discuss, the agent only cares about the value of a single variable \(O\), which is an effect of \(A\). More generally, the outcomes may have to be conjunctions of values of several variables in a causal model.
I will consider only cases where the agent puts all of her credence in a single causal model. Moreover, I will assume that something like Lewis’s Principal Principle holds (Lewis 1980). That is, when an agent puts all of her credence in a causal model with probability function \(P\), her subjective probabilities will be identical to \(P\). Thus I will not take pains to distinguish subjective and objective probability. The case where the agent distributes her credence among several possible causal models is considerably more complex.
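Stated schematically (this gloss is my own, not the paper’s): where the agent is certain that \(M\) is the true causal model, with associated probability function \(P\),

\[ Cr(E) \;=\; Cr\big(E \mid \langle M \text{ is the true causal model}\rangle\big) \;=\; P(E) \]

for any proposition \(E\) over the variables of \(M\), provided the agent’s credence function \(Cr\) is a reasonable initial one that carries no inadmissible information about the outcomes.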
See also Stern (2014), who develops this idea in greater detail.
Lewis (1981) makes essentially the same point.
For instance, Stern (2014) develops this objection to CDT explicitly, although he does not advocate EDT either.
The Principal Principle requires, in addition, that \(P\) be an appropriate initial credence. That is, it does not already incorporate information about the outcome of the coin toss.
We can also imagine a third kind of case, where you have information about what the result of a specific intervention would be, independently of whether you actually perform that intervention. This would be similar to the second kind of case described in the text.
An anonymous referee pointed out that information of the second kind (and of the kind described in the previous footnote) need not be strictly inadmissible, in Lewis’s sense. Specifically, information about this kind of counterfactual could in principle be deduced from facts about the past and present, together with laws (or, as Lewis calls them, “history-to-chance” conditionals). However, it would still be highly unusual to learn the truth value of such a counterfactual directly, and to use it to update your credence in propositions about the past, rather than performing the inference in the other direction.
Initially, there are eight possible assignments of values to \(T\), \(F\), and \(C\), and each of these is equally likely. When we learn \(O = 0\) or \(O = 1\), we eliminate the two possible assignments in which \(T = 1\) and \(F = 0\). That leaves three out of six assignments in which \(C = 1\) and four out of six assignments in which \(F = 1\).
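The count can be checked by brute-force enumeration; the following snippet (mine, not the paper’s) simply recomputes the note’s arithmetic:

```python
from itertools import product

# The eight equiprobable assignments to the binary variables (T, F, C).
assignments = list(product([0, 1], repeat=3))

# Learning the value of O rules out the assignments with T = 1 and F = 0.
surviving = [(t, f, c) for (t, f, c) in assignments if not (t == 1 and f == 0)]

print(len(surviving))                    # 6 assignments remain
print(sum(c for _, _, c in surviving))   # 3 of them have C = 1
print(sum(f for _, f, _ in surviving))   # 4 of them have F = 1
```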
This assumption is not unproblematic. Dummett (1964) argued that backwards causation is only possible if you are not able to acquire reliable independent information about the past effects of your actions. Clark Glymour (personal communication) expressed a similar worry about Egan’s scenario. Perhaps what your knowledge of Alexandria should tell you is that if you succeed in preventing the fire, you will somehow cause the books to be hidden somewhere, and presumed destroyed. Or perhaps you will bring them back with you to the present, also causing them to have disappeared from the ancient world. However, I will continue to make the assumption that is most favorable toward Egan’s argument.
For helpful comments and suggestions, thanks go to Frederick Eberhardt, Clark Glymour, Jim Woodward, and an anonymous referee for Synthese.
References
Dummett, M. (1964). Bringing about the past. Philosophical Review, 73, 338–359.
Eberhardt, F., & Scheines, R. (2007). Interventions and causal inference. Philosophy of Science, 74, 981–995.
Eells, E. (1982). Rational decision and causality. Cambridge: Cambridge University Press.
Egan, A. (2007). Some counterexamples to causal decision theory. Philosophical Review, 116, 93–114.
Gaifman, H. (1983). Paradoxes of infinity and self-applications I. Erkenntnis, 20, 131–155.
Gibbard, A., & Harper, W. (1978). Counterfactuals and two kinds of expected utility. In C. Hooker, J. Leach, & E. McClennen (Eds.), Foundations and applications of decision theory (pp. 125–162). Dordrecht: Reidel.
Jeffrey, R. (1983). The logic of decision (2nd ed.). Chicago: University of Chicago Press.
Joyce, J. (1999). The foundations of causal decision theory. Cambridge: Cambridge University Press.
Joyce, J. (2012). Ratifiability and stability in causal decision theory. Synthese, 187, 123–145.
Levi, I. (1985). Common causes, smoking, and lung cancer. In R. Campbell & L. Snowden (Eds.), Paradoxes of rationality and cooperation. Vancouver: UBC Press.
Lewis, D. (1979). Counterfactual dependence and time’s arrow. Noûs, 13, 455–476. (Reprinted in Lewis (1986), pp. 32–52).
Lewis, D. (1980). A subjectivist’s guide to objective chance. In R. Jeffrey (Ed.), Studies in inductive logic and probability (Vol. II, pp. 263–294). Berkeley: University of California Press. (Reprinted in Lewis (1986), pp. 83–114).
Lewis, D. (1981). Causal decision theory. Australasian Journal of Philosophy, 59, 5–30. (Reprinted in Lewis (1986), pp. 305–337).
Lewis, D. (1986). Philosophical papers (Vol. II). Oxford: Oxford University Press.
Meek, C., & Glymour, C. (1994). Conditioning and intervening. British Journal for the Philosophy of Science, 45, 1001–1021.
Nozick, R. (1969). Newcomb’s problem and two principles of rational choice. In N. Rescher (Ed.), Essays in honor of Carl G. Hempel (pp. 114–146). Dordrecht: Reidel.
Pearl, J. (2009). Causality: Models, reasoning, and inference (2nd ed.). Cambridge: Cambridge University Press.
Price, H. (1986). Against causal decision theory. Synthese, 67, 195–212.
Price, H. (2012). Causation, chance, and the rational significance of supernatural evidence. Philosophical Review, 121, 483–538.
Richardson, T., & Robins, J. (2013). Single world intervention graphs (SWIGs): A unification of the counterfactual and graphical approaches to causality. Technical Report, Center for Statistics and the Social Sciences, University of Washington, http://www.csss.washington.edu/Papers/wp128.pdf. Accessed 27 Feb 2015.
Skyrms, B. (1980). Causal necessity. New Haven: Yale University Press.
Spirtes, P., Glymour, C., & Scheines, R. (2000). Causation, prediction, and search (2nd ed.). Cambridge, MA: MIT Press.
Stern, R. (2014). Decision and intervention. Unpublished manuscript.
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford: Oxford University Press.