Of all the examples presented in *Actual Causality*¹, this one was my favorite.
Assume all variables are boolean variables. Let \(A\) mean a forest fire was started, let \(B\) mean lightning struck a tree, and let \(C\) mean an arsonist dropped a match. Then, we have a structural equation
\[A = B \lor C\]
meaning that \(A\) (the fire) was caused by either \(B\) (lightning) or \(C\) (arson), or possibly both. Intuitively, in any case where a fire started, the causes seem fairly clear.
However, there’s a second scenario to round out the example. Let \(A^\prime\) mean your survival, let \(B^\prime\) mean an assassin failed to poison your drink (so \(\lnot B^\prime\) means your drink was poisoned), and let \(C^\prime\) mean a bodyguard added antidote to your drink (which blocks poison, if there is any). Again, we can write a structural equation describing the causal structure of your (non)survival:
\[A^\prime = B^\prime \lor C^\prime\]
Different variable names, but otherwise the exact same equation as before! Here, though, my intuition says something different. If the assassin failed to poison the drink, I would not say that the bodyguard caused my survival by also adding antidote. In the fire case, by contrast, when both \(B\) and \(C\) are true, I would say that they both caused the fire. To be maximally explicit about this belief, let me put the cases into a couple of tables:
| \(A\) | \(B\) | \(C\) | cause of fire |
|---|---|---|---|
| False | False | False | n/a - no fire |
| True | True | True | \(B\) and \(C\) (both are causes of the fire) |
| \(A^\prime\) | \(B^\prime\) | \(C^\prime\) | cause of survival |
|---|---|---|---|
| False | False | False | n/a - no survival |
| True | True | False | \(B^\prime\) (failure of assassin) |
| True | True | True | \(B^\prime\) and \(C^\prime\)? Just \(B^\prime\)? |
The very last case is the interesting one: can we say that a bodyguard additionally caused my survival if it's already true that there is no poison in the drink? If we only wanted consistency with the model that works on the forest fire case, we'd have to say yes. This is one of those times where intuition and a model clash, and I believe that here the intuition should actually win. Whatever we mean by "caused", it doesn't seem to cover a bodyguard adding antidote to an unpoisoned drink. I think this says something interesting about causality: even if we model all the possible events correctly, we haven't necessarily captured the full causal structure. Despite the exact same equations of boolean logic, these cases feel very different.
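That structural identity is easy to verify mechanically. Here's a minimal sketch (the function names are mine, not from the book) enumerating both models over every input:

```python
from itertools import product

def fire(lightning, match):
    # A = B ∨ C: fire starts if lightning strikes or a match is dropped
    return lightning or match

def survival(no_poison, antidote):
    # A' = B' ∨ C': you survive if the drink was never poisoned,
    # or if the bodyguard added antidote
    return no_poison or antidote

# The two equations agree on every input, so as functions of their
# inputs they are literally identical.
for x, y in product([False, True], repeat=2):
    assert fire(x, y) == survival(x, y)
```

Whatever distinguishes the two cases, it isn't anything in this table of values.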
How might we fix our model to better capture the difference between these two situations? One thing we might do is add an additional variable \(D\) to the causal network, describing whether the actions of \(C\) are of the first type (arson causes fire, no matter if something else does as well) or the second (antidote addition does not cause survival unless poison is also in the drink). This works alright, and only adds a bit of complexity to the model. It’s not as satisfying as the other answer Halpern presents, though.
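One way to sketch that extra variable \(D\) (the encoding below is my own, not Halpern's) is as a flag marking whether \(C\)'s action counts independently of \(B\), or only when \(B\) failed to occur:

```python
def cause_candidates(b, c, c_is_independent):
    """Which of B, C do we intuitively count as causes when B ∨ C is true?
    A sketch of the extra-variable fix, not Halpern's formal definition."""
    causes = []
    if b:
        causes.append("B")
    # C counts as a cause either unconditionally (arson-style), or only
    # when B did not already occur (antidote-style).
    if c and (c_is_independent or not b):
        causes.append("C")
    return causes

# Fire: arson counts even alongside lightning.
assert cause_candidates(True, True, c_is_independent=True) == ["B", "C"]
# Survival: antidote counts only if the drink was actually poisoned.
assert cause_candidates(True, True, c_is_independent=False) == ["B"]
assert cause_candidates(False, True, c_is_independent=False) == ["C"]
```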
There is an important difference between starting a fire and continuing your survival that isn’t described by the simple equation above. In some sense, survival was “expected” anyways, whereas fire was not, and “had to be caused”. In the world of the unpoisoned drink, when we attempt to say the bodyguard “caused survival”, we’re essentially saying that the guard caused something that was certainly going to happen anyways (in our simplified, model universe). This difference is captured by a concept called “normality”. Normally, fires don’t just start, so additional causes still seem to us to count as causes. However, normally your survival happens without any “cause”, so introducing one seems to not really count as a cause.
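Roughly, the normality refinement requires that the counterfactual contingency (the "witness" world) we invoke to call something a cause be at least as normal as the actual world. A toy sketch, with made-up abnormality scores of my own choosing:

```python
# Toy abnormality scores: higher = less normal. The numbers are mine,
# just to illustrate the asymmetry between the two stories.
def fire_abnormality(lightning, match):
    # Lightning strikes and dropped matches are both abnormal events.
    return int(lightning) + int(match)

def survival_abnormality(no_poison, antidote):
    # Poisoned drinks and antidote-adding bodyguards are both abnormal.
    return int(not no_poison) + int(antidote)

# To credit C under overdetermination, we consider the witness world
# where B is flipped off and check it's no less normal than actuality.

# Fire: the witness (arson alone, no lightning) is MORE normal, so the
# arsonist still counts as a cause.
assert fire_abnormality(False, True) <= fire_abnormality(True, True)

# Survival: the witness (poisoned drink, antidote added) is LESS normal,
# so the bodyguard's antidote does not count as a cause.
assert survival_abnormality(False, True) > survival_abnormality(True, True)
```

The asymmetry lives entirely in the abnormality scores, not in the boolean equation, which is exactly the point.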
Note that this point is kind of subtle. Once the failure of the assassin occurs, adding antidote or not is irrelevant to survival. However, the same is true in the forest fire case. Once lightning strikes, whether or not the arsonist dropped a match is irrelevant. Our assignment of the match-dropping as a cause, and antidote-adding as not a cause, has to bring in a new fact not captured by the previous model.
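That symmetry shows up clearly if we run a naive but-for test (a sketch, not Halpern's actual definition): in the overdetermined case, flipping either variable alone never changes the outcome, and it fails identically in both stories.

```python
def but_for(model, setting, var):
    """Naive counterfactual test: flip `var`, hold everything else
    fixed, and check whether the outcome changes."""
    actual = model(**setting)
    flipped = dict(setting, **{var: not setting[var]})
    return model(**flipped) != actual

outcome = lambda b, c: b or c  # the shared equation for both stories

# Overdetermined case: with both B and C true, neither is a but-for
# cause -- and the test can't tell fire from survival.
both = {"b": True, "c": True}
assert not but_for(outcome, both, "b")
assert not but_for(outcome, both, "c")
```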
Of course, normality brings its own set of problems. How do we determine what's really more "normal" in every case? Count up possible worlds? The intuitive idea here (and one I find quite compelling) is tricky to formalize, and might not provide answers in every case. Still, the important lesson for me was that the nature of causality, even in simple situations, is not easily captured by simple propositional logic; I might naively have guessed otherwise.
¹ *Actual Causality*. Joseph Y. Halpern. MIT Press, 2016.