"Survivorship bias" and sampling error – Benevolent dolphins and World War II bombers
An item about designing US bombers in World War II (picked up by Andrew Sullivan) reminded me of some idle musings about dolphins that first occurred to me several decades ago (and that I once blogged about here). It so happens that, at a formal level, both of these topics raise some of the same analytical issues.
I will start with my idle speculations about dolphins, shamelessly reproducing some thoughts I already posted, and then turn to the actual historical incident involving bombers.
=> We constantly hear about how charming and likable dolphins are, and most of the time that seems to be true. Let me be clear: I'm genuinely fond of dolphins, who are friendly and intelligent animals. What follows is a purely analytical puzzle.
In things I have read about dolphins, one piece of evidence offered to demonstrate their good-natured benevolence was the claim that dolphins sometimes save drowning sailors by pushing them in toward shore. That sounds nice of them, and I have no reason to doubt that there are such cases.
But then I couldn't help wondering ... what if this is just a misleading impression created by sampling bias? That is, what if a dolphin sees a drowning sailor as a kind of bathtub toy, and enjoys pushing him (or her) around in the water in a spirit of good-natured play? And let's imagine—to continue the hypothesis—that such sailors would get pushed around randomly in different directions. That would mean that about a quarter of the sailors get pushed toward shore, while the other three-quarters get pushed either out to sea or parallel to the coastline. Well, the only sailors we hear from afterward are the ones who got pushed toward shore, right? The other 75% are eliminated from the sample, so to speak. So maybe this impression that the dolphins are doing it to help the sailors is just an unwarranted inference produced by systematically biased data? (After all, what have humans ever done for dolphins that would make them so eager to help us out?)
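To make the arithmetic of that thought experiment concrete, here is a minimal Monte Carlo sketch in Python. Everything in it is an assumption of the hypothesis, not data: each push goes in one of four directions at random, and only sailors pushed shoreward survive to tell anyone about it.

```python
import random

# A toy Monte Carlo sketch of the playful-dolphin hypothesis. Every number
# here is an assumption of the thought experiment, not data: each drowning
# sailor is pushed in one of four random directions, and only those pushed
# toward shore survive to report the encounter.

random.seed(42)
N = 100_000
directions = ["toward shore", "out to sea", "parallel left", "parallel right"]

pushes = [random.choice(directions) for _ in range(N)]
survivor_reports = [d for d in pushes if d == "toward shore"]

print(f"Sailors actually pushed toward shore: {len(survivor_reports)/N:.1%}")
# Comes out near 25%, yet 100% of the reports we can ever collect come
# from this group, so the surviving sample unanimously says "the dolphin
# saved me" even though the pushing was random.
```

About a quarter of the sailors come ashore, yet every report we can ever collect comes from that quarter, so the testimony is unanimous no matter what the dolphins intended.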
All this is just speculation, of course ... but I wouldn't want to try getting an experiment designed to test this hypothesis approved by an IRB.
=> Now to a case of real-life decision-making. David McRaney runs a blog titled You Are Not So Smart that analyzes common forms of systematically misleading and self-deceptive thinking. In a recent post he nicely explained how the hypothetical fallacy I just imagined would work in practice, using a concrete example from World War II. The mechanism involved is one that statisticians call "survivorship bias".
During World War II, it seems, the US government put together groups of statisticians and mathematicians, collectively known as the Applied Mathematics Panel, to help the military solve tactical and technological problems.
One story in particular was nearly lost forever. In it, a brilliant statistician named Abraham Wald saved countless lives by preventing a group of military commanders from committing a common human error, a mistake that you probably make every single day. [....]
How, the Army Air Force asked, could they improve the odds of a bomber making it home? Military engineers explained to the statistician that they already knew the allied bombers needed more armor, but the ground crews couldn’t just cover the planes like tanks, not if they wanted them to take off. The operational commanders asked for help figuring out the best places to add what little protection they could. It was here that Wald prevented the military from falling prey to survivorship bias [....]
The military looked at the bombers that had returned from enemy territory. They recorded where those planes had taken the most damage. Over and over again, they saw the bullet holes tended to accumulate along the wings, around the tail gunner, and down the center of the body. Wings. Body. Tail gunner. Considering this information, where would you put the extra armor? Naturally, the commanders wanted to put the thicker protection where they could clearly see the most damage, where the holes clustered. But Wald said no, that would be precisely the wrong decision. Putting the armor there wouldn’t improve their chances at all.
Do you understand why it was a foolish idea? The mistake, which Wald saw instantly, was that the holes showed where the planes were strongest. The holes showed where a bomber could be shot and still survive the flight home, Wald explained. After all, here they were, holes and all. It was the planes that weren’t there that needed extra protection, and they had needed it in places that these planes had not. The holes in the surviving planes actually revealed the locations that needed the least additional armor. Look at where the survivors are unharmed, he said, and that’s where these bombers are most vulnerable; that’s where the planes that didn’t make it back were hit. [my boldings] [....]
McRaney sums up the key analytical lesson, which is relevant and important in a wide range of situations:
Simply put, survivorship bias is your tendency to focus on survivors instead of whatever you would call a non-survivor depending on the situation. Sometimes that means you tend to focus on the living instead of the dead, or on winners instead of losers, or on successes instead of failures. In Wald’s problem, the military focused on the planes that made it home and almost made a terrible decision because they ignored the ones that got shot down. [....]
The Misconception: You should study the successful if you wish to become successful.
The Truth: When failure becomes invisible, the difference between failure and success may also become invisible.
=> More generally, you always have to compare, explicitly or implicitly. And sometimes you have to compare what you see, and what you know actually happened, with what you can't see, or perhaps didn't even happen ... but could have happened.
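For readers who want to see the inference run, here is a small illustrative simulation, again in Python. The per-section lethality numbers are invented for this sketch (nothing here comes from Wald's actual data), and gunfire is assumed to strike every section with equal probability; the point is only that the holes we can inspect on returning planes systematically cluster in the survivable sections.

```python
import random
from collections import Counter

random.seed(0)

# Purely hypothetical per-hit lethalities, invented for this sketch:
# a hit to the engine or cockpit usually downs the plane; a hit to the
# fuselage, wings, or tail-gunner area usually does not.
lethality = {"engine": 0.8, "cockpit": 0.7, "fuselage": 0.1,
             "wings": 0.1, "tail gunner": 0.1}
sections = list(lethality)

returned = Counter()   # holes we can inspect: planes that made it home
lost = Counter()       # holes we can never inspect: planes shot down

for _ in range(10_000):
    # Each plane takes 1-6 hits, spread uniformly across the sections.
    hits = [random.choice(sections) for _ in range(random.randint(1, 6))]
    shot_down = any(random.random() < lethality[s] for s in hits)
    (lost if shot_down else returned).update(hits)

print("Holes visible on returning planes:", returned.most_common())
print("Holes on planes that never returned:", lost.most_common())
```

Run it and the returning planes show plenty of wing, fuselage, and tail damage but almost no engine or cockpit holes, which is exactly Wald's inversion: the clean spots on the survivors mark where the lost planes were hit.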
The problem of "survivorship bias" is one specific instance of a larger phenomenon that analytical philosophers refer to as the problem of "counterfactuals". For some other takes on the problem of counterfactuals, see here & here.
—Jeff Weintraub
P.S. [10/31/2016]: And check out this cartoon. (Thanks to Jacob Levy for the tip.)