Tim Harford’s UNDERCOVER ECONOMIST, FT Magazine
We don’t have a good sense of our own fallibility. Checking my answers, it was the one I felt most certain of that I got wrong.
In 1913 Robert Millikan published the results of one of the most famous experiments in the history of physics: the “oil drop” experiment that revealed both the electric charge on an electron and, indirectly, the mass of the electron too. The experiment led in part to a Nobel Prize for Millikan, but it is simple enough for a schoolkid to carry out. I was one of countless thousands who did just that as a child, although I found it hard to get my answers quite as neat as Millikan’s.
We now know that even Millikan didn’t get his answers quite as neat as he claimed he did. He systematically omitted observations that didn’t suit him, and lied about those omissions. Historians of science argue about the seriousness of this cherry-picking, ethically and practically. What seems clear is that if the scientific world had seen all of Millikan’s results, it would have had less confidence that his answer was right.
This would have been no bad thing, because Millikan’s answer was too low. The error wasn’t huge — about 0.6 per cent — but it was vast relative to his stated confidence in the result. (For the technically minded, Millikan’s answer is several standard deviations away from modern estimates: that’s plenty big enough.)
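To see how a small relative error can still sit “several standard deviations” from the truth, here is a minimal sketch using approximate historical figures (not from the column itself): Millikan reported roughly 4.774×10⁻¹⁰ esu with a stated uncertainty near 0.005×10⁻¹⁰, while the modern elementary charge converts to about 4.803×10⁻¹⁰ esu in the same units.

```python
# Illustrative check: a ~0.6 per cent relative error can still be
# many standard deviations away, if the stated uncertainty is tight.
# Values below are approximate historical figures, for illustration.
millikan = 4.774e-10      # Millikan's published value (esu)
stated_sigma = 0.005e-10  # his stated uncertainty (approximate)
modern = 4.803e-10        # modern value in the same units (approximate)

relative_error = (modern - millikan) / modern    # roughly 0.6 per cent
z_score = (modern - millikan) / stated_sigma     # roughly 5.8 sigma

print(f"relative error: {relative_error:.1%}")
print(f"standard deviations from stated value: {z_score:.1f}")
```

The point of the arithmetic: whether an error is “small” depends entirely on how confident you claimed to be, not on the percentage alone.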
There is a lesson here for all of us about overconfidence. Think for a moment: how old was President Kennedy when he was assassinated? How high is the summit of Mount Kilimanjaro? What was the average speed of the winner of last year’s Monaco F1 Grand Prix? Most people do not know the exact answers to these questions but we can all take a guess.
Let me take a guess myself. JFK was a young president but I’m pretty sure he was over 40 when elected. I’m going to say that when he died he was older than 40 but younger than 60. I climbed Kilimanjaro many years ago and I remember it being 6,090-ish metres high. Let’s say, more than 6,000m but less than 6,300m. As for the racing cars, I think they can do a couple of hundred miles an hour but I know that Monaco is a slow and twisty track. I’ll estimate that the average speed was above 80mph but below 150mph.
Psychologists have conducted experiments asking people to answer such questions with upper and lower bounds for their answers. We don’t do very well. Asked to produce wide margins of error, such that 98 per cent of answers fall within those margins, people usually miss the target 20-40 per cent of the time; asked to produce a tighter margin, such that half the answers are correct, people miss the target two-thirds of the time.
We don’t have a good sense of our own fallibility. Despite the fact that I am well aware of such research, when I went back to check my own answers, it was the one I felt most certain of that I got wrong: Kilimanjaro is just 5,895m high. It seemed bigger at the time.
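The calibration test the psychologists ran can be scored mechanically: for each question, check whether the true value falls inside the stated interval, then compare the hit rate with the confidence claimed. A minimal sketch, using the intervals guessed above (the Monaco “true” speed is an invented illustrative figure, not a fact from the column):

```python
# Score interval calibration: what fraction of stated intervals
# actually contain the true value? Each entry is (low, high, true).
# JFK was 46 at his death; Kilimanjaro is 5,895 m; the Monaco
# average speed is an illustrative placeholder, not a real result.
guesses = {
    "JFK age at death": (40, 60, 46),
    "Kilimanjaro height (m)": (6000, 6300, 5895),
    "Monaco GP avg speed (mph)": (80, 150, 98),  # illustrative
}

hits = sum(low <= true <= high for (low, high, true) in guesses.values())
hit_rate = hits / len(guesses)

print(f"{hits}/{len(guesses)} intervals contained the truth "
      f"({hit_rate:.0%} hit rate)")
```

Run over the column’s three guesses, the Kilimanjaro interval misses, so even a short quiz can expose the gap between stated and actual confidence.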
But there’s another issue here. The charismatic Nobel laureate Richard Feynman pointed out in the early 1970s that the process of fixing Millikan’s error with better measurements was a strange one: “One is a little bit bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher. Why didn’t they discover the new number was higher right away?”
What was probably happening was that whenever a number was close to Millikan’s, it was accepted without too much scrutiny. When a number seemed off it would be viewed with scepticism and reasons would be found to discard it. And since Millikan’s estimate was too low, those suspect measurements would typically be larger than Millikan’s. Accepting them was a long and gradual process.
Feynman added that scientists have learnt their lesson and don’t make such mistakes any more. Perhaps that’s true, although a paper published by the decision scientists Max Henrion and Baruch Fischhoff, almost 15 years after Feynman’s lecture, found that same pattern of gradual convergence in other estimates of physical constants such as Avogadro’s number and Planck’s constant. From the perspective of the 1980s, convergence continued throughout the 1950s and 1960s and sometimes into the 1970s.
Perhaps that drift continues today even in physics. Surely it continues in messier fields of academic inquiry such as medicine, psychology and economics. The lessons seem clear enough. First, to be open with ourselves and with others about the messy fringes of our experiments and data; they may not change our conclusions but they should reduce our overconfidence in those conclusions. Second, to think hard about the ways in which our conclusions may be wrong. Third, to seek diversity: diversity of views and of data-gathering methods. Once we look at the same problem from several angles, we have more chances to spot our errors.
But humans being what they are, this problem isn’t likely to go away. It’s very easy to fool ourselves at the best of times. It’s particularly easy to fool ourselves when we already think we have the answer.