Why Experts Break Their Own Lockdown Rules
The U.K. epidemiologist Neil Ferguson flouted regulations to see his lover. But he’s not the only scientist to fall victim to the Dunning-Kruger effect.
In 1992, a South Korean preacher named Lee Jang-rim convinced 10,000 people that the apocalypse was imminent. Believers gave up jobs and possessions, and some even took their own lives. It was later revealed that Lee had used his followers’ money to make investments that would not pay out until well after the supposed date of the rapture.
For most of the public, the leader of a doomsday cult couldn’t be further from a credible source of advice. Inconsistent behavior like this just confirms our expectations. Things get harder in the case of authority figures who are more widely trusted, like public policy experts. What does it mean when experts act in ways at odds with what they’re telling everyone else—and does it mean that their advice can’t be trusted?
Last week, Britain’s Daily Telegraph reported that Neil Ferguson, an epidemiologist instrumental in developing the British government’s coronavirus lockdown, had flouted rules that he himself had helped compose by inviting a romantic partner to visit him at home.
The timing of the scoop couldn’t have been better for the government, drawing attention away from the release of official data on the same day showing that the United Kingdom had the highest death toll from the coronavirus in Europe. Despite this news, the government has been considering a relaxation of its lockdown in the hope of limiting damage to the country’s economy. Ferguson, who resigned as a result of the story, now serves as a titillating—and politically convenient—example of how burdensome the current lockdown can be, even for one of its own architects.
Pandemic response is not a straightforward subject, and, as with so many other aspects of our lives, we rely on external authorities to help us make decisions about our behavior. But choosing whom to trust puts us in a bind, as our lack of expertise leaves us unable to judge an expert’s reliability directly. One of the few ways that we can assess the credibility of experts is by their consistency in following their own advice.
Yet Ferguson is not the only authority figure in the U.K. to have bent the rules of the lockdown.
Catherine Calderwood, Scotland’s chief medical officer, resigned in early April after details of her visits to a second home came to light. Robert Jenrick, a cabinet minister, drew fire in the same month for moving house from his London residence to a property in Herefordshire and for subsequently visiting his parents.
The problem of inconsistent experts isn’t peculiar to the U.K., either. Bruce Aylward, a veteran of the World Health Organization’s response to Ebola and the leader of the WHO-China joint mission on the coronavirus, appeared to ignore the mandatory quarantine period for visitors to Wuhan during its outbreak, justifying his actions to a Washington Post reporter on the grounds that he hadn’t visited any high-risk areas and that he had tested negative for the virus.
Nor are inconsistencies limited to questions of public health. In 2014, Pascal Husting, the international program director for Greenpeace, was heavily criticized for commuting between Luxembourg and Amsterdam on short-haul flights, a form of transport that his organization recognizes as one of the most environmentally damaging ways to travel.
If specialists on pandemics and climate change disregard their own advice when inconvenient, at best this suggests a degree of arrogance. At worst, they appear cynical about the seriousness of the problems that they have spent their professional lives working to address. The obvious danger is that their behavior will result in lower levels of public compliance with the measures needed in the face of these enormous challenges.
So, what drives experts to break their own rules?
David Dunning of the University of Michigan studies decision-making and is popularly known for his role in documenting the Dunning-Kruger effect, a phenomenon that relates to people’s skewed perceptions of their own capabilities. The Dunning-Kruger effect has entered popular culture as shorthand for inept people who overestimate their abilities, lacking the metacognition to recognize their own shortcomings. But Dunning observes that incompetents are not the only ones swayed by the Dunning-Kruger effect—experts are susceptible, too.
For experts, the issue is not an inflated sense of their own skills but rather the “curse of knowledge,” a mistaken assumption that everybody else shares at least a baseline understanding of their subject. “Experts know how well they are doing in an objective sense. It’s just that they think everybody else has this knowledge as well, and so they underestimate themselves relative to other people,” Dunning told Foreign Policy.
A conclusion that might be reasonable, even obvious, to an expert could seem counterintuitive to a layperson. A conclusion like: If I’m certain I have a tolerable level of risk, there’s little danger in making an exception.
Conclusions like this may even be correct in a technical sense. For example, Aylward in Wuhan and Ferguson in London were both privileged to have access to coronavirus testing at times when few tests were available. While all tests carry a risk of false positives or false negatives, both experts undoubtedly had more information about their level of risk than the rest of us, having both professional expertise and individual test results to inform their judgments.
However, it’s more likely that they were treating themselves differently due to “motivated reasoning,” our tendency to think our way to conclusions that—not coincidentally—match our preferred outcomes. Dunning, who specializes in motivated reasoning, explained the process as follows: “Motivated reasoning is self-affirmational. You’re trying to come to some sort of conclusion that you favor that is benign or at least not threatening. Part of what fuels that is that we know about special circumstances for ourselves, and we overweight those circumstances to exempt ourselves from what might be true for other people.”
There isn’t necessarily a conflict between experts providing the general population with a simple, unambiguous message and their considering individual circumstances on a case-by-case basis. Edge cases exist, after all. The problem comes when experts start wondering if they themselves might be an edge case.
As Dunning noted: “If you’re a doctor talking to a patient about risk for surgery, it’s a bunch of percentages in your head. As soon as you become a patient, suddenly it’s not percentages; it’s not a long-run thing anymore. It’s you. It’s a completely different way of thinking.” Doctors do in fact make notoriously bad patients, and that’s despite their experience treating other individual patients and the benefit of having a colleague on hand to provide unbiased advice. Switching gears from population-level models to one’s own situation is an order of magnitude more difficult. “You very often see a disconnect between people’s judgments about what people in general should do and what I myself should do, especially when communicating risks,” Dunning said.
The difficulty for experts is that they cannot know whether they are making a completely objective judgment or allowing a subconscious thumb on the scale. As Dunning put it: “Intelligence and education don’t spare you. It’s not the case that intelligent or highly educated people are necessarily going to engage in less motivated reasoning than others, and we find that across the board. It doesn’t fuel motivated reasoning. It doesn’t prevent motivated reasoning. We’re all just human at the end of the day.”
While it’s interesting to understand what leads experts into making these decisions, in an important sense it’s immaterial whether or not they have an excuse for treating themselves differently. The damage is done either way.
An exception for an expert, even when justified, will inevitably become an example that the rest of us will factor into our own choices. Given our universal susceptibility to motivated reasoning, the expert’s behavior is certain to increase the rate of other, poorly justified exceptions among their audience.
Therefore, given their place in the British public eye, there’s no question that Ferguson and Calderwood made very serious errors of judgment. In some ways, however, it’s hard not to feel that they were treated a little unfairly. Of the three authority figures accused of flouting Britain’s lockdown, both of the scientists resigned, while the cabinet minister, a politician whom we might reasonably expect to be an expert in how his actions appear to others, remains in his post.
Experts working behind the scenes and providing advice to governments are vital. It’s less clear whether anyone benefits from bringing them to the center of the story and focusing on their individual choices. This approach sanitizes the messy process of reaching scientific consensus and turns a fallible individual into a single point of failure for important ideas like “stay at home during a pandemic,” a message that epidemiologists—not only Ferguson—overwhelmingly agree on.
Being the center of a story, brushing off scandals, and selling newspapers are important ecological niches for politicians, and they thrive in them. Experts, on the other hand, thrive in their own fields, and there’s no reason to expect them to be media personalities as well.
Dunning put it this way: “Often, being competent at a task is knowing what the task is. Part of the reason that people may be incompetent is that they don’t know exactly what tasks they’re engaged in or they haven’t recognized a different task that matters, too.”
As the media thrusts policy experts onto the front pages of newspapers, they have to face a new reality: However expert they might think they are, when it comes to public relations and politics, they’re amateurs, and they need to play it as safe as possible.