Everyone Was Wrong on the Pandemic’s Societal Impact
In March 2020, a study asked experts and laypeople for their predictions. Neither group came close to being right.
Soon after the onset of the COVID-19 pandemic, it was clear to many that life was about to be fundamentally altered, possibly irrevocably. Discussions of a “new normal” spread on social media, in the popular press, and in scholarly publications, but it was unclear just what this new normal might look like. Beyond social distancing or working and learning from home, how would COVID-19 change our ways of thinking and behaving? Would depression and loneliness increase, or would people prove resilient and adaptable? Would relationships suffer as couples spent more time together? Would people become more open to cultural change as they were forced to adapt, or would they fall back on tradition and ritual?
In those uncertain early days, pundits, politicians, and celebrities alike offered their predictions and prescriptions. So too did some behavioral and social scientists. For us, a group of scholars who share an interest in understanding how social and behavioral science can best inform public policy, it was a golden opportunity to test just how expert experts are. In a large-scale undertaking beginning last April, we sought to track the extent to which social and behavioral scientists (including social and clinical psychologists, experts in judgment and decision-making, neuroscientists, economists, and political scientists) accurately predicted the impacts of COVID-19 on a set of psychological and behavioral domains—ranging from life satisfaction and loneliness to prejudice and violent crime—in the United States. We asked average Americans to make the same predictions. Half a year later, we assessed the accuracy of these predictions.
So, how did COVID-19 reshape people’s psychology? Surprisingly, a steady stream of research findings suggests that far less has changed than one might expect. Loneliness, if it increased at all, did so by a minuscule amount. People’s satisfaction with relationships decreased, but the trend was again very small, a far cry from the dramatic change people were predicting last March. And people’s basic social motivations—to affiliate, achieve status, find romantic partners, or care for family—also showed little movement in response to the pandemic. In a National Science Foundation-funded study involving more than 15,000 research participants around the globe, only the motivation to avoid infectious disease showed a meaningful shift from pre-pandemic baselines. Unsurprisingly, it increased. Other ways of assessing change provide a similar picture. Using survey data from large, nationally representative samples, we found little to no change in 10 diverse domains of human psychology and behavior—ranging from subjective well-being to traditionalism—that might have been expected to show dramatic movement.
These findings were unexpected by many, including the experts in human behavior and social dynamics in our studies, whose predictions turned out to be generally inaccurate. The majority of forecasts were off by at least 20 percent, and fewer than half of our participants correctly predicted the direction of changes. In what ways were these predictions off? Typically they were too extreme. In other words, human psychology and behavior showed more inertia than most of our participants anticipated. The only exception came in the domain of violent crime, where a 20 percent increase was observed from spring to late fall. Ironically, this was a domain where our participants predicted almost no change. How did experts compare with the average person? Surprisingly, our experts performed no better than the laypeople in our control group, making nearly identical (and equally inaccurate) predictions for the pandemic’s effects on a wide range of phenomena. Even more nuanced measures of expertise, such as an individual’s amount of social science training or experience studying the specific phenomenon being predicted, did not show any relationship to accuracy.
Perhaps experts are better at judging such trends retroactively? To address this possibility, in late October and early November we again recruited samples of social and behavioral scientists and laypeople. This time we asked them to estimate how much change had occurred in a variety of domains due to COVID-19 over the past six months. Interestingly, these retrospective estimates were very similar to the predictions made in the spring but, like those predictions, were far from the actual trends. Even with the benefit of hindsight, people, experts included, still misjudged COVID-19’s effects.
Why were predictions and expectations for the pandemic’s societal impact so far off? Simply put, prediction is hard—even, or perhaps especially, for experts. In several forecasting tournaments dating back to the 1980s and continuing into the present, Philip Tetlock has shown that experts generally make poor forecasters of geopolitical events, often performing little better than the proverbial dart-throwing chimpanzee. Why? Tetlock identified a number of factors, key among them overconfidence and base rate neglect—a tendency to view each event as unique at the expense of considering how similar events may have unfolded in the past. In a similar vein, Daniel Kahneman and his colleagues have identified several cognitive biases that lead experts to make poor forecasts, including overemphasizing the role of salient current events, being too quick to make judgments, and being too slow to change their minds in the face of new evidence. Laypeople tend to fare no better as forecasters, falling prey to a number of heuristics and biases in their reasoning, as Kahneman and his longtime collaborator Amos Tversky showed in a research program for which Kahneman was awarded a Nobel Prize. People (experts included) also tend to be especially poor at predicting their own future feelings, misjudging the intensity and duration of their emotional reactions to events like winning the lottery or experiencing a painful breakup.
Does all this mean people should ignore the advice of behavioral and social scientists? No—although in fairness, as behavioral and social scientists, we would be inclined to say that. There are countless examples of successful application of theory and research from these fields to real-world problems, from Robert Cialdini’s use of descriptive norms to promote environmentally friendly behavior, to the Skinnerian sticker charts one of us used to get his daughter to brush her teeth regularly. Forecasting is just one subset of expertise—and perhaps the hardest.
And when it comes to prediction, these experts in human behavior and social dynamics seem to have at least one advantage: They appear to be more aware of their own limits. In our work, the experts were not superior forecasters, but they were humbler. The scientists in our studies were far less confident in their predictions than laypeople, suggesting that they knew their specific predictions should be taken with a grain of salt.
How can we become better forecasters? Here, Tetlock’s work on so-called superforecasters, individuals who make highly accurate forecasts, might be informative. Superforecasters seem to reason differently from others. They are more willing to acknowledge uncertainty, to seek out opposing viewpoints, and to update their beliefs in the face of new evidence. Studies suggest that similar reasoning strategies make people better at forecasting their own future emotions, and that training in taking the viewpoint of a detached observer can make this kind of thinking more likely. Further, recent work by Tetlock and his collaborators has shown that a short training in probabilistic reasoning improves people’s ability to forecast geopolitical events.
Research on forecasting also suggests another key route to improving the accuracy of our predictions: practice. Independently of training in reasoning methods, simply making more forecasts improves people’s forecasting accuracy. Unfortunately, behavioral and social science experts are generally trained to emphasize explanation over prediction, and such explanations for phenomena are often post hoc. In our field, psychology, even when predictions are made a priori, they are typically restricted to the outcome of a given experimental manipulation or a particular statistical analysis. Out-of-sample prediction is rare in psychology. Rarer still is ex ante prediction of real-world outcomes. Simply put, we have little training in making these kinds of predictions and little practice doing so. Beyond training ourselves to reason differently, we may become better forecasters by engaging in more forecasting. We should, as Tetlock suggests, “try, fail, analyze, adjust, and try again.”
Editor’s Note: This article is part of a series on what experts missed during the early days of the pandemic. Read Annie Sparrow on the Chinese government’s deadly misinformation here, and Ethan Guillen on American hubris here.
Cendri Hutcherson is an assistant professor and Canada research chair in the psychology department at the University of Toronto.