The Case Against Big Tech’s Election Strategies

Misinformation is hyperlocal. Attempts to counter it should be, too.

By Bhaskar Chakravorti, the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy.
A demonstrator holds up a placard reading “Fake News: Trump Tested Positive” in Konstanz, Germany, on Oct. 3, 2020. Sebastien Bozon/AFP/Getty Images

As the remaining days before the U.S. presidential election dwindle, misinformation is approaching a crescendo—and the big digital platforms are racing to keep up. This week, Twitter will turn off some of its features and Google has changed its search results. Facebook, meanwhile, will ban all new political advertisements in the week before the election, has blocked the conspiracy theory group QAnon, and has deleted a post by President Donald Trump claiming COVID-19 is less lethal than the flu. Most recently, both Facebook and Twitter have been drawn into a fiery controversy for intervening—hiding, blocking, and then unblocking—a front-page story in the New York Post about Joe Biden.

Inevitably, digital platforms will struggle to keep up. Fact-checking organizations will do their best to refute untruths, yet after the genie has escaped the bottle, there is little they can do, since untruths travel further and faster than facts. Witness the New York Times’ “Daily Distortions” section, devoted to all the news that is, in fact, not fit to print; it will serve as amusement for the Times’ regular readers, sure, but it is hard to imagine that it will do much to persuade those who already mistrust the mainstream media.

And that points to what is missing from existing efforts to battle fake news. The major social media platforms are adopting blunt instruments to tamp down misinformation. But misinformation, like politics, is local. It plays to individuals’ fears, anxieties, predilections, hopes, and desires. Often, the kernel of misinformation campaigns is a real local incident, which is then distorted. (For example, a video that Trump supporters claim shows shredded mail-in ballot applications in Pennsylvania actually seems to show shredded “print production waste.”) Then, there is no getting around the fact that some parts of the United States are more vulnerable to misinformation campaigns than others; and some parts are also more likely to be targeted with misinformation because they play an outsized role in determining the outcome of elections.

All this points to a new strategy for countering misinformation.

Consider Florida. A battleground state holding 29 of the 270 electoral college votes needed to win, it is one of the most highly contested states in the presidential race. It has faced a highly specific misinformation campaign focused on issues likely to resonate with the state’s Latino communities, which both parties are trying to win over. Spanish-language YouTube channels, WhatsApp message chains, and pro-Trump Facebook groups have been saturated with conspiracies: claims of a “deep state,” designed in this case to evoke fears among people from authoritarian regimes like Venezuela and Cuba; rumors that the billionaire investor George Soros is funding caravans of Central Americans in their attempt to come to the United States; and false allegations of pedophilia associated with Joe Biden—all areas of concern among different constituencies in the Latino community.

A New York Times column pointing out these falsehoods won’t work. Rather, what is needed are smaller-scale campaigns by digital platforms, local media, and Florida residents. Just as political ads and outreach are targeted with a local audience in mind, misinformation catch-and-kill campaigns need to do the same. This would involve identifying location-specific “dog whistles” and media channels; identifying local influencers and posters who are trusted and can lend credibility to any fact-checking and debunking campaigns; and running localized ads and community-level dialogues.

Of course, Florida isn’t the only case. To understand where anti-misinformation campaigns are most needed, Tufts’ Fletcher School, where I am the dean of global business, put together a Misinformation Vulnerability Index that identifies the states most at risk. The index was built by incorporating data on several crucial factors that, according to numerous research studies, contribute to misinformation vulnerability. These include status as a swing state, education levels, age, political polarization, left- or right-leaning news viewership, and trust in news drawn from social media, among others. All 50 states and the District of Columbia were analyzed using these criteria.

In the resulting categorization, there is a strong correlation between states that are highly vulnerable to misinformation based on factors such as age, polarization, and trust in news and the states that voted for Donald Trump in 2016. The most vulnerable states again skew in favor of Trump in the coming election.

It is also the case that states with highly contested elections are more likely to be vulnerable to misinformation. Of the top 25 most vulnerable states, 12 have a highly contested race—presidential or Senate—coming up. In the 25 least vulnerable states and the District of Columbia, there are seven highly contested races ahead.

In an ideal world, social media companies, journalists, and local organizations would train their efforts on all 12 states that are both highly vulnerable and highly contested. But with time and resources steadily disappearing, it is worth narrowing the area of concern even further, to the shortlist of states with high misinformation vulnerability that are also considered swing or battleground states. Those are: Florida, Maine, Nebraska, Wisconsin, Iowa, Pennsylvania, Ohio, Michigan, and New Hampshire. These are the places where local advocacy groups, monitors, and social media companies need to focus their resources to track, capture, and resolve misinformation.

In Ohio, for example, misinformers have targeted the Black community in the hopes of suppressing turnout; perhaps the same creative energies on social media platforms that brought peaceful protestors out into the streets after the killing of George Floyd, using geotargeted messaging, can be employed to counter such manipulation. And in Wisconsin, memes and Facebook and Twitter posts by Russian agents impersonating Wisconsin residents were used to create social discord and keep voters at home; counter-messaging can be developed with genuine locals who can build better awareness and offer tips for good social media hygiene (e.g., don’t “like” or share posts that seem salacious or outrageous without fact-checking). The same goes for the other states on this list.

In the run-up to next month’s vote, there is far too much focus on what the giant platforms, such as Facebook, can and cannot do for the nation as a whole. There is no time for that anymore. But there is still time to get smart about hunting and correcting misinformation where it is most damaging. Misinformation, like politics, thrives on microtargeting. Anti-misinformation strategists need to learn that lesson now. Today, the very future of U.S. democracy is at stake; how we combat misinformation will, tomorrow, shape the future of truth, lies, and democracies around the world.

Bhaskar Chakravorti is the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy. He is the founding executive director of Fletcher’s Institute for Business in the Global Context, where he established and chairs the Digital Planet research program.