The future of fake news is messaging apps, not social media. And it’s going to be even worse.
- By Nicholas Dias. Nicholas Dias is a senior research fellow at First Draft News, a nonprofit research organization devoted to supporting truth and trust on the internet. His research foci include the use of bots to gain disproportionate voice on social networks.
In some places, the future of misinformation is already here.
A hoax about child-napping con artists led to the beating of two people this spring in Brazil. A rumor about a salt shortage last fall sparked panicked rushes to markets in several Indian states that turned fatal. And fabricated poll reports sowed doubts about the electoral standing of candidates ahead of this month’s elections in Kenya, where the result is disputed and dozens have been killed in protests.
When fake news has violent consequences, journalists have a duty to set the record straight as quickly as possible. But the details of these rumors — who was behind them and why — are particularly murky and likely to remain that way. That’s due to one seemingly trivial detail: In all of these cases, the misinformation made its way to readers via the messaging service WhatsApp.
Closed messaging apps like WhatsApp and Viber continue to grow in popularity worldwide. And as the popularity of Facebook and Twitter as news sources shows signs of stagnating or declining around the world, messaging platforms are increasingly becoming a means through which users learn about the wider world. A recent YouGov survey of over 70,000 people in 36 countries found that 23 percent of respondents “find, share, or discuss” news using at least one messaging service. In Asian and Latin American countries like Malaysia or Brazil, that number is closer to 50 percent, and WhatsApp is almost as common a source of news as Facebook.
Messaging platforms have yet to provoke much discussion among misinformation and disinformation researchers (myself included) in the West, who have been trying to devise best practices for responding to viral rumor and disinformation campaigns. But these simple apps deserve attention as the dark future of misinformation and disinformation.
Unlike Twitter or Reddit, messaging apps are not designed to be public squares where users can mingle with millions of strangers. They began as cheap, data-lean alternatives to SMS texting or as ways to send private, encrypted messages.
Most of these apps restrict users to one-on-one chats with contacts in their phones or to private group chats with no more than 500 friends of friends. While a conversation with hundreds of participants certainly doesn’t feel too private, these group chats are still closed in the sense that everyone in them must be invited by an existing member, and there’s no way to know whether a group exists unless you’re a part of it. Furthermore, with a few exceptions, there are no trending lists or social feeds providing input from outside a user’s network. Some mobile messaging companies have recognized the potential for their apps to deliver creative or editorial content, offering features through which users can subscribe to one-way chats with publishers. These are not public forums, though; users can like messages and see how many times each has been viewed, but only the publisher can post messages to subscribers.
In short, barring a few exceptions, all activity on these platforms that exists outside one’s immediate network is completely invisible. On apps with end-to-end encryption — like WhatsApp, Telegram, or Viber — ostensibly not even the platforms themselves can always see what’s being discussed by users. It’s for this reason that some who have been paying attention to messaging platforms call them “dark social.”
The obscurity of messaging apps poses obvious problems for journalists trying to quickly find and debunk falsehoods on these platforms. To begin with, it’s harder for journalists or others trying to combat misinformation to identify just what is circulating on these platforms in the first place. But even when a rumor has been pinpointed, it’s harder to take the first necessary step in the fact-checking process: identifying the original source of a piece of content. Hoaxes on messaging services often don’t come with citations or hyperlinks; rather, they’re commonly standalone media or blocks of text, sometimes attributed to official sources. (“The next [Richter] scale of earthquake will be 8.2. News From NASA. Plz forward the message as much as u can” is one typical example from India.) Unattributed or falsely attributed images, videos or text can be searched on Google. However, where the original instance of the content cannot be found by Google’s web crawlers — as in cases where the content originated on the messaging platform itself or has been edited — journalists are left at a dead end.
These apps also have features that complicate matters for anyone looking to spread false information. It’s harder for actors to artificially boost their message as they have done with, say, bots on Twitter. To send someone a message on these platforms, you must have their mobile number stored in your phone or at least know their exact username. The prominent messaging apps also require users to sign up with a valid cell phone number, verified via a text message or call, in order to access their phone’s contacts and send messages.
To be sure, circulators of disinformation could easily buy a list of phone numbers or scrape online telephone directories, and there are ways for the highly motivated to bulk purchase internet phone numbers or SIM cards, as well as ways to automate group and message creation. However, most messaging services enable users to flag spammers. In addition, WhatsApp and Viber have announced spam-detection measures that supposedly prevent accounts from sending too many unwanted messages.
The more likely way for malicious actors to engineer virality on a messaging platform would be to simply coordinate with like-minded others who are already using these apps and have cultivated large networks. Similar tactics are reportedly being used by India’s Bharatiya Janata Party (BJP), which is preparing for the 2018 election by training 100 volunteers to distribute messages via at least 5,000 WhatsApp groups. To be clear, this is not a suggestion that the BJP is using these methods to spread disinformation — but it’s easy to see how those with nefarious intentions could use these tactics for their own ends.
It seems likely that, absent involvement from mobile messaging companies themselves, the immediate fight against hoaxes and propaganda on their platforms will involve crowdsourcing. And indeed, creative uses of crowdsourcing to get around the barriers of messaging apps have already begun to emerge in countries awash with WhatsApp hoaxes. As reported by the Nieman Journalism Lab, the Colombian political news site La Silla Vacía has begun encouraging its readers to submit screenshots of WhatsApp messages they suspect to be hoaxes. Then, after fact-checking a hoax, it asks the submitter to share another screenshot showing they’ve forwarded the fact-check to their contacts, thereby targeting the social circles through which the hoax spread. WhatsApp tips are similarly being accepted by fact-checking groups in India and Brazil, like BoomLive and Boatos.
But fact-checking, by its very nature, will always be one step behind misinformation and disinformation. Journalists must therefore also pursue every proactive option available. That means educating the public on how to question and verify online content through new media literacy programs, and repairing the deficit of trust in journalism that creates an appetite for unverified reports in the first place and thwarts any attempt at their correction.
Both will require a daunting commitment of time and resources. But the future of misinformation and disinformation is coming, and we need to begin preparing now.
Image credit: Justin Sullivan/Getty Images