Zuckerberg: We’re in an ‘Arms Race’ With Russia, but AI Will Save Us
Buckle up — the technology won’t be ready for another decade.
Facing hours of sharp questioning in Congress over his company’s privacy practices and its response to Russian election meddling, Facebook CEO Mark Zuckerberg repeatedly argued on Tuesday that his company was engaged in an “arms race” with Moscow’s intelligence agencies. Zuckerberg said artificial intelligence, or AI, represents the best solution to misinformation on Facebook but won’t be ready for another five to 10 years.
In the run-up to the 2016 U.S. presidential election, Russian intelligence operatives used Facebook and other online platforms to spread divisive content generally aimed at boosting President Donald Trump’s electoral chances. On Tuesday, a remorseful Zuckerberg admitted that his company’s response was slow, calling it one of his “greatest regrets.”
Zuckerberg argued that the highly sophisticated Russian approach to spreading its influence online has left his company at a distinct disadvantage.
“There are people in Russia whose job is to exploit our systems,” Zuckerberg said. “This is an arms race. They’re going to keep getting better.”
Facing what is arguably the greatest public relations crisis in his company’s history, Zuckerberg in hours of testimony Tuesday conveyed a message of contrition and a much more aggressive role for the company in determining what content and speech belong on the platform.
“We need to take a more active role policing the ecosystem,” Zuckerberg said.
Prepped for his long-awaited testimony before the Senate Judiciary and Commerce committees by a team of PR professionals — Facebook even hired the powerhouse law firm WilmerHale to coach the founder — Zuckerberg repeatedly returned to the arms race analogy. Zuckerberg’s invocation of an arms race was eclipsed only by the number of times he cited artificial intelligence as the solution.
Artificial intelligence, Zuckerberg said, is the only “scalable way” that the company will be able to prevent hate speech from flourishing on the platform and to detect fraudulent accounts used by Russian intelligence operatives to spread disinformation.
The challenge for Facebook lies in detecting these accounts quickly, before they are able to push divisive political messages on the platform. Zuckerberg said that in the run-up to the 2016 election, his company detected Russian attempts to hack into campaign officials’ accounts, but it failed to understand the broader, more widespread Russian campaign to spread propaganda and even organize protests through the platform.
Zuckerberg is betting that his company can build algorithms to police content and users in such a way that will keep violent extremism, foreign political manipulation, and hate speech off Facebook. (He’s also beefing up his staff and said Facebook will have 20,000 employees working on security and content moderation by year’s end.)
Facebook has already deployed such technology around several recent elections, including the special Senate election in Alabama and recent elections in Germany and France, Zuckerberg said, adding that the company has seen some success in using the technology to detect fake accounts.
Ahead of the Alabama election, Facebook’s algorithms detected and removed fake accounts from Macedonia attempting to spread misinformation; in France, ahead of the 2017 election, Facebook’s AI tools took down 30,000 fake accounts.
Zuckerberg said his company’s efforts to ban terrorist content from the platform give it hope that it can use artificial intelligence to detect and block calls to violence. Some 99 percent of al Qaeda and Islamic State content is blocked before anyone has a chance to see it, Zuckerberg claimed. “We hope to develop more tools like this.” (The Counter Extremism Project, a nonprofit group, argues these figures are overstated and that prominent Islamist extremists remain active on the platform.)
But even as he touted artificial intelligence as the long-term solution for the company’s problems in policing content, Zuckerberg conceded that the technology is far from mature.
Though it claims to have developed AI tools to root out foreign intelligence operatives, Facebook keeps discovering them in its midst. Just last week, the company took down another 70 Facebook accounts, 138 Facebook pages, and 65 Instagram accounts controlled by Russia’s Internet Research Agency (IRA), a baker’s dozen of whose executives and operatives have been indicted by special counsel Robert Mueller for their role in Russia’s campaign to propel Trump into the White House.
“We know that the IRA — and other bad actors seeking to abuse Facebook — are always changing their tactics to hide from our security team,” Facebook Chief Security Officer Alex Stamos wrote in a blog post. “We expect we will find more.”
Pressed by Sen. Kamala Harris, a California Democrat, on whether Facebook has rooted out intelligence operatives from the platform, Zuckerberg said it was unlikely: “I can’t say we’ve identified all of the foreign actors involved here.”
And while artificial intelligence may provide the solution to quickly identifying violent content or accounts belonging to intelligence operatives, Zuckerberg also conceded that the technology has a hard time dealing with some of the more difficult problems facing the company, such as the detection and elimination of hate speech.
“Over a five- to 10-year period, we’ll have AI tools that can get into some of the nuances” of what does and does not constitute hate speech, Zuckerberg said. Understanding those “linguistic nuances” is necessary for artificial intelligence to accurately detect hate speech, but that technology “today is just not there on that.”
Zuckerberg also apologized for his company’s belated response to Russian meddling and for allowing Cambridge Analytica, a political consultancy hired by the Trump campaign and backed by the GOP financier Robert Mercer, to access the data of some 87 million Facebook users.
Zuckerberg said Facebook was built as an “idealistic and optimistic” company but had failed to consider how a platform meant to connect people could be “used for harm.” From fake news to foreign interference in elections to hate speech and data privacy, Zuckerberg said his company “didn’t take a broad enough view of our responsibility.”
“That was a big mistake,” Zuckerberg said. “It was my mistake, and I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here.”
Zuckerberg added that his company has finally shed its early dorm-room slogan. It used to be “move fast and break things,” Zuckerberg said. Now, it’s “move fast with stable infrastructure.”