Why Washington and Silicon Valley must work together to truly understand the world.
- By Kalev H. Leetaru. Kalev H. Leetaru is the Yahoo! Fellow in Residence at the Institute for the Study of Diplomacy in the Edmund A. Walsh School of Foreign Service at Georgetown University. His work centers on the application of "big data" to understanding global human society in new ways, and he is the creator of the GDELT Project.
One of the most striking revelations of the Edward Snowden disclosures has been the U.S. intelligence community's single-minded focus on collection: hoovering up all global communications while the question of analysis, of what to actually do with all those communications, is relegated to an afterthought.
Gleeful bragging and schoolboy taunts of hacking coups permeate the disclosed documents, but discussion of how the captured data can actually be used is far more mundane and subdued. The thrill clearly lies in the chase of new data, rather than in the actual analysis of the material obtained. In fact, an image emerges of a community struggling merely to read its highest-priority intercepts, let alone assess their contents.
An exchange among analysts monitoring the Lashkar-e-Taiba terrorist group notes, "Most of [the intercepted communications traffic] is in Arabic or Farsi, so I can’t make much of it." In an era when web browsers instantly translate over 80 languages, it seems absurd that the analysts tracking one of the largest terrorist organizations in South Asia can’t even read its language. The prioritization of collection has led us to a point where the United States can acquire a terrorist group’s internal communications at will, yet it is all for naught when the analysts don’t know what those communications say.
While the National Security Agency’s surveillance techniques may awe in their technological sophistication, they are in reality nothing more than a modernization of age-old collection tradecraft for the computer era. Hacking into computers and downloading files is arguably no different from breaking into an office and photocopying papers. Maintaining lists of software vulnerabilities is no different from curating bypasses for alarm systems and safes. And intercepting computers in transit to install keystroke tracers is little different from hijacking furniture shipments to install listening devices.
Even the most controversial of the National Security Agency (NSA) programs, the bulk collection of phone metadata, traces its roots back nearly 70 years to an earlier NSA effort, Project SHAMROCK, which monitored all international telegrams crossing the borders of the United States (and which was in turn based on the post-World War I "Black Chamber"). Lasting nearly 30 years, SHAMROCK collected over 150,000 telegrams per month at its peak, and public discovery of the operation led to the creation of the FISA court system that is now central to the modern surveillance apparatus. Even the frenzied hyperbole was the same, with Sen. Frank Church, chairman of the Senate Committee to Study Governmental Operations With Respect to Intelligence Activities, condemning the program as "probably the largest government interception program affecting Americans ever undertaken."
The problem is that this modernization of collection tradecraft has not been matched with a modernization of analysis tradecraft. The intelligence community has become so enamored with its collection gadgets and technology that it has forgotten that the true purpose of intelligence is not merely to collect, but to actually create intelligence from all that collected material, which requires interpretation and analysis. In essence, with a single-minded focus on chasing new information, James Bond has become Melvil Dewey, simply collecting and archiving the world’s information. This has not gone unnoticed, as Josh Kerbel, chief analytic methodologist for the U.S. Defense Intelligence Agency, has written extensively on the community’s "inability to think big" and its need to "better balance its traditional focus on entities with a greater appreciation for phenomena … [to] recognize the futility in an excessively insular and closed organization trying to understand and make sense of an open, complex world."
The world is a far more complex place than it was half a century ago at the founding of the American intelligence apparatus, when it was just the Allies versus a handful of villainous nation-states. The Hollywood image of a lone spy in a telephone exchange closet with a pair of headphones and a notepad, jotting down Vladimir Putin’s secret plans for eastern Ukraine and relaying them back to Washington, simply isn’t possible in a world where there are more targets to monitor than spies to watch them. In today’s data-drenched world, it is increasingly subnational actors with fluid geographies that threaten America’s interests, and Twitter alone publishes more words in a single day than were printed in the entire New York Times over the past half-century. A private individual can post a video to YouTube that convulses an entire region and unleashes a torrent of anti-American sentiment faster than analysts even realize there is a problem. In short, today’s world moves too fast and involves too many actors for humans sitting at their desks to actually understand the flood of information that our modernized data plumbing is delivering to our doorstep.
Why is it that collection tradecraft has modernized to keep pace with technological innovation, while analysis has advanced little from the human-driven qualitative methodology of half a century ago? One likely answer is that when it comes to collection, the intelligence community has stepped back and allowed technologists to lead the way, while analysis has been kept firmly within the death grip of those who view computers as a threat to their job security. Even as a new generation of data-savvy analysts increases demand for technological innovation, management often views such tools with suspicion, afraid they will displace and shrink their human-centric empires, rather than seeing the potential of the tools to refocus analysts on actual analysis.
The increasing priority given to information collection has generated a need for translation across disciplines — to bring together the technical, analytic, and policymaking communities to work together as a team, rather than as separate enterprises. Silicon Valley produces a wealth of new tools each day to store, manage, visualize, and analyze the big-data universe, but it approaches the data revolution purely through the lens of technological innovation. Analysts too often view data tools as a threat to their jobs, fixate on those tools’ limitations, lack the technical skills to use them, or fall victim to marketing hype, using tools and data blindly without understanding how to properly apply them. Policymakers just want answers, but often ask questions that are not well aligned with today’s technological capabilities and view data’s inability to answer their questions as proof of its immaturity.
What we need is to increase Washington’s "data literacy" and Silicon Valley’s "application literacy" to bring the two worlds together in the pursuit of data-driven intelligence and policymaking.
I sat through a presentation last year featuring a sequence of visually stunning maps and charts depicting changing Syrian rebel views toward the Syrian regime. Yet not a single visualization identified its source data, analysis algorithms, or confidence intervals, beyond a note that the results were based on "billions of observations."
When I raised my hand and asked how street-level data on rebel views had been assessed using open sources, the response was that the Twitter fire hose (containing "billions" of tweets) had been scanned for English-language tweets originating in Syria, yielding a few hundred tweets, and that the software used to codify the tweets for mapping specifically warned that it would generate invalid results on social media data. When I noted that I thought the rebels were primarily using Arabic-language Facebook posts, the answer was that Twitter data was easier to access, that no one on the team spoke Arabic, and that the software had been selected because it was easy to use. Thus, what started as a sequence of alluring visualizations touting street-level results based on "billions of observations" turned out to be a handful of English-language tweets in a country where few speak English and Facebook is the rebels’ primary communications platform.
The researchers behind these maps did not mean to mislead. They simply suffer from "shiny new object" syndrome — well-meaning technologists grab data sets and off-the-shelf tools and create technological showpieces that often bear no resemblance to ground reality. Had this been a traditional intelligence assessment, it would have come with a detailed accounting of the underlying sources, assumptions, analyses, and caveats of the results, and much of the conversation would have focused on how closely the data reflected the overall rebel population. Instead, the lack of "application literacy" among technologists means that data-driven analyses are often based on whatever data and tools are easiest to obtain and use. The question of how closely those data sets or algorithms reflect reality never comes into play, as the computer science world centers on software rather than solutions.
If intelligence analysts and policymakers understood more about data, they would be able to see past the shiny animated maps and ask the hard questions about what lies underneath. In doing so they might become more confident in the results and more willing to allow data-driven analyses to play devil’s advocate. A number of Wall Street forecasts correctly predicted the 2008 financial collapse, but were dismissed by human analysts who trusted their own gut feelings over data. Similarly, forecasts of the 2011 Egyptian revolution were crippled by subject matter experts who argued that Mubarak could never fall, leading to computer forecasting models being built only to forecast what would happen if he stayed in power. The megastore Target deployed highly sophisticated intrusion-detection software that correctly flagged its credit card breach, but whose alerts were dismissed when they conflicted with human intuition.
Nearly half a century of investment in data-driven conflict forecasting has yet to produce a system that can forecast the Arab Spring or the Russian annexation of Crimea. Instead, the political science community touts computer models with 80 percent accuracy because they correctly predict each day that Syria will have conflict the following day, or announces it correctly predicted Venezuela’s unrest after two weeks of New York Times headlines chronicling the spiraling violence. If one predicts that no country will undergo a coup at any point during the coming year, that’s technically 196 countries times 365 days = 71,540 daily predictions. If even 10 countries undergo coups this year, that’s still 99.986 percent accuracy. To a data-driven political scientist, 80 percent accuracy may be a major technical achievement, but to a policymaker, being told Venezuela may have problems two weeks after it has already unraveled is not predictive, and it undermines their trust in what data is capable of. Increasing application literacy would help the quantitative political science community begin to bridge the gap between technical achievement and useful results, while increasing data literacy would help policymakers better communicate to them what constitutes a useful and actionable result.
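The base-rate arithmetic above can be made concrete with a short illustration (a hypothetical sketch written for this piece, not from any forecasting system described in the article):

```python
# Illustrative sketch of the base-rate accuracy problem: a "model" that
# predicts no coup anywhere, every day, scores near-perfect accuracy
# despite being useless to a policymaker.

countries = 196
days = 365
total_predictions = countries * days          # 71,540 daily predictions

coup_days = 10                                # suppose 10 country-days see a coup
correct = total_predictions - coup_days       # always-"no" misses only those days

accuracy = correct / total_predictions
print(f"{total_predictions} predictions, accuracy = {accuracy:.3%}")
# → 71540 predictions, accuracy = 99.986%
```

The point is that raw accuracy rewards predicting the overwhelmingly common outcome; a useful forecast would instead be judged on how well it anticipates the rare events policymakers actually care about.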
In short, we need to get Silicon Valley to better understand the applications of its technology, while helping policymakers understand what actually goes into the pretty pictures they get back from all of these tools. Perhaps the intelligence community should take a page from the playbook of its favorite hero, James Bond. After 50 years of devolving from intellect and action into a two-hour technology show of gadgets, the Bond franchise was rebooted. To the modern Bond, gadgets are simply tools to help him accomplish his mission, with the focus on analysis, assessment, and action. To stay relevant in an ever-evolving and complex world, the U.S. intelligence community must bring Silicon Valley and Washington together to help it "reboot" itself and refocus on understanding the world, and in doing so become less like Melvil Dewey and more like James Bond again.