Satellite Surveillance Can Trace Atrocities but Not Stop Them
George Clooney’s pioneering data project documented horrors in Sudan, but that wasn’t enough.
When I took an NGO job in Darfur in 2006, the ongoing atrocities there were well known enough that even my least internationally minded relatives asked me whether I was sure I wanted to go. Darfur had become a cause célèbre for figures like George Clooney, who helped bring obscure Sudanese terms like janjaweed into the mainstream U.S. consciousness.
But for all the effort and publicity, the United States, Europe, and the international community did little to stop the mass killings in Darfur. From my new position inside the country, it seemed reasonable that the disconnect might have to do with lack of evidence. The government was denying wrongdoing and even involvement. The International Criminal Court (ICC) was investigating the situation, very slowly. NGOs were debating the value of contributing to advocacy at the risk of being kicked out of the country and not being able to provide humanitarian aid. At the time, the relatively new idea of using satellite surveillance to not only document the attacks but also identify the perpetrators would have seemed like the perfect solution to make the horror stop.
But now it’s 2020, and skepticism about surveillance and technology is the norm—especially at the intersection of military intelligence and humanitarian aid. So when I saw a tweet making the rounds—to the tune of 30,000 retweets—about Clooney spending his hard-earned Nespresso dollars on a satellite to track Sudanese President Omar al-Bashir (now very deservedly deposed and an indicted war criminal), I was curious to get a version of the story a little less starry-eyed than the gushy 2013 Huffington Post article screenshotted in the tweet—and to see what role this kind of outside surveillance had actually played. In an age of ubiquitous cameras and big data, it turns out, documentation might be easy, but political action often remains as out of reach as ever.
I reached out to Nathaniel Raymond, the director of operations of the initiative mentioned in those articles, the Satellite Sentinel Project (SSP). A collaboration among a number of organizations and housed at the Harvard Humanitarian Initiative, SSP was largely funded by Not on Our Watch, the organization started by Clooney along with some of his Ocean’s Eleven co-stars. SSP has now closed, and the Clooneys have shifted focus to the Sentry project, which follows the money rather than tracking armed movements in satellite photos, but Raymond believes that in today’s data-intensive world, the work of SSP is more relevant than ever.
SSP went a step further than previous efforts to document mass killings, seeking to identify the indicators needed to predict them so that information could be shared before they happened. As Raymond told me by phone, “We went into SSP believing we could standardize the observable patterns that would happen in certain kinds of atrocities and create a new forensics.” This is possible because, as Raymond explained, “there’s a logistical ground pattern required to kill a lot of people.” It was a chilling reminder of just how systematic such atrocities are. And in today’s world, the prepositioning of troops and equipment necessary for a massacre is not only predictable; it’s also “entirely visible from space.”
SSP was largely successful in its predictive goals. The Harvard Humanitarian Initiative’s report on the pilot phase of the project makes for grim but impressive reading about large-scale violence that was predicted before it happened, recorded in almost real time as it occurred, and further documented as the perpetrators, to varying degrees, attempted to conceal it. The analysis was accurate and prescient enough that the report quotes Rebecca Hamilton, a former special correspondent for the Washington Post in Sudan and a fellow at the Pulitzer Center on Crisis Reporting, as calling the attack on Abyei “perhaps the most clearly forecast crisis in history.”
But if the complex alignment of targeted tasking of satellites and expertise-based human analysis of data was successful, the impact of the project was not what its founders had hoped for. Raymond said one of the lessons from SSP was that “documentation is no substitute for political will.” In one case—the attack on Kadugli in 2011—the documentation did force the U.S. government to admit there were grounds for investigation, but the groundbreaking work of SSP led to very little change in the humanitarian community’s response to the documented, and even predicted, horrors.
In retrospect, it seems almost naive to imagine it would. But in 2010 Bashir had just been indicted by the ICC, and Responsibility to Protect (R2P), a doctrine urging accountability for the violence of states against their own citizens, had just been the subject of a U.N. secretary-general’s report. Governments and the international community claimed that they needed evidence to act; it made sense to provide that evidence.
There was another factor in that optimism as well, one that sounds very familiar today amid tech buzzwords thrown around in the promise of transformative initiatives. In a 2017 paper in Genocide Studies and Prevention, Raymond and co-author Kristin Bergtora Sandvik call this “technology optimism,” an often implicit belief that the use of information and communication technologies has “an inherently Ambient Protective Effect (APE); i.e. causally transforming the threat matrix of a particular atrocity producing environment in a way that improves the human security status of targeted populations.” As with surveillance cameras in public areas, there is an assumption among some sectors of the population that they will make the situation better by their mere existence, that the act of surveilling itself will prevent bad things from happening. In fact, the reverse can happen. In a 2016 dissertation studying Amnesty International’s Eyes on Darfur project, Grant Gordon found that “Amnesty’s advocacy effort was associated with between a 15 and 20 percentage point increase in violence in monitored areas.”
Faced with lack of political will to act on the data it provided, SSP adjusted its theory of change. “Not doing it publicly was a key lesson,” Raymond said. “It was a decision support tool, not an advocacy tool.” The analysis worked—so successfully that SSP has since taught its predictive techniques to groups such as the World Food Program—but it didn’t galvanize either government or public action the way the project had hoped.
The success of the program also led to another insight, one that Raymond believes is even more crucial. As the team realized that they could accomplish what they set out to accomplish—a new form of data that was predictive—they also started to understand that they were working without an ethical net. The idea of primum non nocere, or first, do no harm, is commonly associated with medicine, but it is also the basis for an important humanitarian principle, do no harm. However, as Raymond said, “First you must know the harm before you cannot do the harm.” The kind of work SSP was doing was so far outside of the existing strands of information ethics—primarily focused on individual privacy on the one hand and the limits established by the Nuremberg tribunals on violation of agency on the other—that it had no framework yet for figuring out where the limits were.
“It was seen as beneficent, but we had no standard for nonmaleficence,” Raymond said. “There was no tort standard by which they could conceivably sue us. What we did there for good, anyone else could do for any reason.”
This is a quintessential modern quandary. As data technology races ahead of regulation and common awareness, governments, companies, and individuals lack the ethical tools needed to decide where the lines are, much less how to hold transgressors accountable. Individuals have enough trouble figuring out what Facebook or Google or Apple has done with their data; deciding whether those actions are merely irresponsible or negligent or actively wrong is a problem that we don’t have any framework for solving.
Many of SSP’s former staff have turned their attention to these ethical questions. The article Raymond wrote with Sandvik, for example, proposes a theory of harm for the use of information and communication technologies in mass atrocity response. But, like the urgent data that SSP collected, these ethical innovations are likely to take some time to have an impact on the organizations and people for whom they are most important.
The Harvard Humanitarian Initiative report cites Clooney himself asking, at the beginning of the project, “why any person could use Google Earth to see his house in Los Angeles but could not see images of the deteriorating situation in the Sudan border region.” In a 2013 Guardian article, Clooney commented on his “keeping an eye on” Bashir: “Then he puts out a statement saying that I’m spying on him and how would I like it if a camera was following me everywhere I went and I go, ‘Well, welcome to my life, Mr. War Criminal.’ I want the war criminal to have the same amount of attention that I get. I think that’s fair.” As a movie star, Clooney was intimately aware of the potential of surveillance. The problem was not the lack of technology. The problem was, and continues to be, where the West decides to put its attention.