How important is representative data in human rights work?
By Allison Good, an editorial researcher at Foreign Policy.
There is an urgency to humanitarian response that Tina Rosenberg (“The Body Counter,” March/April 2012) and her subject, Patrick Ball, do not seem to appreciate. Representative numbers are important in the human rights field, but only to the extent that they actually improve people’s lives. The U.S. Marine Corps used the SMS data mapped on the Ushahidi-Haiti platform to save hundreds of lives after the 2010 earthquake. Something is wrong when self-styled human rights defenders attack lifesaving volunteer work.
Ball does not typically work with representative samples. He simply applies methods that assume he has nicely behaved random samples. Validation studies are needed to demonstrate that a technique can perform well despite substantial real-world violations of its assumptions. Validation works by applying the technique to a case with an already known answer to test whether one gets the right answer. There are, unfortunately, few validation studies in Ball’s area of work. Many of his studies therefore ride entirely on assumptions.
In the case of Haiti, we actually have a strong validation study. This research was produced independently by the European Commission’s Joint Research Center and survived a peer-review process, which is how scientific work is validated. The Ushahidi data provided a truer guide to the damage in Haiti than Ball’s alternative — a map of buildings — would have. (Note that no such map existed at the time anyway.)
Ball and Rosenberg also appear to be confused about the Ushahidi platform, which is simply an information collection and visualization tool that is equally usable for either representative or nonrepresentative data. Columbia University researchers Macartan Humphreys and Peter van der Windt, for example, used Ushahidi to collect and visualize representative cell-phone data in their Voix des Kivus project in eastern Congo.
Finally, Rosenberg states that I “ultimately retreated to a narrower set of claims” after defending the European Commission’s analysis. Absolutely not. I fully stand by my original arguments.
Patrick Meier
Director of Crisis Mapping, Ushahidi
Tina Rosenberg replies:
Patrick Meier is incorrect in thinking I am attacking Ushahidi’s lifesaving work. The information Ushahidi collects is invaluable for first responders during times of man-made or natural disaster. The question is whether Ushahidi can go beyond this core mission to map a disaster accurately.
Meier contends that the Ushahidi platform can be used to collect and visualize representative cell-phone data. But it is almost never used this way, and the Voix des Kivus project he cites shows why: it achieves a representative sample only by pre-positioning cell phones and pre-training villagers selected to represent different groups.
Most disasters, of course, do not wait for this kind of preparation. In the vast majority of cases, then, Ushahidi’s data can pinpoint reports of violence or destruction, but cannot reliably describe their pattern or multitude.
As Tina Rosenberg’s profile of statistician Patrick Ball details, measuring suffering is a complex endeavor. And the International Rescue Committee (IRC) agrees with Ball that practitioners must use the best available data and scientific methods in obtaining numbers. We disagree, however, with his suggestion that the IRC erred in estimating that 5.4 million people died from conflict-related causes in the Democratic Republic of the Congo between 1998 and 2007.
Over a seven-year period, the IRC partnered with leading epidemiologists to conduct five mortality surveys in Congo. To estimate the number of war-related deaths, our experts needed to know the prewar mortality rate. The best source available was Congo’s official crude mortality rate of 1.3 deaths per 1,000 people per month, based on the country’s most recent national census.
Ball is quoted as saying that the prewar rate we selected was “far too low,” leading to a higher excess death estimate. In fact, to be conservative in our calculations, the IRC deliberately used a baseline rate for sub-Saharan Africa that was 15 percent higher than Congo’s official rate and 20 percent higher than that used by UNICEF. Using this higher rate resulted in a lower estimate of conflict-related deaths.
We also disagree with Ball’s claim that a “correct” baseline estimate would have resulted in “an excess death figure that is only one-third or one-fourth as high.” To arrive at such a low figure, one would need to assume that the prewar baseline mortality rate for Congo was 2.85 deaths per 1,000 people per month — more than double the rate used by Congo’s government and UNICEF and higher than any rate ever reported for an African country. This is simply not a plausible baseline figure.
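The dispute turns on simple arithmetic: excess deaths are the observed wartime mortality rate minus the assumed prewar baseline, scaled by the population and period. A minimal sketch of that calculation follows; the baseline rates (1.5 and 2.85 deaths per 1,000 people per month) are those debated in the letter, but the observed wartime rate and population figures are hypothetical placeholders chosen only to show how sensitive the result is to the baseline.

```python
# Illustrative excess-mortality arithmetic. The two baseline rates come from
# the letter above; the observed rate, population, and time span below are
# hypothetical placeholders, not the IRC's actual survey inputs.

def excess_deaths(observed_rate, baseline_rate, population, months):
    """Excess deaths = (observed - baseline) deaths per 1,000 people,
    scaled by population and number of months."""
    return (observed_rate - baseline_rate) / 1000 * population * months

population = 40_000_000  # hypothetical affected population
months = 60              # hypothetical survey window
observed = 3.5           # hypothetical wartime rate, deaths per 1,000/month

low_baseline = excess_deaths(observed, 1.5, population, months)
high_baseline = excess_deaths(observed, 2.85, population, months)

# Raising the baseline shrinks the excess: with these placeholder inputs,
# the 2.85 baseline yields roughly one-third the excess of the 1.5 baseline.
print(f"baseline 1.5  -> {low_baseline:,.0f} excess deaths")
print(f"baseline 2.85 -> {high_baseline:,.0f} excess deaths")
```

The same arithmetic underpins both sides of the exchange: the higher the prewar baseline one assumes, the smaller the excess attributed to the war.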
Prior to the IRC surveys, the world made wild guesses about Congo’s war-related mortality. Today, there is hard evidence that millions died. Like Ball, we believe that “measurement matters.”
Senior Health Director, International Rescue Committee
New York, N.Y.
Tina Rosenberg replies:
In its Congo studies, the International Rescue Committee (IRC) compared the death rate during the war with a prewar baseline death rate and then calculated the excess. Obviously, if the baseline death rate is too low, then the excess will be too high.
What was the IRC’s baseline? The average death rate in sub-Saharan Africa. But by any measure, conditions in the Democratic Republic of the Congo — even before the war — were worse than those almost anywhere else on the continent. The death rate in Congo was likely much higher than the African average.
While the IRC’s figures have been widely cited in the media, they are controversial among demographers. One study by two Belgian demographers using a higher baseline came up with an estimate of 200,000 excess deaths due to the war between 1998 and 2004. For the same period, the IRC estimated 3.9 million excess deaths — 20 times as many.