Why sleaze is so hard to calculate.
I have a recurring dream (or perhaps nightmare) about today’s practices in social science research. In the dream, an elderly conservative gentleman — selected in a process of great complexity resting on decades of research on sampling — is subjected to questioning by a highly professional interviewer from a polling firm of great repute. The interviewer is not of the same social class, generation, gender, or ethnic origin as the respondent, reducing the possibility of an empathetic connection. In the course of faithfully administering a questionnaire whose intricate structure has taken hundreds of hours to construct, the interviewer nonchalantly reads a question: "Have you ever committed heinous acts of bestiality? Please answer yes or no." The respondent sputters a "No." The interviewer thanks the respondent profusely for his candor. Later a researcher applies the highest-powered econometric techniques to analyze the data.
Despite my years of successfully using survey data in research, the daydream above reveals a persistent worry from which I cannot shake myself free. Economists make extensive use of data about embarrassing, immoral, or illegal acts obtained by directly questioning possible perpetrators of those acts. Highly reputable organizations such as the World Bank and Transparency International publicize country corruption scores based on self-reports of bribes paid. Development economics relies on surveys that touch on issues like adherence to unpopular political opinions or participation in corruption. But all of these activities involve a great dissonance between the sophistication of the statistical methodology employed and the naiveté implicit in assuming that those dishonest enough to bribe will be endearingly honest in answering the man or woman behind the clipboard.
We know that survey respondents are often not candid when responding to questions that bear on how others view them, or indeed on how they view themselves. In one of the more amusing examples, men systematically report a greater number of opposite-sex sexual partners than women do, even though simple mathematics tells us that the average values for male and female respondents must be equal. Even in matters of the mundane, people lie. In a classic study, 19 percent of survey respondents in Chicago were found to incorrectly claim possession of a library card.
I have long been interested in how much this lack of candor affects data on corruption. In early work, Omar Azfar, who sadly passed away four years ago, and I developed a technique, inspired by a famous line from Hamlet ("The lady doth protest too much, methinks"), to detect which particular respondents were reticent about answering questions on corruption truthfully. The exact details are arcane, but the method effectively relied on cornering respondents so that if they were to remove even the remotest implication of guilt from their answers, they would also be claiming that a coin tossed seven times always came up tails. Since elementary probability theory shows that the chance of getting seven tails in a row is tiny (1 in 128, or less than 1 percent), a set of answers protesting the respondent’s innocence too much necessarily implies that the respondent has not told the truth!
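The arithmetic behind that claim is a one-liner. The seven-toss design is from the text; the snippet is purely illustrative:

```python
# Probability that a fair coin tossed seven times comes up tails every time.
p_all_tails = 0.5 ** 7
print(p_all_tails)  # 0.0078125, i.e. 1 in 128
```

A respondent whose fully exculpatory answers imply an event this improbable is, almost surely, not telling the truth.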
Managers of Romanian firms were the unfortunate subjects of our experiment. Our results allowed us to identify 10 percent of respondents as reticent for sure, and our best estimate was that a further 32 percent were reticent but not identifiable by our methods. Corruption estimates were raised by 33 percent when we took reticence into account. Those identified as unwilling to tell the truth were only half as willing as other respondents to admit to lying in their own interest!
The methodology used in Romania was a crude test of an idea, one that could be greatly improved upon. I have done so in recent work with Aart Kraay of the World Bank. Again the methods are too arcane to describe here, but they depend upon the fact that reticence distorts the responses to two different types of survey questions in different ways. The two types are conventional questions — have you paid a bribe? — and random response questions — answer yes if either your coin toss came up heads or you have paid a bribe. Although random response questions were originally proposed as a means of encouraging candor, in fact they do not do this well at all. Instead, they induce different patterns of responses than conventional questions do. We were able to mesh the two response patterns together and estimate two different characteristics of respondents: guilt on matters of bribery and reticence in answering survey questions. (With only one type of question, it is impossible to estimate two different characteristics. And that is the problem inherent in all existing surveys on sensitive topics.)
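To see how combining the two question types can identify both characteristics, consider a stylized simulation. The model here is my own simplification, not the actual estimator: a fraction of respondents are guilty, a fraction of the guilty are reticent and answer "no" to anything incriminating, and everyone else answers honestly. The true guilt and reticence rates below are made-up numbers chosen only to show that both can be recovered from the two observed "yes" rates:

```python
import random

random.seed(42)
N = 200_000
TRUE_GUILT = 0.6      # assumed fraction who actually paid a bribe
TRUE_RETICENCE = 0.4  # assumed fraction of the guilty who always answer "no"

yes_conv = 0
yes_rand = 0
for _ in range(N):
    guilty = random.random() < TRUE_GUILT
    reticent = guilty and random.random() < TRUE_RETICENCE

    # Conventional question: "Have you paid a bribe?"
    # Honest guilty respondents say yes; reticent guilty and innocent say no.
    if guilty and not reticent:
        yes_conv += 1

    # Random response question: "Answer yes if your coin came up heads
    # OR you paid a bribe."  Reticent respondents say no regardless.
    heads = random.random() < 0.5
    if not reticent and (heads or guilty):
        yes_rand += 1

p_conv = yes_conv / N  # in this model, approximately g * (1 - r)
p_rand = yes_rand / N  # approximately 0.5 * (1 - g) + g * (1 - r)

# The gap between the two "yes" rates identifies guilt (g), and the
# conventional rate then identifies reticence (r): two equations, two unknowns.
g_hat = 1 - 2 * (p_rand - p_conv)
r_hat = 1 - p_conv / g_hat
print(f"estimated guilt = {g_hat:.2f}, estimated reticence = {r_hat:.2f}")
```

The recovered estimates land close to the true 0.6 and 0.4. With only the conventional question, the single observed rate of roughly 0.36 could come from many different guilt–reticence combinations, which is exactly the identification problem the paragraph above describes.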
Earning our undying gratitude, a World Bank team included our questions in a survey of Peruvian firms and the Gallup Organization fielded them in 10 Asian countries in its World Poll. Using conservative assumptions, we found that respondents in Peru answered conventional questions on corruption candidly only 50 percent of the time. Adjustment for this reticence doubled the estimate of the incidence of bribe-paying. With less conservative but still reasonable assumptions, the estimate of bribe-paying was triple the standard one. Across the Asian countries, the proportion of conventional questions answered candidly varied from a high of 79 percent in Indonesia to a low of 53 percent in India, meaning our estimates of corruption were 25 percent higher than standard estimates in Indonesia and 100 percent higher in India.
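The direction of these adjustments follows from a simple rescaling. In this back-of-the-envelope version (the 50 percent candor rate is the Peru figure above; the 20 percent observed rate is a made-up illustration, and the real estimator is considerably more involved), the observed rate understates the truth by a factor of one over the candor rate:

```python
def adjusted_rate(observed_rate, candor):
    """Implied true bribery rate when only a fraction `candor` of those
    who should answer yes actually do so."""
    return observed_rate / candor

# With Peru's 50 percent candor, any observed bribery rate doubles.
print(adjusted_rate(0.20, 0.50))  # 0.4
```

The same logic explains the cross-country pattern: the lower a country's candor rate, the larger the upward revision to its corruption estimate.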
Organizations that produce country data on such compelling subjects as corruption are fond of producing rankings, and the media is quick to convert plain vanilla estimates into startling comparisons. But our research shows that such comparisons might be off the mark, because different samples of respondents have different propensities for reticence. For example, in Peru, conventional measurement indicates that very small firms are three times as likely to pay bribes as large firms, but the ratio increased to sixfold when we used our new methods. Bribe-paying in the region of Arequipa looked quite similar to that in Lima until we applied our methods and then found three times as much in Arequipa as in Lima. The situation is similar for cross-country comparisons, with our methods making India look much worse and Indonesia much better, comparatively speaking.
Much rests on estimates of corruption — for example, aid decisions by the World Bank, the Millennium Challenge Corporation, and USAID. Research on how to build policies and institutions to combat corruption depends on corruption data obtained from surveys. These are vital matters for economic development, and they could be pursued much more effectively if data were available that were free from the biases caused by the reticence of survey respondents. Perhaps researchers have given short shrift to this problem because they have downplayed the effect of reticence. Judging by our latest results, these researchers have — to borrow George W. Bush’s evocative neologism — misunderestimated the problem.