Big Brother Turns Its Eye on Refugees

Biometrics have crept into humanitarian aid, but the systems may disadvantage women who need help most.

An employee of the United Nations High Commissioner for Refugees scans the eye of an Afghan refugee at the UNHCR registration center in Peshawar, Pakistan, on June 23, 2016. A Majeed / AFP / Getty Images

Perhaps without our realizing it, biometrics have crept into many corners of our lives. The prevalence of biometrics, the use of physical characteristics to determine and verify identity, facilitates our access to the world. Be it fingerprinting, iris scans, or facial recognition, biometrics have streamlined identification, making it ever easier to add ID checks to interactions that previously lacked them. Biometric registration has even become standard practice in the humanitarian aid industry.

By adopting biometric registration systems, aid agencies efficiently provide refugees with an official identity, prevent fraud, and improve the dignity of the aid process. Yet despite those benefits, biometrics also threaten the security of refugees, especially women, in three ways: through a greater risk of false matches, through increased potential for discrimination, and through opportunities for exploitation. The first step in correcting these problems is recognizing them.

Biometrics have successfully provided a universal identity to refugees, many of whom lost or destroyed identifying documents during their flight, or never possessed any to begin with. Approximately 1 billion people globally lack identification, and women are disproportionately represented among them. Providing documentation results in greater access to aid services, employment, and personal empowerment. Through fingerprinting and iris scans, many aid agencies now register refugees as they come through camps, allowing refugees to confirm their identities without prior documentation. Such a system can help prevent benefit fraud, whether from double-receipt of aid or from organizers who divert aid away from the populations in need. Biometric systems thus offer important oversight in the refugee aid industry.

Nevertheless, the history and implementation of biometrics raise some concerns. Their use has typically been imposed by donor states rather than chosen by the aid agencies themselves. As the researcher Katja Jacobsen explains in The Politics of Humanitarian Technology, large donors such as the United States make their funding contingent on the use of biometrics in refugee registration. But these donors' interests are driven by security rather than aid, and governance of refugees' biometric data remains ambiguous: Which states own the data, those that host the refugees or those that provide the funds? Who is responsible for data protection? Which companies supply the biometric systems to aid agencies, and how much control do they retain over the data?

Beyond those questions, there is the risk of false matches, in which a biometric system incorrectly registers a new refugee as someone already in the database. According to Jacobsen, earlier testing of biometric systems was small in scale, operating with relative precision on "a few hundred entries to a database." It is unclear how well the technology performs when databases hold millions of registered individuals. The concern is compounded if the biometric technology itself is of poor quality, or if the type of information requested by donor states prioritizes security screening over the accurate distribution of humanitarian aid.

Although false matching could harm any refugee, women in particular stand to suffer. They are often already marginalized in their communities and face obstacles to securing aid. If falsely matched, they could face still more hurdles to receiving cash transfers, owning property, and accessing employment opportunities in a dignified way. Testing by the National Institute of Standards and Technology found that facial recognition systems err more often on women's faces than on men's. The disparity is worse still for women of color: the same testing found that "the highest false positives are in American Indians, with elevated rates in African American and Asian populations."

Further, as Lindsey Kingston explains in the 2018 book Digital Lifeline, women, especially single mothers, often serve as heads of households and primary caregivers, duties that can prevent them from claiming aid in person. Women are also more likely to suffer injuries such as cooking burns, Kingston notes, which can disrupt the fingerprint identification required to receive assistance.

Another problem is data usage. As more humanitarian aid agencies adopt biometric technology, a "dictatorship of no alternatives," a phrase Shoshana Zuboff invokes in her 2018 book The Age of Surveillance Capitalism, is emerging. Refugees have little say in how their personal data is used, and it is hard to argue that they have given meaningful consent to handing over biometric data when no real alternative exists. That is especially true for women, who may face social norms that discourage them from speaking out. And biometric registration can remain discriminatory even when women do object to it, as seen in the case of veiled Muslim women in Bangladesh: women and girls reported that they were not consulted on the biometric identification systems and said they felt disrespected and violated when their headwear was adjusted during registration.

Exploitation of the collected data is another concern. Significant breaches of biometric management systems have yet to occur, but the threat remains, and exposure of sensitive biometric information would be catastrophic for refugee populations. Unlike PIN codes and passwords, personal physical identifiers cannot be changed: once stolen, fingerprints and iris scans remain compromised. And given that many biometrics programs are accessible to state governments, it is unclear who has control over, and access to, refugees' biometric information. A recent article by Dave Nyczepir describes the U.S. Department of Homeland Security's decision to move its cache of biometric data on U.S. citizens and foreign nationals to Amazon Web Services' GovCloud. Because UNHCR shares refugee information with DHS, refugees' data will end up in that cloud as well. For women fleeing persecution or gender-based violence, a breach of this information could be deadly.

In short, although biometrics have improved the efficacy of refugee registration and identification, serious risks remain, especially for refugee women. Recognizing those problems would be a good first step toward improving the systems in question.

Beyond that, using two or more biometric modalities in combination would help mitigate false matches. Stronger security systems are also critical, and establishing long-term protection policies in advance of any data breach would likewise help ensure the safety and security of refugee populations.

The 1951 United Nations Convention Relating to the Status of Refugees specifies that states should provide identifying documents to all refugees within their territory, and it builds on the Universal Declaration of Human Rights to secure refugees' rights. Yet the threats to those rights look different today than they did almost 70 years ago. The emergence of new technologies may warrant an update to the rights outlined in the convention, in particular to ensure refugees' security and privacy, both in their persons and in their digital identities. Such reforms could be a good jumping-off point for aid agencies to responsibly address the risks of false matching, discriminatory algorithms, and data breaches.

Alexandria Polk is an intern at Equanimity Foundation and M.A. Candidate at Johns Hopkins SAIS.

More from Foreign Policy
