After a Twitter discussion of this article raised some questions about its readability, I decided to add commentary between the paragraphs to help readers coming to it fresh see where it fits into digital ID debates.
The original article is available at https://journals.sagepub.com/doi/full/10.1177/20539517211006744 and is Open Access. I reproduce it here for critical engagement and the purposes of review. This page is not listed by search engines.
A pronounced tension is emerging in debates around datafication and technology use in the aid sector. There is a growing tendency among international organisations, scholars and commentators to depict aid industry data practices in unhelpfully polarised terms. On the one hand, the use of data technologies in aid interventions is treated by aid organisations and their commercial partners as a straightforward means of increasing the inclusion, recognition and empowerment of affected populations, often with minimal acknowledgement of the attendant risks.
So the mainstream aid orgs say they're just doing digital transformation and fraud prevention.
On the other hand, scholars and civil society organisations tend to present the use of data technologies as harm-inducing ‘technosolutionism’ (Molnar, 2020: 34) or ‘technocolonialism’ (Madianou, 2019) fuelled by the neoliberal logics of surveillance and capitalist value extraction.
Over the last few years, lots of organisations have fallen out of love with tech, especially where it is used to push Western systems (implicitly, in the data model [technocolonialism]) so that people can only be understood in a Western context rather than their own.
These critical responses have catalysed action and advocacy around privacy, non-discrimination and other human rights, providing an essential counterweight to narratives of technological utopianism. In this commentary, however, we suggest that the current polarisation forecloses dialogue and learning between the key actors deploying and evaluating data technologies in aid. Furthermore, it stunts deeper empirical analysis of and serious engagement with the diverse perspectives of so-called beneficiary communities.
Because the debate is so fractious, critiques of the system are not being heard. That also affects research, as critics are less likely to be granted research access to camps.
Debates surrounding the global COVID-19 pandemic remind us that, while critical data studies mark ‘surveillance’ as strongly negative, medical discourse (which powerfully informs humanitarian discourse) treats ‘surveillance’ as largely positive (cf. Hay et al., 2013). In medicine, surveillance refers not only to public health data collection and analysis, as in the control of infectious diseases, but also to the monitoring of an individual patient’s symptoms and responses to treatments. A more nuanced approach, then, will acknowledge these starkly different starting positions on surveillance as harm and as care (Armstrong, 1995).
There are no hard binaries. Some surveillance is good, some is bad. Especially in These Unprecedented Circumstances, you need to know some stuff to do public health work properly.
In this commentary, we describe the dialectical relationship between surveillance and recognition before drawing attention to the ambivalences of power inherent in digital identity interventions in aid. We accept the political realism of data governance (Clark and Albris, 2020) and so argue for a constructive research agenda to advance debate, policy and practice. In particular, we outline why and how researchers can achieve more theoretically careful, methodologically rigorous and empirically informed approaches to understanding data practices in aid. We focus on digital identity systems as an exemplary case of datafication in this space, and the polarised debates around it.
In this commentary, we acknowledge that surveillance (probably bad) and recognition (probably good) are inseparably linked. We also know that you can’t take data out of the system now, so we just have to deal with how that’s done. We want researchers to interact with this system in a more informed way and lay out steps for that. Digital ID systems are a really clear way to talk about how data and politics bump up against each other and how the debates shape the politics.
Digital identity systems are information systems that typically support identity proofing, authentication and authorisation (Nyst et al., 2016: 28–29). The ability to prove that you are who you say you are enables access to many public and private sector services, and underpins essential humanitarian service provision, including cash transfers. The significance of digital identity systems in aid has been accelerated by their centrality to COVID-19 responses (Masiero, 2020) and has increased aid and government stakeholders’ dependence on these systems.
Lots of refugee camps use biometrics as a way to access entitlements, in part because people arriving are often undocumented. This is linked to anti-fraud work where cash and entitlements are involved, and has accelerated due to Covid. These systems are now so embedded that it is unlikely they could be removed.
Debates about the implications of digital identity are particularly polarised among the diverse groups involved. For example, from a critical research perspective, Latonero (2019) focuses on identity case studies to describe aid industry data collection systems as ‘surveillance humanitarianism’. This framing has been widely taken up. In a recent report, UN Special Rapporteur Achiume foregrounds the risks that datafied humanitarian identity systems bring to vulnerable populations (Achiume, 2020: 12). In contrast, and despite such criticism, the development community prioritises UN Sustainable Development Goal 16.9 (‘legal identity for all’) and celebrates International Identity Day (Crowcroft et al., 2020).
Even different bits of the UN are split on whether this approach is good or bad. Recognising people is often a risk for them, especially in a patchy space of protection, or if their metadata will put them at risk of persecution. The UN SDGs, however, are really into paperwork.
In what follows, we examine the case of digital identity in humanitarian and development aid to argue for a depolarised approach to surveillance and recognition. A depolarised approach is equally wary of techno-apologetics and naïve empiricism as it is of reflex technophobic rhetoric. This opens up a research agenda capable of bringing researchers, technologists, aid organisations, civil society activists and aid subjects into dialogue. This commentary has wider implications since humanitarian settings often serve as global ‘technological testing grounds’ (Molnar, 2020).
We are trying to find a middle ground that understands the reasons for these ID systems and their critiques but is not beholden to either framework.