With the rise of national digital identity systems (Digital ID) across the world, there is a growing need to examine their impact on human rights. In several instances, national Digital ID programmes started with a specific scope of use but have since been deployed for different applications and in different sectors. This raises the question of how to determine appropriate and inappropriate uses of Digital ID. In April 2019, our research began with this question, but it quickly became clear that a determination of the legitimacy of uses hinged on the fundamental attributes and governing structure of the Digital ID system itself. Our evaluation framework is intended as a series of questions against which Digital ID may be tested. We hope that these questions will inform the trade-offs that must be made while building and assessing identity programmes, to ensure that human rights are adequately protected.
Rule of Law Tests
Foundational Digital ID must only be implemented along with a legitimate regulatory framework that governs all aspects of Digital ID, including its aims and purposes and the actors who may access it. In the absence of such a framework, nothing precludes Digital IDs from being leveraged by public and private actors for purposes outside the intended scope of the programme. Our rule of law principles mandate that the governing law should be enacted by the legislature, be devoid of excessive delegation, be clear and accessible to the public, and be precise and limiting in its scope for discretion. These principles are borne out by the criticism directed at the Kenyan Digital ID, the Huduma Namba, which was legalized through a Miscellaneous Amendment Act, an instrument meant only for small or negligible amendments and typically passed without any deliberation. This set of tests responds to the haste with which Digital ID has been implemented, often in the absence of an enabling law which adequately addresses its potential harms.
Rights-based Tests
Digital ID, because of its collection of personal data and determination of eligibility and rights of users, intrinsically involves restrictions on certain fundamental rights. The use of Digital ID for essential functions of the State, including delivery of benefits and welfare, and maintenance of civil and sectoral records, amplifies the impact of these restrictions. Accordingly, the entire identity framework, including its architecture, uses, actors, and regulators, must be evaluated at every stage against the rights it potentially violates. Only then can we determine whether such a restriction is necessary and proportionate to the benefits the system offers. In Jamaica, the National Identification and Registration Act, which mandated citizens’ biometric enrolment at the risk of criminal sanctions, was held to be a disproportionate violation of privacy, and therefore unconstitutional.
Risk-based Tests
Even with a valid rule of law framework that seeks to protect rights, the design and use of Digital ID must be based on an analysis of the risks that the system introduces. This could take the form of choosing between a centralized and a federated data-storage framework based on the effects of potential failure or breach, or of restricting the uses of the Digital ID to limit the actors that would benefit from breaching it. Aside from the design of the system, the regulatory framework that governs it should also be tailored to the potential risks of its use. The primary rationale behind a risk assessment for an identity framework is that it should be tested not merely against universal metrics of legality and proportionality, but also against an examination of the risks and harms it poses. Implicit in a risk-based assessment is also the requirement of implementing a mitigation strategy responsive to the risks identified, both while creating and while governing the identity programme.
Digital ID programmes create an inherent power imbalance between the State and its residents because of the personal data they collect and the consequent determination of significant rights, potentially creating risks of surveillance, exclusion, and discrimination. The accountability and efficiency gains they promise must not become a justification for hasty or inadequate implementation.