The Centre for Internet and Society has recently undertaken research into the impact of Industry 4.0 on work in India. For the purposes of this research, Industry 4.0 is conceptualised as the technical integration of cyber-physical systems (CPS) into production and logistics, and the use of the 'internet of things' (connections between everyday objects) and services in (industrial) processes. Through this research, CIS seeks to complement and contribute to the discourse and debates in India around the impact of Industry 4.0. To that end, this report explores several key themes underpinning the impact of Industry 4.0, specifically in the IT/ITeS sector and, more broadly, on the nature of work itself.
In March 2020, the Supreme Court of India quashed the RBI order passed in 2018 that banned financial services firms from trading in virtual currency or cryptocurrency. Keeping this policy window in mind, the Centre for Internet & Society will be releasing a series of blog posts and policy briefs on cryptocurrency regulation in India.
The most recent step in India’s initiative to create an effective and comprehensive data protection regime was the call for comments on the Personal Data Protection Bill, 2019, which closed last month. In the lead-up to the deadline, CIS published numerous research pieces with the goal of providing a comprehensive overview of how this legislation would place India within the global scheme and how the local situation has developed, as well as analysing its impact on citizens’ rights.
In our fourth case-study, we use our Evaluation Framework for Digital ID to examine the use of Digital ID in Kenya.
Read the case-study or download as PDF.
In our third case-study, we use our Evaluation Framework for Digital ID to examine the use of Digital ID in the healthcare sector.
Read the case-study or download as PDF.
In our second case-study, we use our Evaluation Framework for Digital ID to assess India’s Unique Identity Programme.
Read the case-study or download as PDF.
This is the first in a series of case studies, using our recently-published Evaluation Framework for Digital ID. It looks at the use of digital identity programmes for the purpose of verification, often using the process of deduplication.
Read the case-study or download as PDF.
This submission presents counter-comments by CIS in response to the consultation paper floated by the TRAI on the topic of ‘Traffic Management Practices (TMPs) and Multi-Stakeholder Body for Net Neutrality’. These counter-comments take stock of the submissions made by other commentators on these issues, as well as CIS’s previous work on net neutrality.
As governments across the globe implement new and foundational digital identification systems (Digital ID), or modernize existing ID programs, there is an urgent need for more research and discussion about appropriate uses of Digital ID systems. This significant momentum for creating Digital ID has been accompanied by concerns about the privacy, surveillance, and exclusion harms of state-issued Digital IDs in several parts of the world, resulting in campaigns and litigation in countries such as the UK, India, Kenya, and Jamaica. Given the sweeping range of considerations required to evaluate Digital ID projects, it is necessary to formulate evaluation frameworks that can be used for this purpose.
This work began with the question of what the appropriate uses of Digital ID can be, but through the research process it became clear that the question of use cannot be divorced from the fundamental attributes of Digital ID systems and their governance structures. This framework provides tests which can be used to evaluate the governance of Digital ID across jurisdictions, as well as to determine whether a particular use of Digital ID is legitimate. Through three kinds of checks, namely Rule of Law tests, Rights-based tests, and Risk-based tests, this scheme is a ready guide for the evaluation of Digital ID.
With the rise of national digital identity systems (Digital ID) across the world, there is a growing need to examine their impact on human rights. In several instances, national Digital ID programmes started with a specific scope of use, but have since been deployed for different applications, and in different sectors. This raises the question of how to determine appropriate and inappropriate uses of Digital ID. In April 2019, our research began with this question, but it quickly became clear that a determination of the legitimacy of uses hinged on the fundamental attributes and governing structure of the Digital ID system itself. Our evaluation framework is intended as a series of questions against which Digital ID may be tested. We hope that these questions will inform the trade-offs that must be made while building and assessing identity programmes, to ensure that human rights are adequately protected.
Foundational Digital ID must only be implemented along with a legitimate regulatory framework that governs all aspects of Digital ID, including its aims and purposes and the actors who have access to it. In the absence of such a framework, there is nothing that precludes Digital IDs from being leveraged by public and private actors for purposes outside the intended scope of the programme. Our rule of law principles mandate that the governing law should be enacted by the legislature, be devoid of excessive delegation, be clear and accessible to the public, and be precise and limiting in its scope for discretion. These principles are substantiated by the criticism faced by the Kenyan Digital ID, the Huduma Namba, which was legalised through a Miscellaneous Amendment Act, an instrument meant only for small or negligible amendments and typically passed without any deliberation. This set of tests responds to the haste with which Digital ID has been implemented, often in the absence of an enabling law which adequately addresses its potential harms.
Digital ID, because of its collection of personal data and determination of the eligibility and rights of users, intrinsically involves restrictions on certain fundamental rights. The use of Digital ID for essential functions of the State, including delivery of benefits and welfare and maintenance of civil and sectoral records, enhances the impact of these restrictions. Accordingly, the entire identity framework, including its architecture, uses, actors, and regulators, must be evaluated at every stage against the rights it potentially violates. Only then will we be able to determine whether such a violation is necessary and proportionate to the benefits it offers. In Jamaica, the National Identification and Registration Act, which mandated citizens’ biometric enrolment at the risk of criminal sanctions, was held to be a disproportionate violation of privacy, and therefore unconstitutional.
Even with a valid rule of law framework that seeks to protect rights, the design and use of Digital ID must be based on an analysis of the risks that the system introduces. This could take the form of choosing between a centralised and a federated data-storage framework based on the effects of potential failure or breach, or of restricting the uses of the Digital ID to limit the actors that would benefit from breaching it. Aside from the design of the system, the regulatory framework that governs it should also be tailored to the potential risks of its use. The primary rationale behind a risk assessment for an identity framework is that it should be tested not merely against universal metrics of legality and proportionality, but also against an examination of the risks and harms it poses. Implicit in a risk-based assessment is also the requirement of implementing a mitigation strategy responsive to the risks identified, both while creating and while governing the identity programme.
Digital ID programmes create an inherent power imbalance between the State and its residents because of the personal data they collect and the consequent determination of significant rights, potentially creating risks of surveillance, exclusion, and discrimination. The accountability and efficiency gains they promise must not lead to hasty or inadequate implementation.
Our note on the comparison of the Personal Data Protection Bill with the General Data Protection Regulation and the California Consumer Privacy Act can be downloaded as a PDF here.
The European Union’s General Data Protection Regulation (GDPR), replacing the 1995 EU Data Protection Directive, came into effect in May 2018. It harmonises data protection regulations across the European Union. In 2018, California passed the California Consumer Privacy Act (CCPA) to enhance the privacy protections of residents of California. The CCPA came into effect on January 1, 2020; however, the California Attorney General has not yet begun enforcing the law. The Attorney General will be allowed to take action six months after the rules are finalised, or on July 1, 2020, whichever is earlier.
While the PDP Bill incorporates several concepts of the CCPA and the GDPR, there are also significant areas of divergence. We have prepared the following charts to compare the PDP Bill with the GDPR and the CCPA on the following points: (i) Jurisdiction and scope; (ii) Rights of the Data Principal; (iii) Obligations of the Data Fiduciaries; (iv) Exemptions; (v) Data Protection Authority; and (vi) Breach of Personal Data. The charts are not a comprehensive list of all requirements under the three regulations.
The charts are based on the comparative charts prepared by the Future of Privacy Forum.
Our note on the divergence between the General Data Protection Regulation and the Personal Data Protection Bill can be downloaded as a PDF here.
The European Union’s General Data Protection Regulation (GDPR), replacing the 1995 EU Data Protection Directive, came into effect in May 2018. It harmonises data protection regulations across the European Union. In India, the Ministry of Electronics and Information Technology had constituted a Committee of Experts (chaired by Justice Srikrishna) to frame recommendations for a data protection framework in India. The Committee submitted its report and a draft Personal Data Protection Bill in July 2018 (2018 Bill). Public comments were sought on the bill until October 2018. The Central Government revised the Bill and introduced the revised version of the Personal Data Protection Bill (PDP Bill) in the Lok Sabha on December 11, 2019.
The PDP Bill has incorporated certain aspects of the GDPR, such as requirements for notice to be given to the data principal, consent for processing of data, establishment of a data protection authority, etc. However, there are some differences, and in this note we highlight the areas of divergence between the two. The note only includes provisions which are common to the GDPR and the PDP Bill. It does not include the provisions on (i) the Appellate Tribunal; (ii) Finance, Account and Audit; and (iii) Non-Personal Data.
The Bill gives the Centre the power to designate certain social media intermediaries as significant data fiduciaries.
On November 16, 2019, the Centre for Internet and Society invited officials from the Department of Labour (Government of Karnataka), members of domestic worker unions, domestic workers, company representatives, and civil society researchers to the Student Christian Mission of India House to discuss the preliminary findings of an ongoing research project, and to facilitate a multistakeholder consultation on the ongoing platformisation of domestic work in India. Please find here a report from this consultation, authored by Tasneem Mewa.
Arindrajit Basu taught a course on various prospects and challenges of global governance at NUJS, including the geo-politics of emerging technologies.
After Shreya Singhal v Union of India, commentators have continued to question the constitutionality of the content takedown regime under Section 69A of the IT Act (and the Blocking Rules issued under it). There has also been considerable debate around how the judgement has changed this regime: specifically about (i) whether originators of content are entitled to a hearing, (ii) whether Rule 16 of the Blocking Rules, which mandates confidentiality of content takedown requests received by intermediaries from the Government, continues to be operative, and (iii) the effect of Rule 16 on the rights of the originator and the public to challenge executive action. In this opinion piece, we attempt to answer some of these questions.
We published a Call for Researchers on January 10, 2020, inviting applications from researchers interested in writing a narrative essay that interrogates the modes of surveillance that people of LGBTHIAQ+ and gender non-conforming identities and sexual orientations are put under as they seek sexual and reproductive health (SRH) services in India. We received 29 applications from over 10 locations in India in response to the call, and are truly overwhelmed by and grateful for this interest and support. We eventually selected applications from three researchers that we felt aligned best with the specific objectives of the project. Please find below brief profile notes of the selected researchers.
The Personal Data Protection Bill, 2019 was introduced in the Lok Sabha on December 11, 2019.
Ever wondered who gains from the way we work?