Centre for Internet & Society

‘Future of Work’ in India’s IT/IT-es Sector

Posted by Aayush Rathi and Elonnai Hickok at Mar 05, 2020 07:50 PM |

The Centre for Internet and Society has recently undertaken research into the impact of Industry 4.0 on work in India. Industry 4.0, for the purposes of the research, is conceptualised as the technical integration of cyber physical systems (CPS) into production and logistics, and the use of the ‘internet of things’ (connections between everyday objects) and services in (industrial) processes. By undertaking this research, CIS seeks to complement and contribute to the discourse and debates in India around the impact of Industry 4.0. To that end, this report explores several key themes underpinning the impact of Industry 4.0, specifically in the IT/IT-es sector and, more broadly, on the nature of work itself.

Read More…

RBI Ban on Cryptocurrencies not backed by any data or statistics

In March 2020, the Supreme Court of India quashed the RBI order passed in 2018 that banned financial services firms from trading in virtual currency or cryptocurrency. Keeping this policy window in mind, the Centre for Internet & Society will be releasing a series of blog posts and policy briefs on cryptocurrency regulation in India.

Read More…

Cryptocurrency Regulation in India – A brief history

In March 2020, the Supreme Court of India quashed the RBI order passed in 2018 that banned financial services firms from trading in virtual currency or cryptocurrency. Keeping this policy window in mind, the Centre for Internet & Society will be releasing a series of blog posts and policy briefs on cryptocurrency regulation in India.

Read More…

A Compilation of Research on the PDP Bill

Posted by Pranav M B at Mar 05, 2020 05:55 AM |

The most recent step in India’s initiative to create an effective and comprehensive data protection regime was the call for comments on the Personal Data Protection Bill, 2019, which closed last month. In the lead-up to the deadline, CIS published numerous research pieces aimed at providing a comprehensive overview of where this legislation would place India within the global data protection landscape and how the domestic situation has developed, as well as analysing its impact on citizens’ rights.

Read More…

Governing ID: Kenya’s Huduma Namba Programme

Posted by Amber Sinha at Mar 02, 2020 01:19 PM |

In our fourth case-study, we use our Evaluation Framework for Digital ID to examine the use of Digital ID in Kenya.

Read the case-study or download as PDF.

Governing ID: Use of Digital ID in the Healthcare Sector

Posted by Shruti Trikanad at Mar 02, 2020 01:05 PM |

In our third case-study, we use our Evaluation Framework for Digital ID to examine the use of Digital ID in the healthcare sector.


Read the case-study or download as PDF.

Governing ID: India’s Unique Identity Programme

Posted by Vrinda Bhandari at Mar 02, 2020 11:38 AM |

In our second case-study, we use our Evaluation Framework for Digital ID to assess India’s Unique Identity Programme.

Read the case-study or download as PDF.

Governing ID: Use of Digital ID for Verification

Posted by Shruti Trikanad at Mar 02, 2020 11:15 AM |

This is the first in a series of case studies, using our recently-published Evaluation Framework for Digital ID. It looks at the use of digital identity programmes for the purpose of verification, often using the process of deduplication.


Read the case-study or download as PDF.


Governing ID: A Framework for Evaluation of Digital Identity

Posted by Vrinda Bhandari, Shruti Trikanad, and Amber Sinha at Mar 02, 2020 08:35 AM |

As governments across the globe implement new and foundational digital identification systems (Digital ID), or modernize existing ID programs, there is an urgent need for more research and discussion about appropriate uses of Digital ID systems. This significant momentum for creating Digital ID has been accompanied by concerns about the privacy, surveillance, and exclusion harms of state-issued Digital IDs in several parts of the world, resulting in campaigns and litigation in countries such as the UK, India, Kenya, and Jamaica. Given the sweeping range of considerations required to evaluate Digital ID projects, it is necessary to formulate evaluation frameworks that can be used for this purpose.

This work began with the question of what the appropriate uses of Digital ID can be, but through the research process, it became clear that the question of use cannot be divorced from the fundamental attributes of Digital ID systems and their governance structures. This framework provides tests that can be used to evaluate the governance of Digital ID across jurisdictions, as well as to determine whether a particular use of Digital ID is legitimate. Through three kinds of checks — Rule of Law tests, Rights based tests, and Risk based tests — this scheme is a ready guide for the evaluation of Digital ID.


View the framework or download as PDF.

Governing ID: Introducing our Evaluation Framework

Posted by Shruti Trikanad at Mar 02, 2020 08:05 AM |

With the rise of national digital identity systems (Digital ID) across the world, there is a growing need to examine their impact on human rights. In several instances, national Digital ID programmes started with a specific scope of use, but have since been deployed for different applications, and in different sectors. This raises the question of how to determine appropriate and inappropriate uses of Digital ID. In April 2019, our research began with this question, but it quickly became clear that a determination of the legitimacy of uses hinged on the fundamental attributes and governing structure of the Digital ID system itself. Our evaluation framework is intended as a series of questions against which Digital ID may be tested. We hope that these questions will inform the trade-offs that must be made while building and assessing identity programmes, to ensure that human rights are adequately protected.

Rule of Law Tests

Foundational Digital ID must only be implemented along with a legitimate regulatory framework that governs all aspects of Digital ID, including its aims and purposes, the actors who have access to it, etc. In the absence of such a framework, nothing precludes Digital IDs from being leveraged by public and private actors for purposes outside the intended scope of the programme. Our rule of law principles mandate that the governing law should be enacted by the legislature, be devoid of excessive delegation, be clear and accessible to the public, and be precise and limiting in its scope for discretion. The importance of these principles is borne out by the criticism that met the Kenyan Digital ID, the Huduma Namba, when it was legalized through a Miscellaneous Amendment Act, an instrument meant only for small or negligible amendments and typically passed without any deliberation. This set of tests responds to the haste with which Digital ID has been implemented, often in the absence of an enabling law that adequately addresses its potential harms.

Rights based Tests

Digital ID, because of its collection of personal data and determination of users’ eligibility and rights, intrinsically involves restrictions on certain fundamental rights. The use of Digital ID for essential functions of the State, including the delivery of benefits and welfare and the maintenance of civil and sectoral records, enhances the impact of these restrictions. Accordingly, the entire identity framework, including its architecture, uses, actors, and regulators, must be evaluated at every stage against the rights it potentially violates. Only then can we determine whether such a violation is necessary and proportionate to the benefits it offers. In Jamaica, the National Identification and Registration Act, which mandated citizens’ biometric enrolment at the risk of criminal sanctions, was held to be a disproportionate violation of privacy, and therefore unconstitutional.

Risk based Tests

Even with a valid rule of law framework that seeks to protect rights, the design and use of Digital ID must be based on an analysis of the risks that the system introduces. This could take the form of choosing between a centralized and federated data-storage framework, based on the effects of potential failure or breach, or of restricting the uses of the Digital ID to limit the actors that will benefit from breaching it. Aside from the design of the system, the regulatory framework that governs it should also be tailored to the potential risks of its use. The primary rationale behind a risk assessment for an identity framework is that it should be tested not merely against universal metrics of legality and proportionality, but also against an examination of the risks and harms it poses. Implicit in a risk based assessment is also the requirement of implementing a responsive mitigation strategy to the risks identified, both while creating and governing the identity programme.

Digital ID programmes create an inherent power imbalance between the State and its residents because of the personal data they collect and the consequent determination of significant rights, potentially creating risks of surveillance, exclusion, and discrimination. The accountability and efficiency gains they promise must not lead to hasty or inadequate implementation.

Divergence between the General Data Protection Regulation and the Personal Data Protection Bill, 2019

Posted by Pallavi Bedi at Feb 21, 2020 11:08 AM |

Our note on the divergence between the General Data Protection Regulation and the Personal Data Protection Bill can be downloaded as a PDF here.

The European Union’s General Data Protection Regulation (GDPR), which replaced the 1995 EU Data Protection Directive, came into effect in May 2018. It harmonises data protection regulations across the European Union. In India, the Ministry of Electronics and Information Technology had constituted a Committee of Experts (chaired by Justice Srikrishna) to frame recommendations for a data protection framework in India. The Committee submitted its report and a draft Personal Data Protection Bill in July 2018 (the 2018 Bill), and public comments were sought on the bill until October 2018. The Central Government revised the Bill and introduced the revised version, the Personal Data Protection Bill (PDP Bill), in the Lok Sabha on December 11, 2019.

The PDP Bill has incorporated certain aspects of the GDPR, such as the requirements for notice to be given to the data principal, consent for the processing of data, and the establishment of a data protection authority. However, there are some differences, and in this note we highlight the areas of divergence between the two. The note covers only provisions that are common to the GDPR and the PDP Bill; it does not include the provisions on (i) the Appellate Tribunal, (ii) Finance, Accounts and Audit, and (iii) Non-Personal Data.

Content takedown and users' rights

Posted by Torsha Sarkar, Gurshabad Grover at Feb 14, 2020 08:40 AM |

After Shreya Singhal v Union of India, commentators have continued to question the constitutionality of the content takedown regime under Section 69A of the IT Act (and the Blocking Rules issued under it). There has also been considerable debate around how the judgement has changed this regime: specifically about (i) whether originators of content are entitled to a hearing, (ii) whether Rule 16 of the Blocking Rules, which mandates confidentiality of content takedown requests received by intermediaries from the Government, continues to be operative, and (iii) the effect of Rule 16 on the rights of the originator and the public to challenge executive action. In this opinion piece, we attempt to answer some of these questions.

Read More…

Comments to the Personal Data Protection Bill 2019

Posted by Amber Sinha, Elonnai Hickok, Pallavi Bedi, Shweta Mohandas, Tanaya Rajwade at Feb 12, 2020 12:00 PM |

The Personal Data Protection Bill, 2019 was introduced in the Lok Sabha on December 11, 2019.

Read More…

Automated Facial Recognition Systems and the Mosaic Theory of Privacy: The Way Forward

Posted by Arindrajit Basu, Siddharth Sonkar at Jan 02, 2020 02:12 PM |

Arindrajit Basu and Siddharth Sonkar have co-written this blog as the third of their three-part blog series on AI Policy Exchange under the parent title: Is there a Reasonable Expectation of Privacy from Data Aggregation by Automated Facial Recognition Systems?

Read More…

Automated Facial Recognition Systems (AFRS): Responding to Related Privacy Concerns

Posted by Arindrajit Basu, Siddharth Sonkar at Jan 02, 2020 02:09 PM |

Arindrajit Basu and Siddharth Sonkar have co-written this blog as the second of their three-part blog series on AI Policy Exchange under the parent title: Is there a Reasonable Expectation of Privacy from Data Aggregation by Automated Facial Recognition Systems?

Read More…

Decrypting Automated Facial Recognition Systems (AFRS) and Delineating Related Privacy Concerns

Posted by Arindrajit Basu, Siddharth Sonkar at Jan 02, 2020 02:00 PM |

Arindrajit Basu and Siddharth Sonkar have co-written this blog as the first of their three-part blog series on AI Policy Exchange under the parent title: Is there a Reasonable Expectation of Privacy from Data Aggregation by Automated Facial Recognition Systems?

Read More…

Extra-Territorial Surveillance and the Incapacitation of Human Rights

Posted by Arindrajit Basu at Dec 31, 2019 10:55 AM |

This paper was published in Volume 12 (2) of the NUJS Law Review.

Read More…

ICANN takes one step forward in its human rights and accountability commitments

Posted by Akriti Bopanna and Ephraim Percy Kenyanito at Dec 17, 2019 01:55 PM |

Akriti Bopanna and Ephraim Percy Kenyanito take a look at ICANN's Implementation Assessment Report for the Workstream 2 recommendations and break down the key human rights considerations in it. Akriti chairs the Cross Community Working Party on Human Rights at ICANN and Ephraim works on Human Rights and Business for Article 19, leading their ICANN engagement.

Read More…

Call for Comments: Model Security Standards for the Indian Fintech Industry

The Centre for Internet and Society is pleased to make available the draft document of Model Security Standards for the Indian Fintech Industry for feedback and comments from all stakeholders. The objective of this document, which was first published in November 2019, is to ensure that users’ data is handled in a secure and safe manner by the fintech industry, and that smaller businesses in the industry have a specific standard to look to in order to limit their liability for any future breaches.

We invite any parties interested in the field of technology policy, including but not limited to lawyers, policy researchers, and engineers, to send in your feedback/comments on the draft document by the 16th of January 2020. We intend to publish our final draft by the end of January 2020. We look forward to receiving your contributions to make this document more comprehensive and effective. Please find a copy of the draft document here.

In Twitter India’s Arbitrary Suspensions, a Question of What Constitutes a Public Space

Posted by Torsha Sarkar at Dec 12, 2019 04:54 PM |

A discussion is underway about the way social media platforms may have to operate within the tenets of constitutional protections of free speech.

Read More…
