The Centre for Internet and Society
https://cis-india.org
How Function Of State May Limit Informed Consent: Examining Clause 12 Of The Data Protection Bill
https://cis-india.org/internet-governance/blog/medianama-february-18-2021-amber-sinha-data-protection-bill-consent-clause-state-function
<b>The collective implication of leaving out ‘proportionality’ from Clause 12 is to provide very wide discretionary powers to the state.</b>
<p>The blog post was <a class="external-link" href="https://www.medianama.com/2022/02/223-data-protection-bill-consent-clause-state-function/">published in Medianama</a> on February 18, 2022. This is the first of a two-part series by Amber Sinha.</p>
<hr />
<p style="text-align: justify; ">In 2018, hours after the Committee of Experts led by Justice Srikrishna released its report and draft bill, I wrote <a href="https://www.livemint.com/Opinion/zY8NPWoWWZw8AfI5JQhjmL/Draft-privacy-bill-and-its-loopholes.html">an opinion piece</a> providing my quick take on what was good and bad about the bill. A section of my analysis focused on Clause 12 (then Clause 13), which provides for non-consensual processing of personal data for state functions. I called this provision a ‘carte blanche’ which effectively allowed the state to process a citizen’s data for practically all interactions between them without having to deal with the inconvenience of seeking consent. My former colleague Pranesh Prakash <a href="https://twitter.com/pranesh/status/1023116679440621568">pointed out</a> that this was not a correct interpretation of the provision, as I had missed the significance of the word ‘necessary’, which was inserted to act as a check on the powers of the state. He also pointed out, correctly, that in its construction this provision is equivalent to the position in the European General Data Protection Regulation (Article 6(1)(e)), and is perhaps even more restrictive.</p>
<p style="text-align: justify; ">While I agree with what Pranesh says above (his claims are largely factual, and there is no basis for disagreement), my view of Clause 12 has not changed. While Clause 35 has been the focus of considerable discourse and analysis, for good reason, I continue to believe that Clause 12 remains among the most dangerous provisions of this bill, and I will try to unpack why.</p>
<p style="text-align: justify; ">The Data Protection Bill 2021 has a chapter on the grounds for processing personal data, and one of those grounds is consent by the individual. The rest of the grounds deal with various situations in which personal data can be processed without seeking consent from the individual. Clause 12 lays down one of the grounds. It allows the state to process data without the consent of the individual in the following cases —</p>
<p>a) where it is necessary to respond to a medical emergency<br />b) where it is necessary for the state to provide a service or benefit to the individual<br />c) where it is necessary for the state to issue any certification, licence or permit<br />d) where it is necessary under any central or state legislation, or to comply with a judicial order<br />e) where it is necessary for any measures during an epidemic, outbreak or threat to public health<br />f) where it is necessary for safety procedures during a disaster or breakdown of public order</p>
<p>In order to carry out (b) and (c), there is also the added requirement that the state function must be authorised by law.</p>
<h2>Twin restrictions in Clause 12</h2>
<p style="text-align: justify; ">The use of the words ‘necessary’ and ‘authorised by law’ is intended to pose checks on the powers of the state. The first restriction seeks to limit actions to only those cases where the processing of personal data would be necessary for the exercise of the state function. This should mean that if the state function can be exercised without non-consensual processing of personal data, then it must be exercised that way. Therefore, while acting under this provision, the state should only process my data if it needs to do so to provide me with the service or benefit. The second restriction means that this would apply only to those state functions which are authorised by law, meaning only those functions which are supported by validly enacted legislation.</p>
<p style="text-align: justify; ">What we need to keep in mind regarding Clause 12 is that the requirement of ‘authorised by law’ does not mean that legislation must provide for that specific kind of data processing. It simply means that the larger state function must have legal backing. The danger is how these provisions may be used with broad mandates. If the activity in question is the non-consensual collection and processing of, say, demographic data of citizens to create state resident hubs which will assist in the provision of services such as healthcare, housing, and other welfare functions, all that may be required is that the welfare functions themselves are authorised by law.</p>
<h2 style="text-align: justify; ">Scope of privacy under Puttaswamy</h2>
<p style="text-align: justify; ">It would be worthwhile, at this point, to delve into the nature of the restrictions on privacy that the landmark Puttaswamy judgement allows the state to impose. The judgement clearly identifies the principles of informed consent and purpose limitation as central to informational privacy. As discussed repeatedly during the course of the hearings and in the judgement, privacy, like any other fundamental right, is not absolute. However, restrictions on the right must be reasonable in nature. In the case of Clause 12, the restrictions on privacy in the form of denial of informed consent need to be tested against a constitutional standard. In Puttaswamy, the bench was not required to provide a legal test to determine the extent and scope of the right to privacy, but it does provide sufficient guidance for us to contemplate how the limits and scope of the constitutional right to privacy could be determined in future cases.</p>
<p style="text-align: justify; ">The Puttaswamy judgement clearly states that “the right to privacy is protected as an intrinsic part of the right to life and personal liberty under Article 21 and as a part of the freedoms guaranteed by Part III of the Constitution.” By locating the right not just in Article 21 but also in the entirety of Part III, the bench clearly requires that “the drill of various Articles to which the right relates must be scrupulously followed.” This means that where transgressions on privacy relate to different provisions in Part III, the different tests under those provisions will apply along with those in Article 21. For instance, where the restrictions relate to personal freedoms, the tests under both Article 19 (right to freedoms) and Article 21 (right to life and liberty) will apply.</p>
<p style="text-align: justify; ">In the case of Clause 12, the three tests laid down by Justice Chandrachud are most operative —<br />a) the existence of a “law”<br />b) a “legitimate State interest”<br />c) the requirement of “proportionality”.</p>
<p style="text-align: justify; ">The first test is already reflected in the use of the phrase ‘authorised by law’ in Clause 12. The test under Article 21 would imply that the function of the state should not merely be authorised by law, but that the law, in both its substance and procedure, must be ‘fair, just and reasonable.’ The next test is that of ‘legitimate state interest’. In its report, the Joint Parliamentary Committee places emphasis on Justice Chandrachud’s use of “allocation of resources for human development” in an illustrative list of legitimate state interests. The report claims that the ground of state functions thus satisfies the test of legitimate state interest. I do not dispute this claim.</p>
<h2 style="text-align: justify; ">Proportionality and Clause 12</h2>
<p style="text-align: justify; ">It is the final test of ‘proportionality’ articulated in the Puttaswamy judgement which is most operative in this context. Unlike Clauses 42 and 43, which include the twin tests of necessity and proportionality, the committee has chosen to employ only one ground in Clause 12. Proportionality is a commonly employed ground in European jurisprudence and in common law countries such as Canada and South Africa, and it is also an integral part of Indian jurisprudence. As commonly understood, the proportionality test consists of three parts —</p>
<p>a) the limiting measures must be carefully designed, or rationally connected, to the objective<br />b) they must impair the right as little as possible<br />c) the effects of the limiting measures must not be so severe on individual or group rights that the legitimate state interest, albeit important, is outweighed by the abridgement of rights.</p>
<p style="text-align: justify; ">The first test is similar to the test of proximity under Article 19. The test of ‘necessity’ in Clause 12 must be viewed in this context. It must be remembered that the test of necessity is not limited only to situations where it may not be possible to obtain consent while providing benefits. My reservations about the sufficiency of this standard stem from observations made in the report, as well as the relatively small amount of jurisprudence on this term in Indian law.</p>
<p style="text-align: justify; ">The Srikrishna Report interestingly mentions three kinds of scenarios where consent should not be required — where it is not appropriate, necessary, or relevant for processing. The report goes on to give an example of inappropriateness: in cases where data is being gathered to provide welfare services, there is an imbalance of power between the citizen and the state. Having made that observation, the committee inexplicably arrives at the conclusion that the response to this problem is to further erode the power available to citizens by removing the need for consent altogether under Clause 12. There is limited jurisprudence on the standard of ‘necessity’ under Indian law. The Supreme Court has articulated this test as ‘having reasonable relation to the object the legislation has in view.’ If we look elsewhere for guidance on how to read ‘necessity’, the European Court of Human Rights in Handyside v United Kingdom held that it is neither “synonymous with indispensable” nor has the “flexibility of such expressions as admissible, ordinary, useful, reasonable or desirable.” In short, there must be a pressing social need to satisfy this ground.</p>
<p style="text-align: justify; ">However, the other two tests of proportionality do not find a mention in Clause 12 at all. There is no requirement of ‘narrow tailoring’, that the scope of non-consensual processing must impair the right as little as possible. It is doubly unfortunate that this test does not find a place, as unlike necessity, ‘narrow tailoring’ is a test well understood in Indian law. This means that while there is a requirement to show that processing personal data was necessary to provide a service or benefit, there is no requirement to process data in a way that minimises non-consensual processing. The fear is that as long as there is a reasonable relation between processing data and the object of the state function, state authorities and other bodies authorised by them do not need to bother with obtaining consent.</p>
<p style="text-align: justify; ">Similarly, the third test of proportionality is also not represented in this provision. That test weighs the abridgement of individual rights against the legitimate state interest in question, and requires that the first not outweigh the second. The absence of this test leaves Clause 12 devoid of any such consideration. Therefore, as long as the test of necessity is met, the state need not evaluate the denial of consent against the service or benefit that is being provided.</p>
<p style="text-align: justify; ">The collective implication of leaving out ‘proportionality’ from Clause 12 is to provide very wide discretionary powers to the state, by setting the threshold to circumvent informed consent extremely low. In the next post, I will demonstrate the ease with which Clause 12 can allow indiscriminate data sharing by focusing on the Indian government’s digital healthcare schemes.</p>
Author: Amber Sinha · Tags: Data Governance, Internet Governance, Data Protection, Privacy · Published: 2022-03-01T14:56:49Z · Blog Entry

Clause 12 Of The Data Protection Bill And Digital Healthcare: A Case Study
https://cis-india.org/internet-governance/blog/medianama-february-21-2022-amber-sinha-data-protection-bill-digital-healthcare-case-study
<b>In light of the state’s emerging digital healthcare apparatus, how does Clause 12 alter the consent and purpose limitation model?</b>
<p>The blog post was <a class="external-link" href="https://www.medianama.com/2022/02/223-data-protection-bill-digital-healthcare-case-study/">published in Medianama</a> on February 21, 2022. This is the second in a two-part series by Amber Sinha.</p>
<hr />
<p style="text-align: justify; ">In the <a href="https://www.medianama.com/2022/02/223-data-protection-bill-consent-clause-state-function/">previous post</a>, I looked at provisions on non-consensual data processing for state functions under the most recent version of recommendations by the Joint Parliamentary Committee on India’s Data Protection Bill (DPB). The true impact of these provisions can only be appreciated in light of ongoing policy developments and real-life implications.</p>
<p style="text-align: justify; ">To appreciate the significance of the dilutions in Clause 12, let us consider the Indian state’s range of schemes promoting digital healthcare. In July 2018, NITI Aayog, a central government policy think tank in India, released a strategy and approach paper (Strategy Paper) on the formulation of the National Health Stack, which envisions the creation of a federated, application programming interface (API)-enabled health information ecosystem. While the Ministry of Health and Family Welfare has focused on the creation of Electronic Health Records (EHR) Standards for India during the last few years and has also identified a contractor for the creation of a centralised health information platform (IHIP), this Strategy Paper advocates a completely different approach, described as a Personal Health Records (PHR) framework. In 2021, the National Digital Health Mission (NDHM) was launched, under which a citizen has the option to obtain a digital health ID. A digital health ID is a unique identifier that will carry all of a person’s health records.</p>
<h2 style="text-align: justify; ">A Stack Model for Big Data Ecosystem in Healthcare</h2>
<p style="text-align: justify; ">A stack model, as envisaged in the Strategy Paper, consists of several layers of open APIs connected to each other, often tied together by a unique health identifier. The open nature of the APIs has the advantage of allowing public and private actors to build solutions on top of them which are interoperable with all parts of the stack. It is, however, worth considering both this ‘openness’ and the role that the state plays in it.</p>
<p style="text-align: justify; ">Even though the APIs are themselves open, they are a part of a pre-decided technological paradigm, built by private actors and blessed by the state. Even though innovators can build on it, the options available to them are limited by the information architecture created by the stack model. When such a technological paradigm is created for healthcare reform and health data, the stack model poses additional challenges. By tying the stack model to the unique identity, without appropriate processes in place for access control, siloed information, and encrypted communication, the stack model poses tremendous privacy and security concerns. The broad language under Clause 12 of the DPB needs to be looked at in this context.</p>
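The linkage risk described above can be sketched in a few lines of Python. Everything in this sketch is hypothetical: the service names, the ID format, and the fields are illustrative, not drawn from the NDHM specification.

```python
# Hypothetical sketch of the linkage risk in a stack model: three
# "layers" hold separate records, tied together by one shared ID.
# All names, IDs, and fields below are illustrative, not from NDHM.

health_records = {"HID-001": {"diagnosis": "diabetes"}}
insurance_records = {"HID-001": {"annual_premium": 12000}}
pharmacy_records = {"HID-001": {"purchases": ["insulin"]}}

def linked_profile(health_id: str) -> dict:
    """Assemble one person's full profile by joining every layer
    on the shared identifier."""
    profile = {}
    for layer in (health_records, insurance_records, pharmacy_records):
        profile.update(layer.get(health_id, {}))
    return profile

# Without access controls, any actor holding the ID can reconstruct
# a complete, sensitive profile from nominally separate services.
full_profile = linked_profile("HID-001")
```

This is precisely why access control, siloed information, and encrypted communication matter: the shared identifier makes joining trivial for anyone the platform admits.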
<p>Clause 12 allows non-consensual processing of personal data where it is necessary “for the performance of any function of the state authorised by law” in order to provide a service or benefit from the State. In the previous post, I had highlighted the import of the use of only ‘necessity’ to the exclusion of ‘proportionality’. Now, we need to consider its significance in light of the emerging digital healthcare apparatus being created by the state.</p>
<p style="text-align: justify; ">The National Health Stack and the National Digital Health Mission together envision an intricate system of data collection and exchange which, in a regulatory vacuum, would ensure unfettered access to sensitive healthcare data for both the state and private actors registered with the platforms. The Stack framework relies on repositories where data may be accessed from multiple nodes within the system. Importantly, the Strategy Paper also envisions health data fiduciaries to facilitate consent-driven interaction between entities that generate health data and entities that want to consume health records for delivering services to the individual. The cast of characters involves the National Health Authority; healthcare providers and insurers who access the National Health Electronic Registries, which unify data from programmes such as the National Health Resource Repository (NHRR), the NIN database, NIC, and the Registry of Hospitals in Network of Insurance (ROHINI); and private actors such as Swasth and iSpirt, who assist the Mission as volunteers. The currency that government and private actors are interested in is data.</p>
<p style="text-align: justify; ">The promised benefits of healthcare data in an anonymised and aggregate form range from disease surveillance to pharmacovigilance, as well as health schemes management systems and nutrition management, benefits which have only been more acutely emphasised during the pandemic. However, the pandemic has also normalised the sharing of sensitive healthcare data with a variety of actors, without much thought given to much-needed data minimisation practices.</p>
<p style="text-align: justify; ">The potential misuses of healthcare data include greater state surveillance and control, as well as predatory and discriminatory practices by private actors, all of which can rely on Clause 12 to do away with even the pretense of informed consent so long as the processing of data is deemed necessary by the state and its private sector partners to provide any service or benefit.</p>
<p style="text-align: justify; ">Subclause (e) of Clause 12, which was added in the last version of the Bill drafted by MeitY and has been retained by the JPC, allows processing wherever it is necessary for ‘any measures’ to provide medical treatment or health services during an epidemic, outbreak or threat to public health. Yet again, the overly broad language used here is designed to ensure that the annoyances of informed consent can be easily brushed aside wherever the state intends to take any measures under any scheme related to public health.</p>
<p style="text-align: justify; ">Effectively, how does the framework under Clause 12 alter the consent and purpose limitation model? Data protection laws introduce an element of control by tying purpose limitation to consent. Individuals provide consent for specified purposes, and data processors are required to respect that choice. Where there is no consent, the purposes of data processing are sought to be limited by the necessity principle in Clause 12. The state (or authorised parties) must be able to demonstrate that the processing is necessary to the exercise of a state function, and data must only be processed for purposes which flow from this necessity. However, unlike the consent model, this provides an opportunity to keep reinventing purposes for different state functions.</p>
<p style="text-align: justify; ">In the absence of a data protection law, data collected by one agency is shared indiscriminately with other agencies and used for multiple purposes beyond the purpose for which it was collected. The consent and purpose limitation model would have addressed this issue. But, by having a low threshold for non-consensual processing under Clause 12, this form of data processing is effectively being legitimised.</p>
Author: Amber Sinha · Tags: Data Governance, Internet Governance, Data Protection, Privacy · Published: 2022-03-01T15:07:44Z · Blog Entry

Reconfiguring Data Governance: Insights from India and the EU
https://cis-india.org/internet-governance/blog/reconfiguring-data-governance-insights-from-india-and-eu
<b>This policy paper is the result of a workshop organised jointly by the Tilburg Institute of Law, Technology and Society, Netherlands; the Centre for Communication Governance at the National Law University Delhi, India; and the Centre for Internet & Society, India, in January 2023. The workshop brought together a number of academics, researchers, and industry representatives in Delhi to discuss a range of issues at the core of data governance theory and practice.</b>
<p style="text-align: justify; "><img src="https://cis-india.org/home-images/ReconfiguringDataGovernance.png/@@images/70165fe1-cc66-4cac-9f99-b7485c87218a.png" alt="Reconfiguring Data Governance" class="image-inline" title="Reconfiguring Data Governance" /></p>
<p style="text-align: justify; ">The workshop aimed to compare and assess lessons from data governance from India and the European Union, and to make recommendations on how to design fit-for-purpose institutions for governing data and AI in the European Union and India.</p>
<p style="text-align: justify; ">This policy paper collates key takeaways from the workshop, grounding them in three key themes: how we conceptualise data; how institutional as well as community-centric mechanisms can work to empower individuals, and what notions of justice these embody; and, finally, a case study of the enforcement of data governance in India to illustrate and evaluate the claims in the first two sections.</p>
<p style="text-align: justify; ">This report was a collaborative effort between researchers Siddharth Peter De Souza, Linnet Taylor, and Anushka Mittal at the Tilburg Institute for Law, Technology and Society (Netherlands); Swati Punia, Srishti Joshi, and Jhalak M. Kakkar at the Centre for Communication Governance at the National Law University Delhi (India); and Isha Suri and Arindrajit Basu at the Centre for Internet & Society, India.</p>
<hr />
<p>Click to download the <a class="external-link" href="http://cis-india.org/internet-governance/files/reconfiguring-data-governance.pdf"><b>report</b></a></p>
Authors: Swati Punia, Srishti Joshi, Siddharth Peter De Souza, Linnet Taylor, Jhalak M. Kakkar, Isha Suri, Arindrajit Basu, and Anushka Mittal · Tags: Internet Governance, Data Governance, Data Protection, Data Management · Published: 2024-02-20T00:30:00Z · Blog Entry

Comments on the Statistical Disclosure Control Report
https://cis-india.org/internet-governance/comments-on-the-statistical-disclosure-control-report
<b>This submission presents comments by the Centre for Internet and Society, India (“CIS”) on the Statistical Disclosure Control Report published on March 30th by the Ministry of Statistics and Programme Implementation.</b>
<h3 style="text-align: justify;" dir="ltr">1. PRELIMINARY</h3>
<p style="text-align: justify;" dir="ltr">This submission presents comments by the Centre for Internet and Society, India (“CIS”) on the Statistical Disclosure Control Report published on March 30th by the Ministry of Statistics and Programme Implementation.</p>
<p style="text-align: justify;" dir="ltr">CIS is thankful for the opportunity to put forth its views.<br />This submission is divided into three main parts. The first part, ‘Preliminary’, introduces the document; the second part, ‘About CIS’, is an overview of the organization; and the third part contains the ‘Comments’.</p>
<h3 style="text-align: justify;" dir="ltr">2. ABOUT CIS</h3>
<p style="text-align: justify;" dir="ltr">CIS is a non-profit organisation that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with diverse abilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, open access, open educational resources, and open video), internet governance, telecommunication reform, freedom of speech and expression, intermediary liability, digital privacy, and cybersecurity.<br class="kix-line-break" /><br /></p>
<p style="text-align: justify;" dir="ltr">CIS values the fundamental principles of justice, equality, freedom and economic development. This submission is consistent with CIS' commitment to these values, the safeguarding of general public interest and the protection of India's national interest at the international level. Accordingly, the comments in this submission aim to further these principles.</p>
<h3 style="text-align: justify;" dir="ltr">3. COMMENTS</h3>
<h4 style="text-align: justify;" dir="ltr">3.1 General Comments</h4>
<p style="text-align: justify;" dir="ltr">As a non-profit organisation, we recognize the importance of the efforts by the Ministry of Statistics and Programme Implementation (MoSPI) to make the data it collects available to the public in open formats, with relevant information about the reliability of statistical estimates.</p>
<p style="text-align: justify;">We at CIS have recently released a report titled “Information Security Practices of Aadhaar (or lack thereof): A documentation of public availability of Aadhaar Numbers with sensitive personal financial information”. We encountered several central and state government departments collecting socioeconomic data from citizens, linking it with Aadhaar, and even publishing it in exportable data formats like Excel and MS Access databases. While we understand this issue primarily concerns the Unique Identification Authority of India (UIDAI), the lack of standards around information/statistical disclosure is a general threat to transparency in a democracy and to the privacy of individuals. Going through the report, we understand the committee is unable to prescribe a standard for other ministries and departments until it pilots these standards within the Ministry of Statistics and Programme Implementation. This delay in prescribing the standards can be dangerous in the current circumstances of massive data collection by government departments and the linking of all databases with a unique identifier, the Aadhaar number. At the same time, we understand the importance of data dissemination, and we recommend the following for improving the standards around data disclosure control.</p>
<h4 style="text-align: justify;" dir="ltr">3.2 Integrity of Information and Data</h4>
<p style="text-align: justify;" dir="ltr">We agree with the committee that error rates need to be kept in mind while designing practices to convert raw data. But we request that the process of conversion be actively measured and documented. Where errors are computed, guidelines can be framed to reduce the possibility of those errors being misinterpreted and causing a loss of integrity of information. Statistics are important for decision-making in governance, and errors in computation can bias decisions affecting millions of people. Statistical biases must be examined while converting data from its raw format, to ensure that the published information causes no damage.</p>
<h4 style="text-align: justify;" dir="ltr">3.3 Data Security</h4>
<p style="text-align: justify;" dir="ltr">One of the important issues around the storage and publication of Aadhaar information is the lack of masking standards. With data available from multiple departments, it is possible to reconstruct identification details by linking records across databases. We recommend introducing masking standards wherever personally identifiable microdata is published. There is also an urgent need for departments to audit access to information and to track how information is shared. We recommend that the department digitally sign all information and documents it publishes or shares, so that the authenticity of the information can be verified and its distribution tracked.</p>
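A masking standard of the kind recommended here can be illustrated with a short Python sketch. The "show only the last four digits" rule is a common convention for identifiers of this kind; the function name and sample number are ours, not from any prescribed standard.

```python
import re

def mask_id(number: str, visible: int = 4) -> str:
    """Mask all but the last `visible` digits of an identifier
    before publication, e.g. a 12-digit Aadhaar-style number."""
    digits = re.sub(r"\D", "", number)  # strip spaces and hyphens
    return "X" * (len(digits) - visible) + digits[-visible:]

masked = mask_id("1234 5678 9012")  # "XXXXXXXX9012"
```

A department could apply such a function as a mandatory step in its publication pipeline, so unmasked identifiers never reach an exportable dataset.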
<p style="text-align: justify;" dir="ltr">We request the department to define what exactly “usage for statistical purposes only” means, and to recommend standards to control and restrict the usage of information for this purpose. It is important to design frameworks or mechanisms that allow others to report violations. This process should be transparent and thoroughly documented.</p>
<h4 style="text-align: justify;" dir="ltr">3.4 Anonymization of microdata</h4>
<p style="text-align: justify;" dir="ltr">We recommend that the data being collected be anonymized at source to avoid the possibility of accidental disclosure of personally identifiable information. While the current anonymization efforts have been helpful, the steady advance of data mining and classification algorithms and practices means the standards in this area must keep evolving.</p>
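One way to approximate "anonymization at source" is keyed hashing at the point of collection, sketched below under the assumption that the collecting department alone holds the key. Note the caveat: a keyed hash is pseudonymisation rather than true anonymisation, since anyone holding the key can still link records.

```python
import hashlib
import hmac
import os

# Illustrative secret key, assumed to be held only by the
# collecting department (in practice, managed in a key store).
SECRET_KEY = os.urandom(32)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed SHA-256 digest at
    collection time, so the raw ID never enters the dataset."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# The published record carries the digest, not the raw identifier.
record = {"person": pseudonymise("1234 5678 9012"), "district": "Pune"}
```

Because the digest is deterministic under one key, records about the same person can still be aggregated internally, while the published data no longer contains the identifier itself.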
<h4 style="text-align: justify;" dir="ltr">3.5 Data Dissemination</h4>
<p style="text-align: justify;" dir="ltr">Data dissemination is an important aspect of the work of district statistics officers. We recommend they actively communicate their work through monthly newsletters and quarterly workshops, to help improve the conversations around statistics and, at the same time, engage with the users who would benefit from the data.</p>
<p style="text-align: justify;" dir="ltr">We also recommend that data, when published, include metadata about its collection, modification, storage, and other important attributes. The information should be published in open formats that do not require proprietary software to open. At the same time, data should be published in multiple formats such as CSV, XLS, and PDF.</p>
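The recommendation to pair published data with metadata can be sketched as follows. The field names, file names, and sample values are illustrative, not a prescribed MoSPI schema.

```python
import csv
import json
from datetime import datetime, timezone

# Illustrative metadata record accompanying a published dataset.
metadata = {
    "title": "District health indicators (illustrative)",
    "collected": "2019-01-15",
    "modified": datetime.now(timezone.utc).isoformat(),
    "formats": ["CSV"],
    "source": "sample data, not a real survey",
}

# Publish the data in an open, non-proprietary format (CSV) ...
with open("indicators.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["district", "indicator", "value"])
    writer.writerow(["Pune", "immunisation_rate", 0.92])

# ... alongside a machine-readable metadata file.
with open("indicators.meta.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

Publishing the metadata as a sidecar file keeps the CSV itself clean while letting users verify when and how the data was collected and modified.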
<p style="text-align: justify;" dir="ltr">The committee also recognizes the need to have data users take part in discussions around important decisions and sit on committees. We would like the department to recognize our efforts and consider us for future committee representations.</p>
<p style="text-align: justify;" dir="ltr">Thank you for this opportunity; we look forward to working with you in the future.</p>
Authors: Srinivas Kodali and Amber Sinha · Tags: Call for Comments, Digital Access, Open Data, Open Government Data, Data Protection, Data Governance, Aadhaar, Digitisation, Information Security, Openness, Internet Governance, Data Management · Published: 2019-03-13T00:28:44Z · Blog Entry

Comments to National Digital Health Mission: Health Data Management Policy
https://cis-india.org/internet-governance/blog/comments-to-national-digital-health-mission-health-data-management-policy
<b>CIS has submitted comments to the National Health Data Management Policy. We welcome the opportunity to provide our comments on the Policy, and we hope that the final Policy will consider the interests of all stakeholders to ensure that it protects the privacy of the individual while encouraging a digital health ecosystem.</b>
<p> </p>
<p>Read the full set of comments <a href="https://cis-india.org/internet-governance/comments-to-national-digital-health-mission-health-data-management-policy-pdf" class="internal-link" title="Comments to National Digital Health Mission: Health Data Management Policy pdf">here</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/comments-to-national-digital-health-mission-health-data-management-policy'>https://cis-india.org/internet-governance/blog/comments-to-national-digital-health-mission-health-data-management-policy</a>
</p>
No publisher · By Shweta Mohandas, Pallavi Bedi, Shweta Reddy, and Saumyaa Naidu · Tags: Data Governance, Internet Governance, Healthcare · 2020-10-05T15:56:51Z · Blog Entry

Digital Delivery and Data System for Farmer Income Support
https://cis-india.org/internet-governance/blog/cis-privacy-international-digital-delivery-and-data-system-for-farmer-income-support
<b>This report, jointly published by the Centre for Internet & Society and Privacy International, highlights the digital systems deployed by the government to augment farmer income. It analyses the PM-Kisan and Kalia schemes in Odisha and Andhra Pradesh. </b>
<h2>Executive Summary</h2>
<p style="text-align: justify; ">This study provides an in-depth analysis of two direct cash transfer schemes in India – Krushak Assistance for Livelihood and Income Augmentation (KALIA) and Pradhan Mantri Kisan Samman Nidhi (PM-KISAN) – which aim to provide income support to farmers. The paper examines the role of data systems in the delivery and transfer of funds to the beneficiaries of these schemes, and analyses their technological framework and processes.</p>
<p style="text-align: justify; ">We find that the use of digital technologies, such as direct benefit transfer (DBT) systems, can improve efficiency and ensure the timely transfer of funds. However, we observe that a technology-only system is not designed with the last-mile beneficiaries in mind; these people not only have minimal or no digital literacy but also face a lack of technological infrastructure, including the internet connectivity needed to access a system that is largely digital.</p>
<p style="text-align: justify; ">Necessary processes need to be implemented, and on-the-ground personnel strengthened, in the existing system to promptly address the grievances of farmers and other challenges.</p>
<p style="text-align: justify; ">This study critically analyses the direct cash transfer scheme and its impact on the beneficiaries. We find that despite the benefits of direct benefit transfer (DBT) systems, there have been many instances of failures, such as the exclusion of several eligible households from the database.</p>
<p style="text-align: justify; ">The study also looks at gender as one of the components shaping the impact of digitisation on beneficiaries. We also identify infrastructural and policy constraints, in sync with the technological framework adopted and implemented, that impact the implementation of digital systems for the delivery of welfare. These include a lack of reliable internet connectivity in rural areas and low digital literacy among farmers. We analyse policy frameworks at the central and state levels and find discrepancies between the discourse of these schemes and their implementation on the ground.</p>
<p style="text-align: justify; ">We conclude the study by discussing the implications of datafication, which is the process of collecting, analysing, and managing data, through the lens of data justice. Datafication can play a crucial role in improving the efficiency and transparency of income support schemes for farmers. However, it is important to ensure that the interests of primary beneficiaries are considered – the system should work as an enabling, not a disabling, factor. In many instances it acts as the latter, since the current system does not give primacy to the interests of farmers. We offer recommendations for policymakers and other stakeholders to strengthen these schemes and improve the welfare of farmers and end users.</p>
<hr />
<p style="text-align: justify; "><a href="https://cis-india.org/internet-governance/files/digital-tools-farmers-report/at_download/file" class="external-link"><b>Click to download the full report</b></a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/cis-privacy-international-digital-delivery-and-data-system-for-farmer-income-support'>https://cis-india.org/internet-governance/blog/cis-privacy-international-digital-delivery-and-data-system-for-farmer-income-support</a>
</p>
No publisher · By sameet · Tags: Digital Technologies, Data Governance, Internet Governance, Privacy · 2023-10-18T23:40:25Z · Blog Entry

CIS Comments on the National Strategy on Blockchain
https://cis-india.org/internet-governance/blog/cis-comments-on-the-national-strategy-on-blockchain
<p dir="ltr"> </p>
<p dir="ltr">This submission is a response by researchers at CIS to the report “National Strategy on Blockchain” prepared by the Ministry of Electronics and Information Technology (MEITY) under the Government of India.</p>
<p>We have put forward the following comments based on our analysis of the report.</p>
<ol><li style="list-style-type: upper-roman;" dir="ltr">
<h3>General Comments on the National Strategy</h3>
</li></ol>
<ol><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">There are currently a number of reports and policies on blockchain use across departments, ministries, and even states. The absence of a harmonised blockchain policy across all departments and institutions of government must be remedied.</p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">There are inherent dangers with viewing blockchain as a silver bullet solution. </p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">Informational concerns with blockchain do exist, and policies must be designed to reflect these concerns and minimise their occurrence.</p>
</li></ol>
<ol start="2"><li style="list-style-type: upper-roman;" dir="ltr">
<h3>Section Specific Comments </h3>
</li></ol>
<ol><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr"><strong>Section 6.1</strong> - There is a need for greater decentralisation and a shift away from a solely government-operated blockchain.</p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr"><strong>Section 6.2: </strong></p>
</li></ol>
<ul><li style="list-style-type: lower-alpha;" dir="ltr">
<p dir="ltr">The legality of blockchain also faces the hurdle of smart contracts </p>
</li><li style="list-style-type: lower-alpha;" dir="ltr">
<p dir="ltr">The RBI decision to halt the use of cryptocurrencies was struck down by the Supreme Court </p>
</li><li style="list-style-type: lower-alpha;" dir="ltr">
<p dir="ltr">The right to be forgotten exists as an extension of the right to privacy as well </p>
</li></ul>
<ol start="3"><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr"><strong>Section 7</strong> - There is a need for greater detail and granularity in the report’s analysis and in the suggestions and recommendations that it makes. </p>
</li></ol>
<div> </div>
<div>The full submission to MEITY can be found at: <a href="https://cis-india.org/internet-governance/national-strategy-on-blockchain">https://cis-india.org/internet-governance/national-strategy-on-blockchain</a></div>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/cis-comments-on-the-national-strategy-on-blockchain'>https://cis-india.org/internet-governance/blog/cis-comments-on-the-national-strategy-on-blockchain</a>
</p>
No publisher · By Vipul Kharbanda & Aman Nair · Tags: Blockchain, Bitcoin, Cryptocurrencies, Data Governance, Submissions, E-Governance · 2021-03-22T05:34:41Z · Blog Entry

(Updated) Information Security Practices of Aadhaar (or lack thereof): A documentation of public availability of Aadhaar Numbers with sensitive personal financial information
https://cis-india.org/internet-governance/information-security-practices-of-aadhaar-or-lack-thereof-a-documentation-of-public-availability-of-aadhaar-numbers-with-sensitive-personal-financial-information-1
<b>Since its inception in 2009, the Aadhaar project has been shrouded in controversy due to various questions raised about privacy, technological issues, welfare exclusion, and security concerns. In this study, we document numerous instances of publicly available Aadhaar Numbers along with other personally identifiable information (PII) of individuals on government websites. This report highlights four government projects run by various government departments that have made sensitive personal financial information and Aadhaar numbers public on the project websites.
</b>
<p> </p>
<h4>Read the updated report: <a class="external-link" href="https://cis-india.org/internet-governance/information-security-practices-of-aadhaar-or-lack-thereof/" target="_blank">Download</a> (pdf)</h4>
<h4>Read the first statement of clarification (May 16, 2017): <a class="external-link" href="https://cis-india.org/internet-governance/clarification-on-information-security-practices-of-the-aadhaar-report/" target="_blank">Download</a> (pdf)</h4>
<h4>Read the second statement of clarification (November 05, 2018): <a class="external-link" href="https://cis-india.org/internet-governance/blog/clarification-on-the-information-security-practices-of-aadhaar-report" target="_blank">Link to page</a> (html)</h4>
<hr />
<p><em>We are grateful to Yesha Paul and VG Shreeram for research support.</em></p>
<hr />
<p>In the last month, there have been various reports pointing out instances of the public disclosure of Aadhaar numbers through various databases, easily accessible on Twitter under the hashtag #AadhaarLeaks. Most of these reported disclosures contain personally identifiable information of beneficiaries or subjects of non-UIDAI databases, which hold Aadhaar numbers of individuals along with other personal identifiers. All of these public disclosures are symptomatic of a significant and potentially irreversible privacy harm; however, we wanted to point out another large fallout of such events: they create a ripe opportunity for financial fraud. For this purpose, we identified benefits disbursement schemes whose databases would need to store financial information about their subjects. During our research, we encountered numerous instances of publicly available Aadhaar numbers along with other PII of individuals on government websites. In this paper, we highlight four government projects run by various government departments with publicly available financial data and Aadhaar numbers. Our research focuses largely on data published by or pertaining to schemes where Aadhaar data is linked with banking information. We chose major government programmes using Aadhaar for payments and banking transactions, and found sensitive and personal data and information very easily accessible on these portals.</p>
<p> </p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/information-security-practices-of-aadhaar-or-lack-thereof-a-documentation-of-public-availability-of-aadhaar-numbers-with-sensitive-personal-financial-information-1'>https://cis-india.org/internet-governance/information-security-practices-of-aadhaar-or-lack-thereof-a-documentation-of-public-availability-of-aadhaar-numbers-with-sensitive-personal-financial-information-1</a>
</p>
No publisher · By Amber Sinha and Srinivas Kodali · Tags: Digital ID, Privacy, NDSAP, Data Protection, Accountability, Featured, Data Governance, Aadhaar, Digitisation, Homepage, Internet Governance, Data Management · 2019-03-13T00:29:01Z · Blog Entry

Global Governance Futures 2027 - Session 3, New Delhi
https://cis-india.org/internet-governance/news/global-governance-futures-2027-session-3-new-delhi
<b>The Global Governance Futures program (GGF) initiated by Global Public Policy Institute and supported by Robert Bosch Stiftung brings together young professionals to look ahead ten years and recommend ways to address global challenges. Sumandro Chattapadhyay will join Ankhi Das (Facebook) and Arun Mohan Sukumar (Observer Research Foundation) on Tuesday, January 17, to discuss the "data governance" scenarios developed by the GGF 2027 Fellows.
</b>
<p> </p>
<h4>About the Programme: <a href="http://www.ggfutures.net/about/ggf-program/">External Link</a>.</h4>
<h4>GGF 2027 Fellows: <a href="http://www.ggfutures.net/current-fellows/">External Link</a>.</h4>
<h4>GGF 2027 Session 3, New Delhi - Agenda: <a href="http://cis-india.org/internet-governance/files/ggf-2027-session-3-new-delhi-agenda/at_download/file">Download</a> (PDF).</h4>
<p> </p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/global-governance-futures-2027-session-3-new-delhi'>https://cis-india.org/internet-governance/news/global-governance-futures-2027-session-3-new-delhi</a>
</p>
No publisher · By sumandro · Tags: Privacy, Internet Governance, Data Governance, E-Governance, Digital Rights · 2017-01-15T11:46:27Z · Blog Entry

Inputs to the Report on the Non-Personal Data Governance Framework
https://cis-india.org/raw/inputs-to-report-on-non-personal-data-governance-framework
<b>This submission presents a response by researchers at the Centre for Internet and Society, India (CIS) to the draft Report on Non-Personal Data Governance Framework prepared by the Committee of Experts under the Chairmanship of Shri Kris Gopalakrishnan. The inputs are authored by Aayush Rathi, Aman Nair, Ambika Tandon, Pallavi Bedi, Sapni Krishna, and Shweta Mohandas (in alphabetical order), and reviewed by Sumandro Chattapadhyay.</b>
<p> </p>
<h4>Text of submitted inputs: <a href="https://cis-india.org/raw/files/cis-inputs-to-report-on-non-personal-data-governance-framework" target="_blank">Read</a> (PDF)</h4>
<h4>Report by the Committee of Experts on Non-Personal Data Governance Framework: <a href="https://static.mygov.in/rest/s3fs-public/mygov_159453381955063671.pdf" target="_blank">Read</a> (PDF)</h4>
<hr />
<h2>Inputs</h2>
<h3>Clause 3.7 (v): The role of the Indian government in the operation of data markets</h3>
<p>While the clause highlights the potential for India to be one of the top consumer and data markets of the world, it also sheds light on concerns about the possibility of data monopolies. The clause envisions the role of the Indian government as a regulator and a catalyst for domestic data markets.</p>
<p>In doing so, the clause does not acknowledge the proactive and dominant roles of the Indian government in the generation and reuse of data, based on existing data collection practices as well as the provisions already granted under the compulsory sharing provisions in the Report, and those that would continue to be granted by the Personal Data Protection Bill. In reality, the Indian government’s role is not just that of a catalyst but also of a key player, potentially with monopolistic market power, in the domestic data market, especially given the ongoing data marketplace initiatives detailed in published policy and vision documents. [1]</p>
<h3>Clause 3.8 (iv): Introducing collective privacy</h3>
<p>The introduction of collective privacy has initiated an overdue discussion at the policy level to arrive at privacy formulations that account for limitations in the contemporary dominant social, legal and ethical paradigms of privacy premised on individual interests and personal harm. The notion of collective privacy has garnered contemporary attention with the rise of data processing technologies and business models that thrive on the collection and processing of aggregate information.</p>
<p>While the Report acknowledges that collective privacy is an evolving concept, it does not attempt to define either the collective or what privacy could entail in the context of a collective. The postulation of collective privacy as a legally binding right is beset with challenges in both domestic and international legal frameworks. [2]</p>
<p>Central to these challenges is the representation of the group as an entity. While the Report illustrates harms, incurred by certain collectives, that collective privacy could protect against, these illustrated collectives are already recognised in law as rights-holding groups (society members, for example), and/or share pre-determined attributes (sexual orientation, for example).</p>
<p>The Report does not acknowledge that the very technological processes that may have rendered the articulation of collective privacy necessary are also intended to create ad-hoc, newer sets of individuals or groups with shared attributes. [3] In doing so, the Report furthers an ontology of groups as having intuitive, predetermined attributes that exist naturally or in law, whereas the intervention of data collection and processing technologies can determine shared group attributes afresh. Moreover, the Report treats predetermined attributes as static, and in doing so ignores a vast existing literature on the fluidity and intersectionality of the identities that individuals in groups occupy. [4] We fully appreciate the challenges these pose in determining the legal contours of collective privacy. Much of the Report’s recommendations are premised on the idea of a predetermined collective, rendering a more granular exploration of these ideas urgent.</p>
<p>Further, the Report also puts forth a limited conception of privacy as a safeguard against data-related harms that may be caused to collectives. In doing so, it dilutes the conceptualisation of individual privacy as articulated in Justice K. S. Puttaswamy (Retd.) and Anr. vs Union Of India And Ors. Notwithstanding this dilution, the illustrations also only indicate harms that may be caused by private actors. Any further recommendations should envision the harms that may also be caused by public data-driven processes, such as those incubated within the state machinery.</p>
<h3>Clause 4.1 (iii) and Recommendation 1: Defining Non-Personal Data</h3>
<p>The Report proposes a definition of non-personal data that includes (i) data that was never related to an identified or identifiable natural person, and (ii) aggregated, anonymised personal data such that individual events are “no longer identifiable”. In doing so, it attempts to extend protections to categories of data that fall outside the ambit of the Personal Data Protection Bill, 2019 (hereafter “PDP Bill”). The Report is cognizant of the fallible nature of anonymization techniques but fails to indicate how these shortcomings may be addressed. The test of anonymization for regarding data as non-personal requires further clarification. Anonymization, in and of itself, is an ambiguous standard: scholarship has indicated that anonymised data may never be completely anonymous. [5] Despite this, the PDP Bill proposes a zero-risk threshold for the anonymization of personal data, defining it as “such irreversible process of transforming or converting personal data to a form in which a data principal cannot be identified”. From a plain reading, it appears that the Report proposes a lower threshold for the anonymization requirements governing non-personal data. It is then unclear how non-personal data would differ from inferred data as described within the definition of personal data under the PDP Bill. This adds regulatory uncertainty, making it imperative for the Committee to articulate bright-line, risk-based principles and rules for the test of anonymization. Such rules should also indicate the factors to be taken into account in determining whether anonymization has occurred, and the timescale of reference for anonymization outcomes. [6]</p>
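One common, if partial, way to make an anonymization test concrete is a k-anonymity check over quasi-identifiers: a record set is k-anonymous if every combination of quasi-identifier values is shared by at least k records. The sketch below is purely illustrative (the function, field names, and data are our own assumptions, not drawn from the Report or the PDP Bill), and k-anonymity alone does not establish zero re-identification risk.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the size of the smallest group of records sharing the same
    combination of quasi-identifier values; a low k signals high
    re-identification risk for the individuals in that group."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(groups.values())
```

A regulator articulating risk-based rules could, for instance, require a minimum k over declared quasi-identifiers before data is regarded as non-personal, alongside other factors such as linkage with external datasets.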
<p>The recommendation also states that the data principal should "also provide consent for anonymisation and usage of this anonymized data while providing consent for collection and usage of his/her personal data". However, the framing of this recommendation fails to mention the responsibility of the data fiduciary to provide notice to the data principal about the usage of the anonymized data while seeking the data principal’s consent for anonymization. The notice should clearly indicate that the data principal’s consent is based on their knowledge of the use of the anonymized data.</p>
<h3>Clause 4.8 (i), (ii): Function of data custodians</h3>
<p>The Report does not make clear who may perform the role of data custodians. The use of the term ‘data fiduciary’ suggests the potential import of the definition specified under Clause 3.13 of the PDP Bill; however, this needs to be further clarified.</p>
<h3>Clause 4.8 (iii): Data custodians’ “duty of care”</h3>
<p>As is outlined in the following section on data trustees, it can be difficult for a single entity to maintain a duty of care and act in the best interest of a community when that community consists of sub-communities that may be marginalised.</p>
<p>Further, ‘duty of care’, ‘best interest’, and ‘absence of harm’ are not sufficient standards for data processing by data custodians. Recommendations obligating data custodians to uphold the rights of data principals, including economic and fundamental rights, need to be incorporated in the framework.</p>
<h3>Clause 4.9: Data trustees</h3>
<p>The committee’s suggestion that the “most appropriate representative body” should be the data trustee—often the corresponding government entity or community body—is reasonable at face value. However, in the absence of any clear principles defining what constitutes “most appropriate”, a number of potential issues can arise:</p>
<p><strong>Lack of means for selecting a data trustee:</strong> The report makes note of the fact that both private and public entities can be selected to be data trustees but offers no principles on how these data trustees can be selected, i.e. whether they are to be directly selected by the members of a community, and if so how. Any selection criteria or process prescribed has to keep in mind the following point regarding the potential lack of representation for marginalised communities that could arise from a direct selection of a data trustee by a group of people.</p>
<p><strong>Issues of having a single data trustee for large scale communities and when dealing with marginalised communities:</strong> The report assumes that in instances wherein a community is spread across a geographic region, or consists of multiple sub-communities, then the data trustee will be the closest shared government authority (for example, the Ministry of Health and Family Welfare, Government of India being the data trustee for data regarding diabetes among Indian citizens).</p>
<p><strong>This idea of a singular data trustee assumes that the ‘best interests’ of a community are uniform across that community. This can prove problematic especially when dealing with data obtained from marginalised communities that forms a part of a wider dataset.</strong> It is entirely possible to imagine that a smaller disenfranchised community may have interests that are not aligned with the general majority. In such a situation the Report is unclear as to whether the data trustee would have to ensure that the best interests of all groups are maintained, or would they be responsible for ensuring the best interests of the largest number of people within that community.
There are power differentials between citizens, government agencies, and other entities described by the Report. This places citizens at risk of abuse of power by government entities in their role as trustees, who are effectively being empowered through this policy framework as opposed to a representative mechanism. It is recommended that data trustees be appointed by relevant communities through clear and representative mechanisms. Additionally, any individual should be able to file complaints regarding the discharge of community trust by data trustees. This is necessary as any subsequent rights vested in the community can only be exercised through the data trustee, and become unenforceable in the lack of an appropriate data trustee.</p>
<p>Any legislation that arises on the basis of this report will therefore have to not only provide a means for selecting the data trustee, but also safeguards for ensuring that data collected from marginalised communities are used keeping in mind their specific best interests—with these best interests being informed through consultation with that community.</p>
<h3>Clause 4.10 (iii): Data trusts</h3>
<p>Section 4.10 (iii) notes that data custodians may voluntarily share data in these data trusts. However, it is unclear whether such sharing must be done with the express consent of the relevant data trustee.</p>
<h3>Clause 4.10 (iv): Mandatory sharing and competition</h3>
<p>The fundamental premise of a mandatory data sharing regime seems increasingly distant from its practical impacts. The EU, which earlier championed the cause, now seems reluctant to further it in the face of studies that point to the counterproductive impacts of such steps. Such mandates could apply to the huge volumes of first-party data that companies collect on their own assets, products, and services, even though such data are among the least likely to create barriers to entry or contribute to abuses of dominant positions. [7] A mandate is hence likely to have a greater chilling effect on innovation and investment than a pro-competition effect. The velocity of big data also adds to the futility of such data sharing mandates. [8] It is recommended that a sectoral analysis of this mandate be undertaken instead of an overarching stipulation.</p>
<p>The Report suggests extensive data sharing without addressing the extent of the obligation on private players to accept and process such requests. Meta-data about the data collected may be made easily accessible under transparency mandates; however, access to the detailed underlying data will be difficult in most cases given the current structure of entities operating in cyberspace, as evidenced by the lack of compliance with similar mandates issued by courts of law in the EU. Such a system can easily eliminate the comparative advantage of smaller players, helping larger players with more money at their disposal to grow while throttling the smaller players. It could also have serious implications for data quality and integrity through the sharing of erroneous data, and access to superior quality digital services in India may have to be compromised. If this regime is furthered without amendments to address these concerns, it might end up being counterproductive.</p>
<h3>Clause 5.1 (iv): Grievance redressal against state’s role</h3>
<p>This clause acknowledges the vast potential for government authorities and other bodies to abuse their power as data trustee. In addition, it should describe the setting up of impartial and accessible mechanisms for citizens to complain against such abuse of power and appropriate penalties, including the removal of the data trustee.</p>
<h3>Chapter 7, Recommendation 5: Purpose of data-sharing</h3>
<p>Recommendation 5 leaves scope for “national security” as a sovereign purpose for data sharing. This continues the trend of having an overarching national security clause, as in the Personal Data Protection Bill, 2019. Provisions could instead enable access to data for sovereign purposes without such a broad definition, replacing it with constitutional terms that limit it to the confines laid down in the Constitution. This would effectively curb misuse of the provision and firmly embed the proposed regulation of non-personal data in constitutional ethos, while also preventing future conflicts with fundamental rights.</p>
<p>Platform companies have leveraged their position in society to take on an ever-greater number of quasi-public functions, exercising new forms of unaccountable, transnational authority. It is not difficult to imagine this trend extending to non-platform companies, or being taken forward by these very entities, which also have access to a large chunk of non-personal data. A strict division between sovereign purposes and core public interest purposes seems difficult; however, it is imperative to define both more clearly, as a broad-based definition may facilitate reduced accountability. Separating government actions from sovereign purposes could bring to the fore the power imbalance between the State and its people, while in the case of non-governmental entities it would facilitate the encroachment of government functions by private players. Neither case may serve the best interest of the data generators, or of the people at large.</p>
<h3>Clause 7.1 (i): Data needs of law enforcement</h3>
<p>Clause 7.1 (i) allows the acquisition of data governed by this framework for crime mapping, devising anticipatory and preventive measures, and for investigations and law enforcement. While such access may be necessary for law enforcement in certain cases, it should be granted only with the express permission of a court of law. Blanket executive access creates a higher possibility of misuse by the people involved in law enforcement.</p>
<h3>Clause 7.2 (iv): Use of health data as a pilot</h3>
<p>The clause suggests the use of health sector data as a pilot use-case. This is highly undesirable, given the inherently sensitive nature of most data related to the health sector. The high vulnerability of data principals to harm from such data should act as a deterrent to using it as the pilot use-case. The mass availability of health sector data due to the pandemic creates further points of vulnerability that can be illegally monetised and misappropriated. It is recommended that this proposal be scrapped altogether.</p>
<h3>Clause 7.2 (iii): Power of government bodies</h3>
<p>As per this clause, data trustees or government bodies (who could also be acting as data trustees) can make requests for data sharing and place such data in appropriate data infrastructures or trusts. This presents a conflict of interest, as a data trust or government body can empower itself to be the data trustee. Such cases should be addressed within the scope of the framework.</p>
<h3>Clause 8.2 (vii): Level-playing field for all Indian actors</h3>
<p>In terms of this clause, the “Non-Personal Data Authority (Authority) will ensure a level playing field for all Indian actors to fulfil the objective of maximising Indian data’s value to the Indian economy”. The emphasis on ensuring a level playing field for only Indian actors, instead of a non-discriminatory platform for all concerned actors irrespective of the country or nationality of the actor, has the potential of violating India’s trade obligations under the WTO. Member states of the WTO are essentially restricted from discriminating between products and services coming from different WTO members, and between foreign and domestic products and services, unless they can avail of exceptions. There is also no clarity on what constitutes ‘Indian actors’: would a multinational corporation with its headquarters in a foreign state but its subsidiaries in India also come within its ambit?</p>
<h3>Clause 8.2 (x): Composition of the Authority</h3>
<p>Clause 8.2 (x) states that the Authority will have some members with relevant industry experience. However, apart from this clause, the report is silent on the composition of the Authority. The report recognises that the Authority will need individuals and organisations with specialised knowledge (i.e. data governance, technology, and the latest research and innovation in the field of non-personal data); however, it does not mention the role of civil society organisations or the need for their representation in the Authority.</p>
<p>The report frequently alludes to non-personal data being used in the best interest of the data principal; it is therefore essential that the composition of the Authority address the inherent asymmetry of power between the data principal and the State. Considering that the Authority will also be responsible for the sharing of community data and for determining the code of conduct for sharing such data, it is important that the Authority has adequate representation from civil society organisations, along with groups or individuals having the necessary technological and legal skills.</p>
<h3>Clause 8.2 (iii) and (vi): Roles and Responsibility of the Authority</h3>
<p>A majority of the datasets in the country are ‘mixed datasets’, i.e. they consist of both personal and non-personal data. However, there is a lack of clarity about the coordination, with regard to the regulation of such datasets, between the Data Protection Authority constituted under the PDP Bill and the Non-Personal Data Authority. The Report refers to the European Union approach, under which the Non-Personal Data Regulation applies to the non-personal data of mixed datasets, but if the non-personal and personal data parts are ‘inextricably linked’, the General Data Protection Regulation applies to the whole mixed dataset. However, it is unclear whether the Report proposes the same mechanism for the regulation of mixed datasets.</p>
<p>Further, the contours of the enforcement role of the Committee should be clearly specified. Will the Committee also have penal powers as prescribed for the Data Protection Authority under the PDP Bill? Also, will the privacy concerns emanating from the risk of re-identification of anonymised data be addressed by the NPD Committee or by the DPA under the PDP Bill? Ideally, it should be specified that any such privacy concerns fall within the domain of the DPA, since the data in question has then been converted into personal data, and the DPA will be empowered to deal with such issues.</p>
<p>
For more details visit <a href='https://cis-india.org/raw/inputs-to-report-on-non-personal-data-governance-framework'>https://cis-india.org/raw/inputs-to-report-on-non-personal-data-governance-framework</a>
</p>
No publisher | sumandro | Data Systems, Privacy, Researchers at Work, Digital Economy, Data Governance, Submissions | 2020-12-30T09:40:52Z | Blog Entry

The Government needs to make sure our emails don't destroy the environment
https://cis-india.org/internet-governance/blog/the-government-needs-to-make-sure-our-emails-dont-destroy-the-environment
<b>The Government's data centre policy must be more reflective of energy requirements and sustainable practices to effectively ensure that India's growing digital user base doesn't hurt the environment. </b>
<p dir="ltr">Ask people to name the first things they think of when you say climate change, and you can expect a few standard answers. Polar bears on shrinking ice caps, cities suffocating in car exhaust fumes, and mass deforestation are all sure to be somewhere on the list of responses. What you probably won’t find, however, is people discussing their social media. Or their email. Or any piece of the immeasurable amount of data that we produce on the internet on a daily basis. Yet all of this data is far from green, and is substantially increasing our carbon footprint. So the question arises: how is our data contributing to climate change, and what can policymakers do about it?</p>
<p>There is a tendency to focus on the turnover of hardware when discussing the climate impact of digital technology. And while this is an important element of the sector’s impact, it is essential that policymakers also recognise the impact of intangible elements of the digital ecosystem - such as data. Every piece of data that is created or transmitted across the internet has an environmental cost: the energy required (and, by extension, the fossil fuels burned) to operate the technology that hosts and transports the data.</p>
<p>Admittedly, the environmental impact of one person checking their Instagram, or even reading this article, is quite low. But aggregated across the estimated number of internet users in the <a href="https://www.tvtechnology.com/news/global-digital-population-grows-to-48b-in-2020">world</a>, digital technologies are estimated to be responsible for <a href="https://www.bbc.com/future/article/20200305-why-your-internet-habits-are-not-as-clean-as-you-think#:~:text=If%20we%20were%20to%20rather,of%20carbon%20dioxide%20a%20year.">1.7 billion tonnes of greenhouse gases</a> - about 4% of global greenhouse gas production, and roughly as much as the global airline industry produces.</p>
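As a rough sanity check on these figures (assuming a global total of about 42.5 billion tonnes of CO2-equivalent per year - a commonly cited ballpark, not a number from the article itself), the 1.7-billion-tonne estimate does correspond to roughly 4%:

```python
# Back-of-envelope check: digital technology's share of global emissions.
# The 1.7 Gt figure is quoted above; the ~42.5 Gt global total is an
# assumed ballpark for annual global greenhouse gas emissions.
digital_gt = 1.7   # billion tonnes CO2e attributed to digital technologies
global_gt = 42.5   # assumed annual global emissions, billion tonnes CO2e

share = digital_gt / global_gt
print(f"{share:.1%}")  # works out to about 4%
```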
<p>Another key element of data’s environmental impact is the establishment and operation of data centres. Data centres are facilities that house computing and ICT equipment. They are critical infrastructure components for the functioning of the internet and are used to store an immense volume of data. As the number of data centres has <a href="https://www.datacenterknowledge.com/industry-perspectives/data-center-dilemma-our-data-destroying-environment">exploded over the last decade</a>, they have come to account for 1% of all global greenhouse gas production on their own, and are expected to contribute <a href="https://www.computerworld.com/article/3431148/why-data-centres-are-the-new-frontier-in-the-fight-against-climate-change.html">14% of all emissions by 2040</a>.</p>
<h3>India’s growing data centre problem </h3>
<p>As the number of Internet users in India <a href="https://www.livemint.com/industry/media/india-s-active-internet-user-base-to-hit-639-mn-by-year-end-11588879564767.html">grows</a> at an exponential rate, it is imperative that the government take a proactive approach to creating sustainable infrastructure that can meet the ICT demands of the population.</p>
<p>Recently, the Ministry of Electronics and Information Technology released its draft policy on data centres. The policy outlines the government’s aim of establishing a large number of domestic data centres to store all data created within the country. The policy envisions India as one of the world leaders in data centre establishment and operation - on a par with countries such as <a href="https://www.eco-business.com/news/the-future-of-data-centres-in-the-face-of-climate-change/">Singapore, which currently holds that mantle</a>.</p>
<p>However, despite presenting this grand vision, the policy provides no specifics on how it plans to cope with the environmental stress that these new centres would bring. The policy states that ensuring uninterrupted power to these centres will be a key priority of the government - a burden that would be far beyond the capacity of current renewable energy sources in the country. Taking the example of Singapore, almost <a href="https://www.eco-business.com/news/the-future-of-data-centres-in-the-face-of-climate-change/">7% of all electricity consumption</a> in the country was from data centres. Such proportionate consumption by Indian data centres would realistically only be possible through an expanded use of fossil fuel generated electricity. </p>
<p dir="ltr">To give the policy some credit, it does mention ‘encouraging’ the use of renewable energy for data centres, but it fails to mention any specific schemes or measures to ensure that renewable energy investment and growth keep up with growing data centre energy demands.</p>
<h3>What can policy makers do? </h3>
<p>The question arises: how can policymakers make data centres more sustainable? Is there any way of reducing their energy consumption?</p>
<p>In short, not really right now. It has been estimated that <a href="https://www.computerworld.com/article/3431148/why-data-centres-are-the-new-frontier-in-the-fight-against-climate-change.html">40% of total energy consumption by data centres is used in cooling</a>. And while building these data centres in cooler environments would reduce these costs, converting Shimla, Coorg, Ooty, and other cool-weather hill stations into monuments of data centre infrastructure does not seem particularly practical. So, short of investing heavily in research and development for the future and conforming to global standards of data centre operation, there is not much the government can do now beyond focusing on the source of the energy that these centres use.</p>
<p>Keeping this in mind, the first step in evolving India’s data infrastructure has to be investing in and developing clear schemes for promoting renewable energy in the country. While India has seen positive growth in renewable energy infrastructure, it would require substantial private and public investment to meet its target of <a href="https://energy.economictimes.indiatimes.com/news/renewable/opinion-is-indias-renewable-energy-investment-on-track/76229607">450 GW of renewable energy by 2030</a>. Widespread development of data centres would only further stress India’s energy needs and would therefore require a commensurate increase in the amount of renewable energy available. As such, it is imperative that the state not stick to vague statements of ‘encouraging renewable energy’ or ‘collaborating between ministries’, and instead adopt a revised policy for developing renewable energy for digital infrastructure.</p>
<p>Such a step would safeguard the sustainability of the country’s digital infrastructure, and ensure that every Indian has access to both clean air and their email.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/the-government-needs-to-make-sure-our-emails-dont-destroy-the-environment'>https://cis-india.org/internet-governance/blog/the-government-needs-to-make-sure-our-emails-dont-destroy-the-environment</a>
</p>
No publisher | aman | Climate change, Environmental Impact, Environment, Data Governance, Data Centres, Data Management | 2021-01-25T14:17:29Z | Blog Entry

Demystifying Data Breaches in India
https://cis-india.org/internet-governance/blog/demistifying-data-breaches-in-india
<b>Despite the rate at which data breaches occur and are reported in the media, there seems to be little information about how and when they are resolved. This post examines the discourse on data breaches in India with respect to their historical forms, with a focus on how the specific terminology to describe data security incidents has evolved in mainstream news media reportage.</b>
<p>Edited by Arindrajit Basu and Saumyaa Naidu</p>
<hr />
<p dir="ltr" style="text-align: justify; ">India saw a <a href="https://theprint.in/india/despite-62-drop-in-data-breaches-india-among-top-5-nations-targeted-by-hackers-study-finds/917197/">62% drop in data breaches in the first quarter of 2022</a>. Yet, it ranked fifth on the list of countries most hit by cyberattacks, according to a 2022 <a href="https://surfshark.com/blog/data-breach-statistics-by-country">report by Surfshark</a>, a Netherlands-based VPN company. Another report <a href="https://analyticsindiamag.com/the-ridiculous-17-5-cr-for-a-data-breach/">on the cost of data breaches, researched by the Ponemon Institute and published by IBM</a>, reveals that breaches of about 29,500 records between March 2021 and March 2022 pushed the average cost of a breach up from INR 165 million in 2021 to INR 176 million in 2022, an increase of about 6.7%.</p>
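The year-on-year change implied by the rupee figures quoted from the IBM/Ponemon report is straightforward to verify (a minimal sketch using only those two amounts):

```python
# Year-on-year change in the average cost of a data breach in India,
# using the rupee figures quoted above (in INR millions).
cost_2021 = 165  # average breach cost in 2021, INR million
cost_2022 = 176  # average breach cost in 2022, INR million

increase = (cost_2022 - cost_2021) / cost_2021
print(f"{increase:.1%}")  # about a 6.7% year-on-year increase
```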
<p style="text-align: justify; "><span>These statistics are certainly a cause for concern, especially in the context of India’s rapidly burgeoning digital economy shaped by the pervasive platformization of private and public services such as welfare, banking, finance, health, and shopping among others. Despite the rate at which data breaches occur and are reported in the media, there seems to be little information about how and when they are resolved. This post examines the discourse on data breaches in India with respect to their historical forms, with a focus on how the specific terminology to describe data security incidents has evolved in mainstream news media reportage.</span></p>
<p style="text-align: justify; "><span>While expert articulations of cybersecurity in general, and data breaches in particular, tend to dominate the public discourse on data privacy, this post aims to situate broader understandings of data breaches within the historical context of India’s IT revolution, and to delve into specific concepts and terminology that have shaped the broader discourse on data protection. The late 1990s and early 2000s offer a useful point of entry into the genesis of the data security landscape in India.</span></p>
<h3><span></span><span>Data Breaches and their Predecessor Forms</span></h3>
<p style="text-align: justify; "><span></span><span>The articulation of data security concerns around the late 1990s and early 2000s wasn’t always consistent in deploying the phrase ‘data breach’ to signal cybersecurity concerns in India. Terms such as ‘data/identity theft’ and ‘data leak’ figure prominently in the public articulation of concerns with the handling of personal information by IT systems, particularly in the context of business process outsourcing (BPO) and e-commerce activities. Other pertinent terms such as ‘security breach’, ‘data security’, and ‘cyberfraud’ also capture the specificity of growing concerns around data outsourced to India. At the time, i.e. around the mid-2000s, regulatory frameworks were still evolving to accommodate and address the complexities arising from a dynamic reconfiguration of the telecommunications and IT landscape in India.</span></p>
<p dir="ltr" style="text-align: justify; ">Some of the formative cases that instantiate the usage of these terms are instructive for understanding shifts in the reporting of such incidents over time. The earliest, from 2002, concerned<a href="https://www.stop-source-code-theft.com/source-code-theft-cases-in-india/"> the theft and sale of source code</a> by an IIT Kharagpur student, who was caught by the CBI attempting to sell the code to two undercover FBI agents. A straightforward case of data theft was framed by media stories around the time as a <a href="https://timesofindia.indiatimes.com/iitian-held-for-stealing-software-source-code/articleshow/20389713.cms">cybercrime involving the illegal sale</a> of the source code of a software package, as <a href="https://economictimes.indiatimes.com/ip-laws-lax-but-us-firm-bets-on-india/articleshow/696197.cms?from=mdr">software theft of intellectual property in the context of outsourcing</a>, and as an instance of <a href="https://www.computerworld.com/article/2573515/at-risk-offshore.html">industrial espionage in poor nations without laws protecting foreign companies</a>. This case became the basis of the earliest calls for the protection of data privacy and security in the context of the Indian BPO sector. At the time, the Indian IT Act, 2000 only covered <a href="http://pavanduggal.com/wp-content/uploads/2016/01/India-Responds-to-Growing-Concerns-Over-Data-Security.pdf">unauthorized access and data theft from computers and networks, without any provisions for data protection, interception or computer forgery</a>. The BPO boom in India brought with it <a href="https://blj.ucdavis.edu/archives/vol-6-no-2/offshore-outsourcing-to-india.html">employment opportunities for India’s English-speaking, educated youth, but in the absence of concrete data privacy legislation</a>, the country was regarded as an unsafe destination for outsourcing, aside from the political ramifications concerning the loss of American jobs.</p>
<p dir="ltr" style="text-align: justify; ">In a major 2005 incident, employees of the Mphasis BFL call centre in Pune extracted sensitive bank account information of Citibank’s American customers to divert INR 1.90 crore into new accounts set up in India. Media coverage called the incident <a href="https://www.indiatoday.in/magazine/economy/story/20050502-pune-call-centre-fraud-rattles-india-booming-bpo-sector-787790-2005-05-01">India’s first outsourcing cyberfraud and a well-planned scam</a>, a <a href="https://economictimes.indiatimes.com/mphasis-call-centre-fraud-net-widens/articleshow/1077097.cms">cybercrime in a globalized world</a>, a case of <a href="https://timesofindia.indiatimes.com/home/sunday-times/deep-focus/indias-first-bpo-scam-unraveled/articleshow/1086438.cms">financial fraud and a scam</a> that required no hacking skills, and a <a href="https://www.infoworld.com/article/2668975/indian-call-center-workers-charged-with-citibank-fraud.html">case of data theft and misuse</a>. Within the ambit of cybercrime, media reports of these incidents refer to them as cases of “fraud”, “scam” and “theft”.</p>
<p dir="ltr" style="text-align: justify; ">Two other incidents in 2005 set the trend for a critical spotlight on data security practices in India. In a <a href="http://news.bbc.co.uk/2/hi/south_asia/4619859.stm">June 2005 incident, an employee of a Delhi-based BPO firm, Infinity e-systems, sold the account numbers and passwords of 1000 bank customers </a>to the British tabloid, The Sun. The Indian newspaper, Telegraph India, carried an online story headlined, “<a href="https://www.telegraphindia.com/india/bpo-blot-in-british-backlash-indian-sells-secret-data/cid/873737">BPO Blot in British Backlash: Indian Sells Secret Data</a>,” which reported that the employee, Karan Bahree, 24, was set up by a British journalist, Oliver Harvey. Harvey filmed Bahree accepting wads of cash for the stolen data. Bahree’s theft of sensitive information is described both as a data fraud and a leak in the above 2005 BBC story by Soutik Biswas. Another story on the incident calls it a “<a href="https://www.rediff.com/money/2005/jun/24bpo3.htm">scam” involving the leakage of credit card information</a>. The use of the term ‘leak’ appears consistently across other media accounts, such as a <a href="https://timesofindia.indiatimes.com/city/delhi/esearch-bpo-employee-sacked-still-missing/articleshow/1153017.cms">2005 story on Karan Bahree in the Times of India</a> and another story in the Economic Times about the Australian Broadcasting Corporation’s (ABC) sting operation, similar to the one in Delhi, describing the scam by the <a href="https://economictimes.indiatimes.com/hot-links/bpo/karan-bahree-part-ii-shot-in-australia/articleshow/1201347.cms?from=mdr">fraudsters as a leak</a> of the online information of Australians. Another media account describes the incident in more generic terms, as an “<a href="https://www.tribuneindia.com/2005/20050625/edit.htm">outsourcing crime</a>”.</p>
<p dir="ltr" style="text-align: justify; ">The other case concerned <a href="https://www.taylorfrancis.com/chapters/mono/10.4324/9781315610689-16/political-economy-data-security-bpo-industry-india-alan-chong-faizal-bin-yahya">four former employees of Parsec Technologies who stole classified information and diverted calls from potential customers</a>, causing a sudden drop in the productivity of call centres managed by the company in November 2005. Another call centre <a href="http://news.bbc.co.uk/1/hi/uk/7953401.stm">fraud came to light in 2009 through a BBC sting operation in which British reporters went to Delhi </a>and secretly filmed a deal with a man selling credit card and debit card details obtained from Symantec call centres, which sold Symantec’s Norton software. This BBC story uses the term “breach” to refer to the incident.</p>
<p dir="ltr">In the broader framing of these cases generally understood as cybercrime, which received transnational media coverage, the terms “fraud”, “leak”, “scam”, and “theft” appear interchangeably. The term “data breach” does not seem to be a popular or common usage in these media accounts of the BPO-related incidents. A broader sense of breach (of confidentiality, privacy) figures in the media reportage in <a href="https://economictimes.indiatimes.com/hot-links/bpo/cyber-crimes-can-the-west-trust-indian-bpos/articleshow/1157115.cms?from=mdr">implicitly racial terms of cultural trust</a>, as a matter of <a href="https://www.news18.com/news/business/bpo-staff-need-ethical-training-poll-248442.html">ethics and professionalism</a> and in the <a href="https://www.news18.com/news/business/sting-op-may-spell-doom-for-bpos-248260.html">language of scandal </a>in some cases.</p>
<p dir="ltr" style="text-align: justify; ">These early cases typify a specific kind of cybercrime concerning the theft or misappropriation of outsourced personal data belonging to British or American residents. What’s remarkable about these cases is the utmost sensitivity of the stolen personal information including financial details, bank account and credit/debit card numbers, passwords, and in one case, source code. While these cases rang the alarm bells on the Indian BPO sector’s data security protocols, they also directed attention to concerns around <a href="https://economictimes.indiatimes.com/hot-links/bpo/cyber-crimes-can-the-west-trust-indian-bpos/articleshow/1157115.cms?from=mdr">the training of Indian employees on the ethics of data confidentiality and vetting through psychometric tests</a> for character assessment. In the wake of these incidents, the National Association of Software and Service Companies (NASSCOM), an Indian non-governmental trade and advocacy group,<a href="https://www.computerworld.com/article/2547959/outsourcing-to-india--dealing-with-data-theft-and-misuse.html"> launched a National Skills Registry for IT professionals to enable employers to conduct background checks</a> in 2006.</p>
<p dir="ltr" style="text-align: justify; ">These data theft incidents earned India a global reputation as an unsafe destination for business process outsourcing, seen to lack both a culture of maintaining data confidentiality and concrete data protection legislation at the time. Importantly, the incidents of data theft or misappropriation were also traceable to a known source - a BPO employee or a group of malefactors - who often sold sensitive data belonging to foreign nationals to others in India.</p>
<p dir="ltr" style="text-align: justify; ">The phrase “data leak” also caught on in another register, in the context of the widespread use of camera-equipped mobile phones in India. The 2004 Delhi MMS case offers an instance of a data leak, recapitulating the language of scandal in moralistic terms.</p>
<h3 dir="ltr">The Delhi MMS Case</h3>
<p dir="ltr" style="text-align: justify; ">The infamous 2004 incident involved two underage Delhi Public School (DPS) students who recorded themselves in a sexually explicit act on a cellular phone. After a falling out, the male student passed the low-resolution clip, in which his female friend’s face is visible, on to a friend. The clip, distributed far and wide in India, ended up on the well-known e-shopping and auction website Bazee.com, leading to <a href="https://indiancaselaw.in/avnish-bajaj-vs-state-dps-mms-scandal-case/">the arrest of the website’s CEO, Avnish Bajaj, for hosting the listing for sale</a>. Another similar case in 2004 mimicked the mechanics of visual capture through hand-held MMS-enabled mobile phones. A two-minute MMS of a top South Indian actress <a href="https://timesofindia.indiatimes.com/india/web-of-sleaze-now-nude-video-of-top-actress/articleshow/966048.cms">taking a shower went viral on the Internet in 2004, the year when another MMS of two prominent Bollywood actors kissing</a> had already done the rounds. The <a href="https://www.journals.upd.edu.ph/index.php/plaridel/article/view/2392">MMS case also marked the onset of a national moral panic around the amateur uses of mobile phone technologies</a>, capable of corrupting young Indian minds under a sneaky regime of new media modernity. The MMS case - not strictly a classic data breach, which typically concerns non-visual information stored in databases - became an iconic case of a data leak, framed in the media as <a href="https://www.telegraphindia.com/india/scandal-in-school-shakes-up-delhi/cid/1667531">a scandal that shocked the country</a>, with calls for the regulation of mobile phone use in schools. The case continued its scandalous afterlife in the <a href="https://www.heraldgoa.in/Edit/dev-ds-leni-has-a-dps-mms-scandal-connection-/21344">2009 Bollywood film Dev D</a> and the <a href="https://indianexpress.com/article/entertainment/entertainment-others/delhi-mms-scandal-inspires-dibakars-love-sex-aur-dhoka/">2010 film Love, Sex and Dhokha</a>.</p>
<p dir="ltr" style="text-align: justify; ">Taken together, the BPO data thefts and frauds and the data leak scandals prefigure the contemporary discourse on data breaches in the second decade of the 21st century, or what may also be called the Decade of Datafication. The launch of the Indian biometric identity project, Aadhaar, in 2009, which linked access to public services and welfare delivery with biometric identification, resulted in large-scale data collection of the scheme’s subscribers. Such linking raised the spectre of state surveillance as alleged by the critics of Aadhaar, marking a watershed moment in the discourse on data privacy and protection.</p>
<h3 dir="ltr">Aadhaar Data Security and Other Data Breaches</h3>
<p dir="ltr" style="text-align: justify; ">Aadhaar was challenged in the Indian Supreme Court in 2012 when <a href="https://www.outlookindia.com/website/story/worries-about-the-aadhaar-monster/296790">it was made mandatory for welfare and other services such as banking, taxation and mobile telephony</a>. The national debate on the status of privacy as a cultural practice in Indian society and a fundamental right in the Indian Constitution led to two landmark judgments - the <a href="https://main.sci.gov.in/supremecourt/2012/35071/35071_2012_Judgement_24-Aug-2017.pdf">2017 Puttaswamy ruling</a> holding privacy to be a constitutional right subject to limitations, and <a href="https://indiankanoon.org/doc/127517806/">the 2018 Supreme Court judgment holding mandatory Aadhaar to be constitutional only for welfare and taxation but not for other services</a>.</p>
<p dir="ltr" style="text-align: justify; ">While these judgments sought to rein in Aadhaar’s proliferating mandatory uses, biometric verification remained the most common mode of identity authentication with <a href="https://www.businesstoday.in/latest/trends/story/aadhaar-not-mandatory-yet-organisations-pose-it-as-a-mandatory-document-335550-2022-05-29">most organizations claiming it to be mandatory for various purposes</a>. During the same period from 2010 onwards, a range of data security events concerning Aadhaar came to light. These included <a href="https://www.firstpost.com/tech/news-analysis/aadhaar-security-breaches-here-are-the-major-untoward-incidents-that-have-happened-with-aadhaar-and-what-was-actually-affected-4300349.html">app-based flaws, government websites publishing Aadhaar details of subscribers, third party leaks of demographic data, duplicate and forged Aadhaar cards and other misuses</a>.</p>
<p dir="ltr" style="text-align: justify; ">In 2015, the Indian government launched its ambitious <a href="https://indiancc.mygov.in/wp-content/uploads/2021/08/mygov-10000000001596725005.pdf">Digital India Campaign to provide government services to Indian citizens</a> through online platforms. Yet, data security breach incidents continued to increase, particularly the trade in the sale and purchase of sensitive financial information related to bank accounts and credit card numbers. The online availability of <a href="https://www.livemint.com/Industry/l5WlBjdIDXWehaoKiuAP9J/India-unprepared-to-tackle-online-data-security-report.html">a rich trove of data, accessible via a simple Google search without the use of any extractive software or hacking skills </a>within a thriving shadow economy of data buyers and sellers makes India a particularly vulnerable digital economy, especially in the absence of robust legislation. The lack of awareness around digital crimes and low digital literacy further exacerbates the situation given that datafication via government portals, e-commerce, and online apps has outpaced the enforcement of legislative frameworks for data protection and cybersecurity.</p>
<p dir="ltr" style="text-align: justify; ">In the context of Aadhaar data security issues, the term “data leak” seems to have more traction in media stories followed by the term “security breach”. Given the complexity of the myriad ways in which Aadhaar data has been breached, terms such as <a href="https://techcrunch.com/2022/06/13/aadhaar-leak-pm-kisan/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAADvQXtC19Gj80LSKVc5jLwnRsREalvM2f6dV3N9KmCs8be6_1Zbvu3J6abPmBxhLlUooLiOjg4JktYDDCXr0OYYvOZ5XFlXa6DfCJk97TvMXM-cs3uJbCJBA-ePqvAC5K4qGZSyDB4OykMEOIKXJpB0CTOourPRc5dBxFFq5JXlB">data leak and exposure</a> (of <a href="https://zeenews.india.com/personal-finance/aadhaar-data-breach-over-110-crore-indian-farmers-aadhaar-card-data-compromised-2473666.html">11 crore Indian farmers’ sensitive information</a>) add to the specificity of the data security compromise. The term “fraud” also makes a comeback in the context of <a href="https://www.business-standard.com/article/economy-policy/india-s-aadhaar-id-system-delivers-benefits-but-at-risk-of-widespread-fraud-122062400124_1.html">Aadhaar-related data security incidents</a>. 
These cases represent a mix of data frauds involving<a href="https://economictimes.indiatimes.com/news/india/alarm-over-fake-id-printing-websites-using-customer-data-for-cyber-fraud/articleshow/94742646.cms"> fake identities</a>, <a href="https://indianexpress.com/article/cities/delhi/in-new-age-data-theft-fraudsters-steal-thumb-prints-from-land-registries-7914530/">theft of thumb prints </a>for instance from land registries and inadvertent data leaks in numerous incidents involving <a href="https://techcrunch.com/2019/01/31/aadhaar-data-leak/">government employees in Jharkhand</a>, v<a href="https://www.firstpost.com/india/aadhaar-data-leak-details-of-7-82-cr-indians-from-ap-and-telangana-found-on-it-grids-database-6448961.html">oter ID information of Indian citizens in Andhra Pradesh and Telangana</a> and <a href="https://www.thehindu.com/sci-tech/technology/major-aadhaar-data-leak-plugged-french-security-researcher/article26584981.ece">activist reports of Indian government websites leaking Aadhaar data</a>.</p>
<p dir="ltr" style="text-align: justify; ">Aadhaar-related data security events parallel the increase in corporate data breaches during the decade of datafication. The term “data leak” again alternates with the term “data breach” in most media accounts while other terms such as “theft” and “scam” all but disappear in the media coverage of corporate data breaches.</p>
<p dir="ltr" style="text-align: justify; ">From 2016 onwards, incidents of corporate data breaches in India continued to rise. A massive <a href="https://thewire.in/banking/debit-card-breach-india-banking">debit card data breach involving the YES Bank ATMs and point-of-sale (PoS) machines </a>compromised through malware between May and July of 2016 resulted in the exposure of ATM PINs and non-personal identifiable information of customers. It went <a href="https://www.livemint.com/Industry/Ope7B0jpjoLkemwz6QXirN/SBI-Yes-Bank-MasterCard-deny-data-breach-of-own-systems.html">undetected for nearly three</a> months. Another data leak in 2018 concerned a <a href="https://www.zdnet.com/article/another-data-leak-hits-india-aadhaar-biometric-database/">system run by Indane, a state-owned utility company, which allowed anyone to download private information on all Aadhaar holders </a>including their names, services they were connected to and the unique 12-digit Aadhaar number. Data breaches continued to be reported in India concurrent with the incidents of data mismanagement related to Aadhaar. Some <a href="https://www.csoonline.com/article/3541148/the-biggest-data-breaches-in-india.html">prominent data breaches included </a>a cyberattack on the systems of airline data service provider SITA resulting in the leak of Air India passenger data, leakage of the personal details of the Common Admission Test (CAT) applicants, details of credit card and order preferences of Domino’s pizza customers on the dark web, leakage of COVID-19 patients’ test results leaked by government websites, user data of Justpay and Big Basket for sale on the dark web and an SBI data breach among others between 2019 and 2021.</p>
<p dir="ltr" style="text-align: justify; ">The media reportage of these data breaches use the term “cyberattack” to describe the activities of hackers and cybercriminals operating within a<a href="https://www.thehindu.com/sci-tech/technology/internet/most-damaging-cybercrime-services-are-cheap-on-the-dark-web/article37004587.ece"> shadow economy or the dark web</a>. Recent examples of cyberattacks by hackers who leak user data for sale on the dark web include <a href="https://indianexpress.com/article/technology/tech-news-technology/mobikwik-database-leaked-on-dark-web-company-denies-any-data-breach-7251448/">8.2 terabytes of 110 million sensitive financial data (KYC details, Aadhaar, credit/debit cards and phone numbers) of the payments app MobiKwik users</a>, <a href="https://www.firstpost.com/tech/news-analysis/dominos-india-data-breach-name-location-mobile-number-email-of-18-crore-orders-up-for-sale-on-dark-web-9650591.html">180 million Domino’s pizza orders (name, location, emails, mobile numbers),</a> and <a href="https://techcrunch.com/2022/07/18/cleartrip-data-breach-dark-web/">Flipkart’s Cleartrip users’ data</a>. In these incidents again, three terms appear prominently in the media reportage - cyberattack, data breach, and leak. The term “data breach” remains the most frequently used epithet in the media coverage of the lapses of data security. While it alternates with the term “leak” in the stories, the term “data breach” appears consistently across most headlines in the news stories.</p>
<p dir="ltr">The exposure of sensitive, personal, and non-personal data by public and private entities in India is certainly a cause for concern, given the ongoing data protection legislative vacuum.</p>
<p dir="ltr" style="text-align: justify; ">The media coverage of data breaches tends to emphasize the quantum of compromised user data aside from the types of data exposed. The media framing of these breaches in <a href="https://www.livemint.com/technology/tech-news/indian-firms-lost-176-million-to-data-breaches-last-fiscal-11658914231530.html">quantitative terms of financial loss</a> as well as the <a href="https://www.indiatoday.in/technology/news/story/personal-data-of-3-4-million-paytm-mall-users-reportedly-exposed-in-2020-data-breach-1980690-2022-07-27">magnitude</a> and the <a href="https://www.moneycontrol.com/news/business/banks/indian-banks-reported-248-data-breaches-in-last-four-years-says-government-8940891.html">number of breaches</a> certainly highlights the gravity of these incidents but harm to individual users is often not addressed.</p>
<h3 dir="ltr">Evolving Terminology and the Source of Data Harms</h3>
<p dir="ltr" style="text-align: justify; ">The main difference in the media reportage of the BPO cybersecurity incidents during the early aughts and the contemporary context of datafication is the usage of the term, “data breach”, which figures prominently in contemporary reportage of data security incidents but not so much in the BPO-related cybercrimes.</p>
<p dir="ltr" style="text-align: justify; ">THe BPO incidents of data theft and the attendant fraud must be understood in the context of the anxieties brought on by a globalizing world of Internet-enabled systems and transnational communications. In most of these incidents regarded as cybercrimes, the language of fraud and scam ventures further to attribute such illegal actions of the identifiable malefactors to cultural factors such as lack of ethics and professionalism.The usage of the term “data leak” in these media reports functions more specifically to underscore a broader lapse in data security as well as a lack of robust cybersecurity laws. The broader term, “breach”, is occasionally used to refer to these incidents but the term, “data breach” doesn’t appear as such.</p>
<p dir="ltr" style="text-align: justify; ">The term “data breach” gains more prominence in media accounts from 2009 onwards in the context of Aadhaar and the online delivery of goods and services by public and private players. The term “data breach” is often used interchangeably with the term “leak” within the broader ambit of cyberattacks in the corporate sector. The media reportage frames Aadhaar-related security lapses as instances of security/data breaches, data leaks, fraud, and occasionally scam.</p>
<p dir="ltr" style="text-align: justify; ">In contrast to the handful of data security cases in the BPO sector, data breaches have abounded in the second decade of the twenty-first century. What further differentiates the BPO-related incidents to the contemporary data breaches is the source of the data security lapse. Most corporate data breaches remain attributable to the actions of hackers and cybercriminals while the BPO security lapses were traceable back to ex-employees or insiders with access to sensitive data. We also see in the coverage of the BPO-related incidents, the attribution of such data security lapses to cultural factors including a lack of ethics and professionalism often in racial overtones. The media reportage of the BBC and ABC sting operations suggests that the India BPOs lack of preparedness to handle and maintain personal data confidentiality of foreigners point to the absence of a privacy culture in India. Interestingly, this transnational attribution recurs in a different form in the national debate on <a href="https://huffpost.netblogpro.com/archive/in/entry/indians-don-t-care-about-privacy-but-thankfully-the-law-will-teach-them-what-it-means_a_23179031">Aadhaar and how Indians don’t care about their privacy</a>.</p>
<p dir="ltr" style="text-align: justify; ">The question of the harms of data breaches to individuals is also an important one. In the discourse on contemporary data breaches, the actual material harm to an individual user is rarely ever established in the media reportage and generally framed as potential harm that could be devastating given the sensitivity of the compromised data. The harm is reported to be predominantly a function of organizational cybersecurity weakness or attributed to hackers and cybercriminals.</p>
<p dir="ltr" style="text-align: justify; ">The reporting of harm in collective terms of the number of accounts breached, financial costs of a data breach, the sheer number of breaches and the global rankings of countries with the highest reported cases certainly suggests a problem with cybersecurity and the lack of organizational preparedness. However, this collective framing of a data breach’s impact usually elides an individual user’s experience of harm. Even in the case of Aadhaar-related breaches - a mix of leaking data on government websites and other online portals and breaches - the notion of harm owing to exposed data isn’t clearly established. This is, however, different from the <a href="https://scroll.in/article/1013700/six-types-of-problems-aadhaar-is-causing-and-safeguards-needed-immediately">extensively documented cases of Aadhaar-related issues</a> in which welfare benefits have been denied, identities stolen and legitimate beneficiaries erased from the system due to technological errors.</p>
<h3 dir="ltr">Future Directions of Research</h3>
<p dir="ltr" style="text-align: justify; ">This brief, qualitative foray into the media coverage of data breaches over two decades has aimed to trace the usage of various terms in two different contexts - the Indian BPO-related incidents and the contemporary context of datafication. It would be worth exploring at length, the relationship between frequent reports of data breaches, and the language used to convey harm in the contemporary context of a concrete data protection legislation vacuum. It would be instructive to examine the specific uses of the terms such as “fraud”, “leak”, “scam”, “theft” and “breach” in media reporting of such data security incidents more exhaustively. Such analysis would elucidate how media reportage shapes public perception towards the safety of user data and an anticipation of attendant harm as data protection legislation continues to evolve.</p>
<p dir="ltr" style="text-align: justify; ">Especially with Aadhaar, which represents a paradigm shift in identity verification through digital means, it would be useful to conduct a sentiment analysis of how biometric identity related frauds, scams, and leaks are reported by the mainstream news media. A study of user attitudes and behaviours in response to the specific terminology of data security lapses such as the terms “breach”, “leak”, “fraud”, “scam”, “cybercrime”, and “cyberattack” would further contribute to how lay users understand the gravity of a data security lapse. Such research would go beyond expert understandings of data security incidents that tend to dominate media reportage to elucidate the concerns of lay users and further clarify the cultural meanings of data privacy.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/demistifying-data-breaches-in-india'>https://cis-india.org/internet-governance/blog/demistifying-data-breaches-in-india</a>
</p>
No publisherPawan SinghPrivacyInternet GovernanceData GovernanceData ProtectionData Management2022-10-17T16:14:03ZBlog EntryCIS comments on the Revised Non Personal Governance Framework Report
https://cis-india.org/internet-governance/blog/cis-comments-on-the-revised-non-personal-governance-framework-report
<b></b>
<p> </p>
<p>This submission presents a response by researchers at the Centre for Internet and Society, India (CIS) to the second version of the Report on Non-Personal Data Governance Framework prepared by the Committee of Experts (hereafter “Report”). CIS had also provided inputs to the draft version of the Report published in July 2020.</p>
<p> </p>
<h3>Executive Summary</h3>
<p>It is beyond doubt that there must exist a regulatory framework that governs the rights accorded to individuals, businesses, and the state in the context of the use of non-personal data. However, based on the recommendations in the Report, we have found that the following areas require greater clarity and deliberation before being enacted. <br /><br /></p>
<h3>General Comments</h3>
<p><strong>1. Examining the economic considerations underpinning the non-personal data
governance framework</strong></p>
<p>a. Open data access is not enough to offset network effects and existing power imbalances in key digital sectors</p>
<p>b. Increased data collection leads to data appropriation<br /><br /></p>
<p><strong>2. Addressing the societal concerns that arise with sharing Non-Personal Data</strong></p>
<p>a. De-anonymization and harm linked with sharing Non-Personal Data</p>
<p>b. Sharing non-personal data could result in a culture of data maximisation</p>
<h3>Section Specific Comments</h3>
<div><strong>1. Section 7.2-Non-Personal Data Roles- Community</strong></div>
<div>a. Vague and very wide definition of Community</div>
<div> </div>
<div><strong>2. Section 7.7- Data Trustee</strong></div>
<div>a. Need for greater clarity on defining harmful activities and the appropriateness of Data Trustees</div>
<div> </div>
<div><strong>3. Section 7.4(iv)- ‘Duty of care’ of data custodian</strong></div>
<div>a. Lack of clarity on terms including active misuse and harm </div>
<div> </div>
<div><strong>4. Section 7.10 -Non-Personal Data Authority</strong></div>
<div>a. Composition of the Authority</div>
<div>b. Roles and Responsibilities of the Authority</div>
<div> </div>
<div><strong>5. Section 9.3 - Copyright Law</strong></div>
<div>a. Failure to recognise copyright in underlying data of datasets</div>
<div>b. Consider advocating use of limitations and exceptions in copyright law to limit
ownership in datasets and underlying data</div>
<div> </div>
<p>The full version of the submission can be found at: <a href="http://www.cis-india.org/internet-governance/cis-comments-revised-npd-report">http://www.cis-india.org/internet-governance/cis-comments-revised-npd-report</a></p>
<div> </div>
<div> </div>
<div> </div>
<div> </div>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/cis-comments-on-the-revised-non-personal-governance-framework-report'>https://cis-india.org/internet-governance/blog/cis-comments-on-the-revised-non-personal-governance-framework-report</a>
</p>
No publisherPallavi Bedi, Anubha Sinha and Aman NairNon personal dataData Governance2021-03-22T05:39:45ZBlog EntryUnpacking Data Protection Law: A Visual Representation
https://cis-india.org/internet-governance/blog/unpacking-data-protection-law-a-visual-representation
<b>This visual explainer unpacking data protection law was developed by Amber Sinha (research) and Pooja Saxena (design), and published as part of the Data Privacy Week celebrations on the Privacy International blog. Join the conversation on Twitter using #dataprivacyweek.</b>
<p> </p>
<h4>Cross-posted from <a href="https://medium.com/@privacyint/unpacking-data-protection-300e51c5f9b5" target="_blank">Privacy International blog</a>.</h4>
<h4>Credits: Flag illustrations, when not created by the authors, are from <a href="http://www.freepik.com/" target="_blank">Ibrandify / Freepik</a>.</h4>
<hr />
<img src="https://github.com/cis-india/website/blob/master/img/AS-PS_UnpackingDataProtectionLaw_2018_01.png?raw=true" alt="Data protection law systems are usually seen as a dichotomy between the United State of America and the European Union" width="80%" />
<img src="https://github.com/cis-india/website/blob/master/img/AS-PS_UnpackingDataProtectionLaw_2018_02.png?raw=true" alt="This dichotomy is not an accurate representation of the issue. Today, close to a hundred countries follow the omnibus approach, while less than a dozen, including the US, use the sectoral approach." width="80%" />
<img src="https://github.com/cis-india/website/blob/master/img/AS-PS_UnpackingDataProtectionLaw_2018_03.gif?raw=true" alt="If too many laws apply to the same actor, compliance becomes difficult. As a result, the sectoral approach to data protection is becoming less relevant." width="80%" />
<img src="https://github.com/cis-india/website/blob/master/img/AS-PS_UnpackingDataProtectionLaw_2018_04.png?raw=true" alt="Data protection regulation involve interaction between regulators and industry." width="80%" />
<img src="https://github.com/cis-india/website/blob/master/img/AS-PS_UnpackingDataProtectionLaw_2018_05.gif?raw=true" alt="To be an effective data protection regulator, an entire range of regulatory tools are required, which the regulator can use to reward, support and sanction." width="80%" />
<p> </p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/unpacking-data-protection-law-a-visual-representation'>https://cis-india.org/internet-governance/blog/unpacking-data-protection-law-a-visual-representation</a>
</p>
No publisheramberData GovernanceInternet GovernanceData ProtectionPrivacy2018-02-15T13:22:00ZBlog EntryMediaNama - #NAMAprivacy: The Future of User Data (Delhi, Sep 6)
https://cis-india.org/internet-governance/news/medianama-namaprivacy-the-future-of-user-data-delhi-sep-6
<b>MediaNama is hosting a full day conference on "the future of user data in India", on the 6th of September 2017, which is particularly significant given the recent Supreme Court ruling on the fundamental right to privacy, and two government consultations: one at the TRAI, and another at MEITY. This discussion is supported by Facebook, Google, and Microsoft. Sumandro Chattapadhyay, Research Director, will participate as a speaker in the session titled "regulating storage, sharing and transfer of data."</b>
<p> </p>
<h4>Details</h4>
<p>Time: September 6th 2017, 9 am to 4:30 pm</p>
<p>Venue: Gulmohar Hall, India Habitat Centre, Lodhi Road (please enter from Gate #3)</p>
<p>Agenda: <a href="https://www.medianama.com/2017/08/223-agenda-namaprivacy-future-of-user-data/">https://www.medianama.com/2017/08/223-agenda-namaprivacy-future-of-user-data/</a></p>
<h4>Announced Speakers</h4>
<ul><li>Chinmayi Arun, Centre for Communication Governance at NLU Delhi</li>
<li>Malavika Raghavan, IFMR Finance Foundation</li>
<li>Renuka Sane, NIPFP</li>
<li>Smitha Krishna Prasad, Centre for Communication Governance at NLU Delhi</li>
<li>Ananth Padmanabhan, Carnegie India</li>
<li>Avinash Ramachandra, Amazon</li>
<li>Hitesh Oberoi, Naukri</li>
<li>Jochai Ben-Avie, Mozilla</li>
<li>Mrinal Sinha, Mobikwik</li>
<li>Murari Sreedharan, Bankbazaar</li>
<li>Sumandro Chattapadhyay, Centre for Internet and Society</li></ul>
<h4>Facilitators</h4>
<ul><li>Saikat Datta, Asia Times Online</li>
<li>Shashidar KJ, MediaNama</li>
<li>Nikhil Pahwa, MediaNama</li></ul>
<h4>Attendees</h4>
<p>We have confirmed 140+ attendees from: Adobe, Amber Health, Amazon, APCO Worldwide, Bank Bazaar, Bloomberg-Quint, Blume Ventures, Broadband India Forum, Business Standard, BuzzFeed News, CCOAI, CEIP, Change Alliance, Chase India, CIS, CNN News18, DEF, Deloitte, DNA, DSCI, E2E Networks, British High Commission, Eurus Network Services, FICCI, Firefly Networks, Flipkart, Forrester Research, Fortumo, DoT, MEITY, IAMAI, IBM, ICRIER, IFMR Finance Foundation, IIMC, Indian Law Institute, Indic Project, Info Edge, ISPAI, IT for Change, ITU-APT, Jamia Millia Islamia, Jindal Global Law School, Mimir Technologies, Mozilla, Newslaundry, NIPFP, Nishith Desai Associates, NIXI, NLU-Delhi, ORF, Paytm, PLR Chambers, PRS Legislative Research, Publicis Groupe, Quartz India, Reliance Jio, Reuters, Saikrishna & Associates, Scroll.in, SFLC.in, Spectranet, The Economics Times, The Indian Express, The Times of India, The Wire, Times Internet, Twitter, and more.</p>
<p> </p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/medianama-namaprivacy-the-future-of-user-data-delhi-sep-6'>https://cis-india.org/internet-governance/news/medianama-namaprivacy-the-future-of-user-data-delhi-sep-6</a>
</p>
No publishersumandroBig DataDigital EconomyPrivacyInternet GovernanceData GovernanceData ProtectionDigital Rights2017-09-05T10:22:12ZBlog Entry