Interoperability and Portability as a Lever to Enhance User Choice and Privacy in Messaging Platforms
Over the past year, digital platforms have been making headlines in various countries for their acquisitions, raising questions about the anti-competitive nature of their behaviour. In the US, 46 states joined the Federal Trade Commission in filing antitrust cases against Facebook in December 2020, accusing it of buying out rivals such as WhatsApp and Instagram[1]. Recently, a US federal court dismissed the states' case as tardy and found the FTC's complaint "legally insufficient"[2]. Nonetheless, one of the solutions proposed for this problem by various experts and politicians is to break up Facebook[3].
Influential figures in India, such as Vijay Shekhar Sharma (CEO, Paytm), argued along similar lines when WhatsApp updated its privacy policy to share data with Facebook, suggesting that a movement of users towards Signal could break Facebook's monopoly[4]. It is tempting to believe that breaking up a platform or switching to an alternative will end its monopoly, but does that hold in reality? This post tries to answer that question. Section 1 discusses the importance of interoperability and portability among messaging platforms for tackling monopoly, which, in turn, helps enhance user outcomes such as choice and privacy. Section 2 discusses the enablers, legislative reimagining, and structural changes required in terms of technology to enable interoperability and portability among messaging platforms. Section 3 discusses the cost structure and profitability of a proposed message gateway entity, followed by the conclusion.
1. Introduction
In the platform economy, the formation of a monopoly is almost inevitable, especially among messaging platforms, because of (a) network effects and (b) the lack of interoperability and portability between messaging platforms[5]. As network effects strengthen, more users get locked into a single messaging platform, leading to a lack of user choice (in terms of switching platforms) and to privacy concerns (as messaging platforms grow larger, they pose a higher risk in terms of data breaches, third-party data sharing, etc.). For instance, as a WhatsApp user, it is difficult for me to switch to any other messaging platform because my friends, family and business/work contacts still operate on WhatsApp. Messaging platforms also use the network effect to their favour (a) by increasing switching costs and (b) by creating a high barrier to entry within the market[6].
If there were interoperability between messaging platforms, I could choose between them freely, thereby negating some of the aforementioned limitations. Therefore, to create a competitive environment among messaging platforms and enhance user choice and privacy, it is crucial to have an interoperability and portability framework. To deploy interoperability and portability, it is imperative to have coordination among platforms even as they compete for individual market share[7]. Interoperability and portability will also bring in healthy competition, as platforms will be nudged to explore alternative value propositions to remain competitive in the market[8]. One outcome of this could be better consumer protection through innovation in privacy safeguards. In addition, interoperability and portability could lower the barrier to entry (by breaking the network effect), which could, in turn, increase online messaging penetration in untapped geographies as more messaging platforms emerge in the market.
There are two kinds of interoperability: vertical interoperability, i.e., interoperability of services across complementary platforms, and horizontal interoperability, i.e., interoperability of services between competing platforms. While vertical interoperability exists in the form of cloud systems, multiple-system login, etc., horizontal interoperability is yet to be experimented with at the market level. Nonetheless, recognising the competition concerns in the digital platforms market, the European Union (the Digital Services Act[9], the European Electronic Communications Code[10], etc.), the US (Stigler Committee Report[11]) and the UK Competition and Markets Authority[12] are mulling a move towards interoperability among digital platforms. Furthermore, Facebook has already commissioned efforts towards horizontal interoperability[13] among its own messaging platforms, i.e., Messenger, WhatsApp and Instagram direct messages. This again adds to the competition concerns, as a single platform uses interoperability in its own favour.
Besides, one of the bottlenecks to enabling horizontal interoperability is the lack of technical interoperability, i.e., the ability to accept or transfer data, perform a task, etc., across platforms. In the case of messaging platforms, the lack of technical interoperability stems from different platforms operating with different technical procedures. Therefore, to have effective horizontal interoperability and portability, it is crucial to streamline technical procedures and to have guidelines that enable technical interoperability. In the following section, I discuss the enablers, legislative reimagining, and structural changes required in terms of technology to enable interoperability and portability among messaging platforms.
2. Message Gateway Entity
2.1. Formation of Message Gateway Entity to Enable Interoperability
To drive efficacious interoperability, it is imperative to form message gateway entities as for-profit bodies overseen by a regulator (either an existing one such as TRAI or a newly established one). The three key functions of message gateway entities should be to: (a) maintain the standard messaging format prescribed by a standard-setting council, (b) provide a responsive message delivery system to messaging platforms, and (c) deliver messages from one messaging platform to another seamlessly and in real time. There have to be multiple message gateway entities to enable competition, which will bring more innovation, penetration, and effectiveness. It is also prudent to have private players as message gateway entities: government-led gateways would raise questions of efficacy and could bring in tender-style business, which is problematic because the government could then have a say in how, and to whom, the service is provided (gatekeeping). The government should set up such an entity itself only if it is a public good (a missing market), which is unlikely to be the case for message gateway entities.
Messaging platforms should be mandated through legislation or executive order to be members of at least one message gateway entity in order to provide interoperability benefits to their users. At the same time, messaging platforms can continue to handle internal message delivery (User A to User B within the same platform) themselves.
While message gateway entities will enable interoperability between messaging platforms, it is crucial that the gateways themselves are interoperable with one another even as they compete in the market. For instance, a user on a messaging platform enrolled with gateway A should be able to send messages to a user on a messaging platform enrolled with gateway B. As competition among message gateway entities develops, the enrolment price should also become commensurate and affordable for small and new messaging platforms. In addition, to increase adoption of interoperability, message gateway entities should develop awareness programmes at the user level.
Further, the regulatory guidelines for message gateway entities (governed by the regulator) must be uniform, with leeway for gateways to innovate technologically to attract messaging platforms. Borrowing from various existing legislations, the aspects suggested below should inform the uniform guidelines:
- End-to-end encryption: As part of the uniform guidelines, message gateway entities should be mandated to provide end-to-end encryption for message delivery. In contrast, the recent Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021[14] try to break end-to-end encryption by mandating significant social media intermediaries to identify the first originator of a particular message (Part II, Rule 4(2)) when sought through an order. As this mandate impinges upon user privacy and free speech, the Indian government should revise this rule to keep end-to-end encryption intact. WhatsApp (a significant social media intermediary) has moved the Delhi High Court to block the implementation of the rules, which came into force in late May 2021[15]. Rule 4(2) of the IT Rules 2021 also contradicts provisions of the PDP Bill 2019, such as privacy by design[16] (Section 22) and the right to be forgotten (Section 20).
- Neutrality: The guidelines should have a strict rule enforcing non-discrimination (similar to the Indian Government's 2018 net neutrality principles[17]) in the delivery of messages by message gateway entities. Discrimination against both messaging platforms and other message gateway entities has to be scrutinised. In addition, to hold message gateway entities accountable, the guidelines should mandate monthly public disclosure of information on message deliveries and failures (at the messaging platform level, with information on which gateway entity they are routed through) in a prescribed user-friendly format.
- Standard Format Setting: As various messaging platforms follow different formats for providing messaging services, seamless interoperability requires message gateway entities to adhere to a standard format that is compatible with the formats followed within the market (a minimal sketch of what such a standard message envelope could look like appears after this list). This standard format has to keep up with technological evolution in this space and should be formulated by an independent standard-setting council (through stakeholder consultation) commissioned by the regulator. The maintenance of this standard format falls within the ambit of message gateway entities and should be governed by the regulator.
- Uniform identification information: Users of messaging platforms identify other users through various means; for instance, on WhatsApp we use the telephone number, whereas on Instagram we use the profile name. The unique identification information (UII) of a user (which can be something existing, like a phone number, or a new dedicated identification number) therefore has to be standardised. Message gateway entities should facilitate this process for messaging platforms, and the generation of the UII should be seamless for the user. A user's unique identification information should also serve as an additional way to search for other users within a messaging platform and would be crucial for messaging across platforms.
- Consumer choice: While interoperability should be the default option for all users, there has to be a user-friendly way to opt out for users who wish to compartmentalise different kinds of messages depending on the platform used. The unique identification information (in the case of a new dedicated number) of a user who has opted out must be retired to avoid misuse. One of the major reasons users may opt out of interoperability is to keep their various digital spheres (personal, leisure, professional, etc.) separate. To address this, messaging platforms should enable options such as (a) optional notifications for cross-platform messages with a snooze option, so that the user can decide whether a cross-platform message should reach the enrolled messaging platform at a given time, and (b) an "opt out from messaging platform" setting that lets users disable messages from a list of platforms. Users might also choose to opt out due to a lack of trust. This has to be tackled both by message gateway entities, by creating awareness among users of their rights, and by messaging platforms, by providing user-friendly privacy policies.
- Data Protection: As the emergence of message gateway entities creates a new data flow, this flow has to follow a data minimisation approach. Message gateway entities should be recognised as data processors (those who process data on behalf of a data fiduciary, i.e., the messaging platforms). They should adhere to the upcoming Personal Data Protection regime[18] to protect data principals' personal data and collect personal data as per the proportionality principle. Message gateway entities should not collect any non-personal data or process any form of data to infer the behavioural traits of data principals or messaging platforms. In addition, the name of the message gateway entity enrolled by the messaging platform, and the data collected and processed by that entity, should be disclosed to data principals through the messaging platform's privacy policy.
- Licensing: There should be a certain level of restriction on licensing to create a level playing field. Applicants for message gateway entities should not have an economic interest in any messaging platform or social media intermediary. Applicants have to ensure that message delivery failures remain at a low level (of the order of 1-2%). Besides, to ensure low levels of delivery failure, data protection compliance and other requirements, message gateway entities have to go through technical and regulatory sandbox testing before a licence is issued.
- Consumer Protection: Users should be given a choice to block another user (using unique identification information) for various reasons such as personal, non-personal, phishing, etc. After a stipulated number of blocks by multiple users, the suspected user should be denied access (temporarily or permanently, depending on the reasons) to message gateway entities. Before denying access, the message gateway entities should direct the messaging platforms to notify the user. There has to be a robust grievance redressal mechanism for users and messaging platforms to raise complaints regarding blocking, data protection, phishing, etc. Unique identification information should also be leveraged to prevent bot accounts and impostors. In addition, message gateway entities should be compatible with measures taken by messaging platforms to prevent the spread of disinformation and misinformation (such as restrictions on the number of recipients of forwarded messages).
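To make the idea of a standard message format and gateway routing more concrete, here is a minimal, illustrative sketch in Python. The structure and field names of MessageEnvelope, the MessageGateway class, and the platform names ("AlphaChat", "BetaTalk") are assumptions for illustration only; the actual format would be defined by the standard-setting council.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class MessageEnvelope:
    """Hypothetical standard message format agreed by the standard-setting council."""
    sender_uii: str         # uniform identification information of the sender
    recipient_uii: str      # uniform identification information of the recipient
    sender_platform: str    # platform the sender is enrolled on
    recipient_platform: str
    ciphertext: str         # end-to-end encrypted payload; the gateway never sees plaintext
    sent_at: str            # ISO-8601 timestamp

class MessageGateway:
    """Toy gateway: accepts envelopes and hands them to the recipient's platform."""
    def __init__(self, name: str):
        self.name = name
        self.platforms = {}  # platform name -> delivery callback

    def enrol(self, platform_name: str, deliver):
        self.platforms[platform_name] = deliver

    def route(self, envelope: MessageEnvelope) -> bool:
        deliver = self.platforms.get(envelope.recipient_platform)
        if deliver is None:
            return False  # recipient's platform not enrolled with this gateway
        deliver(json.dumps(asdict(envelope)))
        return True

# Example: an "AlphaChat" user sends a cross-platform message to a "BetaTalk" user.
gateway = MessageGateway("gateway-A")
gateway.enrol("BetaTalk", deliver=lambda msg: print("BetaTalk received:", msg))
ok = gateway.route(MessageEnvelope(
    sender_uii="UII-123", recipient_uii="UII-456",
    sender_platform="AlphaChat", recipient_platform="BetaTalk",
    ciphertext="<encrypted blob>", sent_at="2021-08-01T10:00:00Z"))
print("delivered:", ok)
```

The key design point the sketch captures is that the gateway only routes an opaque, encrypted payload between platforms that have enrolled with it; it does not need to read message content to perform its function.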
The figure below showcases the use case of the message exchange with the introduction of message gateway entities.
Source: Author’s own illustration of the process of interoperability
2.2. Portability Feature to Complement Interoperability
In the case of messaging platforms, when we talk about portability, it is essential to distinguish two kinds: (a) portability of the user's unique identification information from one platform to another, seamlessly; and (b) portability of the user's data from one platform to another, following the portability of the unique identification information. As the generation of unique identification information is facilitated by message gateway entities, its portability has to be handled by the respective message gateway entity. Adopting some of the processes and protocols from Mobile Number Portability[19], mandated by the Telecom Regulatory Authority of India, the standard-setting council for message gateway entities (discussed above) should streamline the unique identification information portability process across message gateway entities.
Following the porting of the unique identification information, the message gateway entity should trigger a notification to the messaging platform (on behalf of the user) to transfer the user's data to the requested platform. As contemplated in Chapter V, Section 19(1)(b) of The Personal Data Protection Bill, 2019, messaging platforms should transfer the user's data to the platform notified by the message gateway entity in the suggested or a compatible format.
Globally, since the emergence of the General Data Protection Regulation (GDPR) and other legislation mandating data portability, platforms launched the Data Transfer Project (DTP)[20] in 2018 to create a uniform format for porting data. There are three components to the DTP, of which two are crucial: Data Models and Company Specific Adapters. A Data Model is a set of common formats established to enable portability; in the case of messaging platforms, the standard-setting council can come up with the Data Model.
Under Company Specific Adapters, there are Data Adapters and Authentication Adapters. The Data Adapter converts the exporting platform's data format into the Data Model and then into the importing platform's data format. The Authentication Adapter enables users to provide consent for the data transfer. While Company Specific Adapters under the DTP are designed broadly for digital platforms, adopting the same framework, message gateway entities can act as both Data Adapters and Authentication Adapters to enable user data portability among messaging platforms. Message gateway entities can help enrolled messaging platforms with format conversion for data portability and support users' authentication process using the unique identification information. Besides, as message gateway entities are already uniform and interoperable, cross-transfer across message gateway entities can also be made possible.
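The Data Adapter idea can be illustrated with a short, hedged sketch: the exporter's records are mapped into a common Data Model and then out into the importer's format. The "AlphaChat" and "BetaTalk" formats and the CommonMessageRecord fields below are invented for illustration; they are not the DTP's actual data models.

```python
from dataclasses import dataclass

@dataclass
class CommonMessageRecord:
    """Hypothetical Data Model entry for one archived message."""
    sender_uii: str
    recipient_uii: str
    body: str
    timestamp: str

# Illustrative, made-up export format of an "AlphaChat"-style platform.
def from_alphachat(record: dict) -> CommonMessageRecord:
    return CommonMessageRecord(
        sender_uii=record["from"],
        recipient_uii=record["to"],
        body=record["text"],
        timestamp=record["ts"],
    )

# Illustrative, made-up import format of a "BetaTalk"-style platform.
def to_betatalk(record: CommonMessageRecord) -> dict:
    return {
        "sender": record.sender_uii,
        "receiver": record.recipient_uii,
        "message": record.body,
        "sent_at": record.timestamp,
    }

# The gateway, acting as a Data Adapter, pipes exported data through the
# common Data Model and into the importer's format.
exported = [{"from": "UII-123", "to": "UII-456", "text": "hello", "ts": "2021-08-01T10:00:00Z"}]
imported = [to_betatalk(from_alphachat(r)) for r in exported]
print(imported)
```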
3. Profitability of Message Gateway Entities
As the message gateway entities would operate as for-profits, they may charge messaging platforms a one-time enrolment fee for membership, through which the member (messaging platform) can avail itself of interoperability and portability services. The enrolment fee should be a capital cost that compensates the message gateway entities for enabling technical interoperability. In addition, message gateway entities may levy a minimal yearly fee to maintain the system, customer (messaging platform) service and the grievance portal (for both users and messaging platforms). Besides, for updates (as per new standards) or upgrades of the system, message gateway entities may charge an additional fee to the member messaging platforms.
On the other hand, messaging platforms do not charge[21] users a monetary fee for their service, because the marginal cost of providing it is near zero while the costs they incur are largely fixed. Besides, nothing is free in the platform economy: we pay messaging platforms in the form of our personal and non-personal (behavioural) data, which they sell to advertisers[22].
Therefore, messaging platforms can treat the fee paid to message gateway entities as part of their fixed costs, so that they can continue not charging users (monetarily) for the service, as the cost per user would still be very low. Messaging platforms also have economic incentives to provide interoperability, as it could reduce multi-homing (i.e., when some users join or use multiple platforms simultaneously).
4. Conclusion
While breaking up Facebook and other large social media or messaging platforms could bring a level playing field, the process could consume a large amount of resources and time. Irrespective of a break-up, in the absence of interoperability and portability, the network effect will favour a few platforms due to high switching costs, which in turn create a high barrier to entry.
When we text users via Short Message Service (SMS), we do not think about which carrier the recipient uses. Likewise, messaging across messaging platforms should become platform-neutral through the adoption of interoperability and portability. Interoperability and portability will also bring in healthy competition, which would act as a lever to enhance user choice and privacy.
This also opens up questions for future research on the demand side. We need to explore the causal effect of interoperability and portability on users, to understand whether they will actually switch platforms when given the option to port and interoperate.
This article has been edited by Arindrajit Basu, Pallavi Bedi, Vipul Kharbanda and Aman Nair.
The author is a tech policy enthusiast. He is currently pursuing PGP in Public Policy from the Takshashila Institution. Views are personal and do not represent any organisations. The author can be reached at [email protected]
Footnotes
[1] Rodrigo, C. M., & Klar, R. (2020). 46 states and FTC file antitrust lawsuits against Facebook. Retrieved from The Hill: https://thehill.com/policy/technology/529504-state-ags-ftc-sue-facebook-alleging-anti-competitive-practices
[2] Is Facebook a monopolist? (2021). Retrieved from The Economist: https://www.economist.com/business/2021/07/03/is-facebook-a-monopolist
[3] Hughes, C. (2019). It’s Time to Break Up Facebook. Retrieved from The New York Times: https://www.nytimes.com/2019/05/09/opinion/sunday/chris-hughes-facebook-zuckerberg.html
[4] Shekar, K. (2021). An Elephant in the Room – Recent Case of WhatsApp Fallout Amongst Indian Users. Retrieved from Takshashila Institution: https://takshashila.org.in/an-elephant-in-the-room-recent-case-of-whatsapp-fallout-amongst-indian-users/
[5] Manur, A. (2018). How to Regulate Internet Platforms Without Breaking Them. Retrieved from AsiaGlobal Online: https://www.asiaglobalonline.hku.hk/regulate-internet-platforms-antitrust-competition/
[6] Ibid
[7] Nègre, A. (2021). How Can Funders Promote Interoperable Payments? Retrieved from CGAP Blog: https://www.cgap.org/blog/how-can-funders-promote-interoperable-payments;
Cook, W. (2017). Rules of the Road: Interoperability and Governance. Retrieved from CGAP Blog: https://www.cgap.org/blog/rules-road-interoperability-and-governance
[8] Punjabi, A., & Ojha, S. (n.d.). PPI Interoperability: A roadmap to seamless payments infrastructure. Retrieved from PWC: https://www.pwc.in/consulting/financial-services/fintech/payments/ppi-interoperability.html
[9] Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act). (n.d.). Retrieved from European Union: https://eur-lex.europa.eu/legal-content/en/TXT/?qid=1608117147218&uri=COM%3A2020%3A825%3AFIN
[10] European Electronic Communications Code (EECC). (n.d.). Retrieved from https://www.gov.ie/en/publication/339a9-european-electronic-communications-code-eecc/
[11] Stigler Committee on Digital Platforms: Final Report. (n.d.). Retrieved from Chicago Booth: https://www.chicagobooth.edu/research/stigler/news-and-media/committee-on-digital-platforms-final-report
[12] Brown, I. (n.d.). Interoperability as a tool for competition regulation. CyberBRICS.
[13] Facebook is hard at work to merge its family of messaging apps: Zuckerberg. (2020). Retrieved from Business Standard: https://www.business-standard.com/article/companies/facebook-is-hard-at-work-to-merge-its-family-of-messaging-apps-zuckerberg-120103000470_1.html
[14] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021. (n.d.). Retrieved from: https://www.meity.gov.in/writereaddata/files/Intermediary_Guidelines_and_Digital_Media_Ethics_Code_Rules-2021.pdf
[15] Menn, Joseph. 2021. "WhatsApp sues Indian government over new privacy rules - sources." Reuters. Retrieved from: https://www.reuters.com/world/india/exclusive-whatsapp-sues-india-govt-says-new-media-rules-mean-end-privacy-sources-2021-05-26/
[16] Raghavan, M. (2021). India’s New Intermediary & Digital Media Rules: Expanding the Boundaries of Executive Power in Digital Regulation. Retrieved from Future of Privacy Forum: https://fpf.org/blog/indias-new-intermediary-digital-media-rules-expanding-the-boundaries-of-executive-power-in-digital-regulation/
[17] Net Neutrality. (n.d.). Retrieved from Department of Telecommunications: https://dot.gov.in/net-neutrality;
Parsheera, S. (n.d.). Net Neutrality In India: From Rules To Enforcement. Retrieved from Medianama: https://www.medianama.com/2020/05/223-net-neutrality-india-rules-enforcement/
[18] The Personal Data Protection Bill, 2019. (n.d.). Retrieved from: http://164.100.47.4/BillsTexts/LSBillTexts/Asintroduced/373_2019_LS_Eng.pdf
[19] Consultation Paper on Review of Interconnection Usage Charges, 2019. TRAI.
Mobile Number Portability. (n.d.). Retrieved from TRAI: https://www.trai.gov.in/faqcategory/mobile-number-portability
[20] Data Transfer Project. (2018). Retrieved from https://datatransferproject.dev
[21] Aulakh, G. (n.d.). How messaging apps like WhatsApp, WeChat can make money while offering free texting and calling. Retrieved from Economic Times: https://economictimes.indiatimes.com/tech/software/how-messaging-apps-like-whatsapp-wechat-can-make-money-while-offering-free-texting-and-calling/articleshow/62666227.cms
[22] (2019). Report of the Competition Law Review Committee. Ministry of Corporate Affairs.
The Ministry And The Trace: Subverting End-To-End Encryption
The paper was published in the NUJS Law Review Volume 14 Issue 2 (2021).
Abstract
End-to-end encrypted messaging allows individuals to hold confidential conversations free from the interference of states and private corporations. To aid surveillance and prosecution of crimes, the Indian Government has mandated online messaging providers to enable identification of originators of messages that traverse their platforms. This paper establishes how the different ways in which this ‘traceability’ mandate can be implemented (dropping end-to-end encryption, hashing messages, and attaching originator information to messages) come with serious costs to usability, security and privacy. Through a legal and constitutional analysis, we contend that traceability exceeds the scope of delegated legislation under the Information Technology Act, and is at odds with the fundamental right to privacy.
Click here to read the full paper.
Media Market Risk Ratings: India
Introduction
The harms of disinformation are proliferating around the globe—threatening our elections, our health, and our shared sense of facts.
The infodemic laid bare by COVID-19 conspiracy theories clearly shows that disinformation costs people's lives. Websites masquerading as news outlets are driving the situation and profiting financially from it.
The goal of the Global Disinformation Index (GDI) is to cut off the revenue streams that incentivise and sustain the spread of disinformation. Using both artificial and human intelligence, the GDI has created an assessment framework to rate the disinformation risk of news domains.
The GDI risk rating provides advertisers, ad tech companies and platforms with greater information about a range of disinformation flags related to a site’s content (i.e. reliability of content), operations (i.e. operational and editorial integrity) and context (i.e. perceptions of brand trust). The findings in this report are based on the human review of these three pillars: Content, Operations, and Context.
A site’s disinformation risk level is based on that site’s aggregated score across all of the reviewed pillars and indicators. A site’s overall score ranges from zero (maximum risk level) to 100 (minimum risk level). Each indicator that is included in the framework is scored from zero to 100. The output of the index is therefore the site’s overall disinformation risk level, rather than the truthfulness or journalistic quality of the site.
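As a rough illustration of this kind of aggregation (not GDI's actual methodology: the indicator names, weights and risk thresholds below are invented and assume equal weighting), a simple roll-up of indicator scores into pillar scores and an overall score might look like this:

```python
# Hypothetical indicator scores (0-100) grouped by pillar; equal weights are an
# assumption for illustration, not GDI's weighting scheme.
site_scores = {
    "content":    {"headline_accuracy": 70, "byline_information": 55},
    "operations": {"ownership_transparency": 40, "error_correction_policy": 30},
    "context":    {"brand_trust": 65},
}

def pillar_score(indicators: dict) -> float:
    # Average the indicator scores within one pillar.
    return sum(indicators.values()) / len(indicators)

def overall_score(pillars: dict) -> float:
    # Average the pillar scores into a single 0-100 site score.
    return sum(pillar_score(ind) for ind in pillars.values()) / len(pillars)

def risk_band(score: float) -> str:
    # Illustrative thresholds only; GDI defines its own bands.
    if score >= 80: return "minimum risk"
    if score >= 60: return "low risk"
    if score >= 40: return "medium risk"
    if score >= 20: return "high risk"
    return "maximum risk"

score = overall_score(site_scores)
print(round(score, 1), risk_band(score))
```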
Key Findings
In reviewing the media landscape for India, the assessment found that:
Nearly a third of the sites in our sample had a high risk of disinforming their online users.
- Eighteen sites were found to have a high disinformation risk rating. This group includes sites published in all three languages in our scope: English, Hindi and Bengali.
- Around half of the websites in our sample had a ‘medium’ risk rating. No site performed exceptionally on all fronts, resulting in no sites having a minimum risk rating. On the other hand, no site performed so poorly as to earn a maximum risk rating.
Only a limited number of Indian sites present low levels of disinformation risks.
- No website was rated as having a ‘minimum’ disinformation risk.
- Eight sites were rated as having a ‘low’ level of disinformation risk. Seven of these websites served content primarily in English and one in Hindi.
The media sites assessed in India tend to perform very poorly on publishing transparent operational checks and balances.
- Over one-third of the sites in our sample published little information about their ownership structure, and also failed to be transparent about their revenue sources.
- Only ten of the sites in our sample publish any information about their policies on how they correct errors in their reporting.
Association with traditional media was not a significant factor in determining disinformation risk.
- On average, websites associated with TV or print did not perform any differently when compared to websites that solely serve digital content.
The findings show that on the whole, Indian websites can substantially increase their trustworthiness by taking measures to address these shortfalls in their operational checks and balances. For example, they could increase transparency on the structure of their businesses and have clear policies on how they address errors in their reporting. Both of these measures are in line with universal standards of good journalistic practices, as agreed by the Journalism Trust Initiative.
Click to download the full report here. To read the report in Hindi, click here. The authors extend their thanks to Anna Liz Thomas, Sanah Javed, Sagnik Chatterjee, and Raghav Ahooja for their assistance.
Health IDs: Voluntary or Mandatory?
In January 2021, the Health Ministry officially allowed Aadhaar-based authentication when creating a UHID for identification and authentication of beneficiaries for various health IT applications promoted by the Ministry. This enabled the Co-Win portal, which is used to book COVID-19 vaccination appointments, to accept Aadhaar for authentication. As per Clause 2a of Co-Win’s privacy policy, “If you choose to use Aadhaar for vaccination, you may also choose to get a Unique Health ID (UHID) created for yourself.” The privacy policy stresses the voluntary nature of this process by stating that “This feature is purely optional.”
However, multiple media reports have mentioned that beneficiaries who enrolled in the COVID-19 vaccination programme using their Aadhaar number have had UHIDs created without either their specific consent being obtained or an option to opt out being given. This is concerning, as it has been done based on the data entered by citizens and is linked to their Aadhaar, despite clarifications from the Government that Aadhaar is not mandatory for getting a UHID. It is also pertinent to note that the Co-Win website did not have a privacy policy until it was directed to publish one by the Delhi High Court on 2 June 2021, almost three months after registration on Co-Win was made mandatory.
As per the NDHM, UHIDs have been rolled out on a pilot basis in the six union territories of India. They will be rolled out across the country in subsequent phases. However, as per newspaper reports, several people who had registered for the COVID-19 vaccine on the Co-Win website using their Aadhaar numbers received a UHID number on their COVID-19 vaccine certificates. This is not limited to the six union territories – UHID numbers have been generated for beneficiaries who had registered using their Aadhaar numbers across the country, without citizens having any choice in opting into the project. It appears that the UHID pilot project has been silently expanded across the country without any official announcement being made in this regard.
As per the Health Data Policy, UHIDs are to be generated on a voluntary basis after obtaining the consent of the beneficiary. However, at the time of registering on the Co-Win portal or at vaccination centres, no separate forms were shared with beneficiaries to obtain their consent to generate UHIDs. This is contrary to the provisions of the Health Data Policy, which clearly states that the consent of the user must be obtained for the processing of personal data. Clause 9.2 of the Health Data Policy states that consent of the “data principal will be considered valid only if it is (c) specific, where the data principal can give consent for the processing of personal data for a particular purpose; (d) clearly given; and (e) capable of being withdrawn.” Beneficiaries are also not informed of their right to deactivate the UHID and reactivate it later if required, as provided under Clause 15.8 of the Health Data Policy.
Interestingly, if a person in any of the six union territories tries to self-register for a UHID, they are directed to a page seeking their consent. The consent form states,
“I understand that my Health ID can be used and shared for purposes as may be notified by NDHM from time to time including provision of healthcare services. Further, I am aware that my personal identifiable information (Name, Address, Age, Date of Birth, Gender and Photograph) may be made available to the entities working in the National Digital Health Ecosystem (NDHE) … I am aware that my personal identifiable information can be used and shared for purposes as mentioned above. I reserve the right to revoke the given consent at any point of time.”
However, this information/consent form is not shared with beneficiaries who receive UHIDs when they register on Co-Win using their Aadhaar number. As per newspaper reports, several of these people are also completely unaware of the purposes of a UHID.
Absence of a data protection law and governance structure contemplated under the Health Data Policy
The entire digital health ecosystem is currently operating in the absence of any data protection law and the governance structure proposed under the Health Data Policy.
The Supreme Court of India, in Justice K. S. Puttaswamy (Retd) vs Union of India, held that the confidentiality and privacy of medical data is a fundamental right under Article 21 of the Constitution. Any action that negates the fundamental right to privacy needs to satisfy three conditions, namely (i) the existence of a law; (ii) a legitimate state aim; and (iii) proportionality.
The first condition is that the action should be permissible under a law passed by Parliament. This was also recognised by the Supreme Court in the 2018 Aadhaar judgement, where the court, while deciding on the validity of Aadhaar, noted that “A valid law in this case would mean a law passed by Parliament, which is just, fair and reasonable. Any encroachment upon the fundamental right cannot be sustained by an executive notification.”
The Health Data Policy fails this condition: it is a policy, not a law, and a policy is not a substitute for a law. For the collection of personal data, it is imperative that a data protection law be enacted at the earliest. Alternatively, or in addition, a comprehensive separate legislation should be enacted to regulate the digital health ecosystem.
It is also pertinent to note that the Health Data Policy provides for the creation of a data protection officer as well as a grievance redressal officer. Neither of these offices has been instituted so far. In other words, UHIDs are being issued without the governance structure prescribed by the Health Data Policy being in place.
Conclusion
The need for strong data protection legislation to protect users’ health data has been recognised across different jurisdictions and has also been emphasised by various international organisations. In 2006, the World Health Organization recommended that governments enact a robust data protection legislation before digitising the health sector.
The health identity project has been launched and UHIDs are being issued as part of the COVID-19 vaccination process in different parts of India, even though initial steps such as enacting data protection legislation and creating a robust digital ecosystem have either not been concluded or not yet been undertaken. Hasty implementation without adequate safeguards and preparation not only risks the privacy and security of medical data, it may also undermine general trust in the system, leading to low uptake.
CIS Seminar Series: Information Disorder
The CIS seminar series will be a venue for researchers to share works-in-progress, exchange ideas, identify avenues for collaboration, and curate research. We also seek to mitigate the impact of Covid-19 on research exchange, and foster collaborations among researchers and academics from diverse geographies. Every quarter we will be hosting a remote seminar with presentations, discussions and debate on a thematic area.
Seminar format
We are happy to welcome abstracts for one of two tracks:
Working paper presentation
A working paper presentation would ideally involve a working draft that is presented for about 15 minutes followed by feedback from workshop participants. Abstracts for this track should be 600-800 words in length with clear research questions, methodology, and questions for discussion at the seminar. Ideally, for this track, authors should be able to submit a draft paper two weeks before the conference for circulation to participants.
Coffee-shop conversations
In contrast to the formal paper presentation format, the point of the coffee-shop conversations is to enable an informal space for presentation and discussion of ideas. Simply put, it is an opportunity for researchers to “think out loud” and get feedback on future research agendas. Provocations for this should be 100-150 words containing a short description of the idea you want to discuss.
We will try to accommodate as many abstracts as possible given time constraints. We welcome submissions from students and early career researchers, especially those from under-represented communities.
All discussions will be private and conducted under the Chatham House Rule. Drafts will only be circulated among registered participants.
Please send all abstracts to [email protected].
Theme for the first seminar (to be held on an online platform)
The first seminar will be centered around the theme of ‘Information Disorder: Mis-, Dis- and Malinformation.’ While the issue of information disorder, colloquially termed ‘fake news’, has been at the political forefront for the last five years, the flawed attempts at countering the ‘infodemic’ brought about by the pandemic show that there continue to be substantial gaps in the body of knowledge on this issue. This includes research that proposes empirical, replicable methods of understanding the types, forms or nature of information disorder, and research that attempts to understand regulatory approaches, the layers of production, and the roles played by different agents in the spread of ‘fake news’.
Accordingly, we invite submissions that address these gaps in knowledge, including those that examine the relationship between digital technology and information disorder across a spectrum of fields and disciplines. Areas of interest include but are not limited to:
- Information disorders during COVID-19
- Effects of coordinated campaigns on marginalised communities
- Journalism, the State, and the trust in media
- Platform responsibility in information disorder
- Information disorder in international law/constitutional/human rights law
- Information disorder as a geopolitical tool
- Sociopolitical and cultural factors in user engagement
Timeline
- Abstract Submission Deadline: August 25th
- Results of Abstract review: September 8th
- Full submissions (of draft papers): September 30th
- Seminar date: Tentatively October 7th
Contact details
For any queries please contact us at [email protected].
Comments on proposed amendments to the Consumer Protection (E-Commerce) Rules, 2020
The Consumer Protection (E-commerce) Rules, 2020 were first introduced to ensure that consumers were granted adequate protections and to prevent the adoption of unfair trade practices by E-commerce entities. The proposed amendments include several rules that will protect consumers, such as restrictions on misleading advertisements and the appointment of grievance officers based in India. However, the proposed rules also create hurdles for the operations of E-commerce, reducing the ease of doing business and increasing the costs of operations, especially for smaller players; these costs could eventually be passed on to consumers.
In our submission to the Ministry of Consumer Affairs, we focussed our analysis on eight points: Definitions and Registration, Compliance, Data Protection and Surveillance, Flash Sales, Unfair Trade Practices, Jurisdictional Issues with Competition Law, Compliance with International Trade Law and Liabilities of Marketplace E-commerce Entities.
A snapshot of our recommendations and analysis is listed out below. To read our full submission, please click here.
Definitions and Registrations
The registration of entities with the DPIIT must be made as smooth as possible especially considering the wide definition of E-commerce entities in the rules, which may include smaller businesses as well. In particular, we suggested doing away with physical office visits.
Compliance
As a general observation, compliance obligations should be differentiated based on the size of the entity and the volume of transactions rather than adopting a ‘one size fits all’ approach which may harm smaller businesses, especially those that are just starting up. Before these rules come into force, further consultations with small and medium-sized business enterprises would be vital in ensuring that the regulation is in line with their needs and does not hamper their growth. Excessive compliance requirements may end up playing into the hands of the largest players as they would have larger financial coffers and institutional mechanisms to comply with these obligations.
There is some confusion in the law as to whether the Chief Compliance officer mentioned in the amended rules is the same as the “nodal person of contact or an alternate senior designated functionary who is resident of India” under Rule 5(1).
The safe harbour should therefore refer to due diligence by the CCO and not the e-commerce entity itself. The requirement for the compliance officer to be an Indian citizen who is a resident and a senior officer or managerial employee may place an undue burden on small E-commerce players not located in India.
Data Protection and Surveillance
In the absence of a Personal Data Protection Bill, these rules neither adequately protect consumers' personal data nor reduce the powers given to the Central Government to access data or conduct surveillance.
Flash Sales
Conventional flash sales should be defined, and a clear distinction must be made between conventional flash sales and fraudulent flash sales. The definition should not be limited to the interception of business “using technological means”, which narrows the scope of fraudulent flash sales. Further parameters must be provided for when a flash sale will be considered fraudulent.
Unfair Trade Practices
The rules restrict marketplace E-commerce entities from selling their own goods or services or from listing related enterprises as sellers on their platforms. No such restriction applies to brick and mortar stores, and this blanket ban must be rethought.
Jurisdictional Issues with Competition Law
This rule brings the issue of ‘abuse of dominant position’ under the fora of the Consumer Protection Authority or the Consumer Disputes Redressal Commissions. Overlapping jurisdiction of this nature could introduce regulatory delays into the dispute resolution process and can be a source of tension for the parties and regulatory authorities. While the intention behind importing a competition law concept such as “abuse of dominant position” into consumer protection regulations may be understandable, such a step might be effective in jurisdictions which have a common regulatory authority for both competition law and consumer protection issues, such as Australia, Finland, Ireland and the Netherlands. In a country such as India, which has completely separate regulatory mechanisms for competition and consumer law issues, such a provision may lead to logistical difficulties.
Compliance with International Trade Law
A robust framework on ranking with transparent disclosure of parameters for the same would also go a long way towards addressing concerns with discrimination and national treatment under WTO law. Further, the obligation to provide domestic alternatives should be clarified and amended to ensure that it does not cause uncertainty and open India up to a national treatment challenge at the WTO.
Liabilities of Marketplace E-commerce Entities
Fallback liability is an essential component of consumer protection in the E-commerce space. However, as currently envisioned, there is a lack of clarity surrounding the extent to which fallback liability applies to E-commerce entities, as well as the exemptions to this liability. We have recommended alternative approaches adopted in other jurisdictions, which include:
- Liability through negligence
- Liability as an exemption to safe harbour
Do We Really Need an App for That? Examining the Utility and Privacy Implications of India’s Digital Vaccine Certificates
This blogpost was edited by Gurshabad Grover, Yesha Tshering Paul, and Amber Sinha.
It was originally published on Digital Identities: Design and Uses and is cross-posted here.
In an experiment to streamline its COVID-19 immunisation drive, India has adopted a centralised vaccine administration system called CoWIN (or COVID Vaccine Intelligence Network). In addition to facilitating registration for both online and walk-in vaccine appointments, the system also allows for the digital verification of vaccine certificates, which it issues to people who have received a dose. This development aligns with a global trend, as many countries have adopted or are in the process of adopting “vaccine passports” to facilitate safe movement of people while resuming commercial activity.
Some places, such as the EU, have constrained the scope of use of their vaccine certificates to international travel. The Indian government, however, has so far skirted important questions around where and when this technology should be used. By allowing anyone to use the online CoWIN portal to scan and verify certificates, and even providing a way for the private-sector to incorporate this functionality into their applications, the government has opened up the possibility of these digital certificates being used, and even mandated, for domestic everyday use such as going to a grocery shop, a crowded venue, or a workplace.
In this blog post, we examine the purported benefits of digital vaccine certificates over regular paper-based ones, analyse the privacy implications of their use, and present recommendations to make them more privacy respecting. We hope that such an analysis can help inform policy on appropriate use of this technology and improve its privacy properties in cases where its use is warranted.
We also note that while this post only examines the merits of a technological solution put out by the government, it is more important to consider the effects that placing restrictions on the movement of unvaccinated people has on their civil liberties in the face of a vaccine rollout that is inequitable along many lines, including gender, caste-class, and access to technology.
How do digital vaccine certificates work?
Every vaccine recipient in the country is required to be registered on the CoWIN platform using one of seven existing identity documents. [1] Once a vaccine is administered, CoWIN generates a vaccine certificate which the recipient can access on the CoWIN website. The certificate is a single page document that contains the recipient’s personal information — their name, age, gender, identity document details, unique health ID, a reference ID — and some details about the vaccine given. [2] It also includes a “secure QR code” and a link to CoWIN’s verification portal.
The verification portal allows for the verification of a certificate by scanning the attached QR code. Upon completion, the portal displays a success message along with some of the information printed on the certificate.
Verification is done using a cryptographic mechanism known as digital signatures, which are encoded into the QR code attached to a vaccine certificate. This mechanism allows “offline verification”, which means that the CoWIN verification portal or any private sector app attempting to verify a certificate does not need to contact the CoWIN servers to establish its authenticity. It instead uses a “public key” issued by CoWIN beforehand to verify the digital signature attached to the certificate.
The benefit of this convoluted design is that it protects user privacy. Performing verification offline, without contacting the CoWIN servers, precludes CoWIN from gleaning sensitive metadata about usage of the vaccine certificate. This means that CoWIN does not learn where and when an individual uses their vaccine certificate, or who is verifying it. This closes off a potential avenue for mass surveillance. [3] However, given how certificate revocation checks are being implemented (detailed in the privacy implications section below), CoWIN ends up learning this information anyway.
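To illustrate the general mechanism of offline verification (and not CoWIN's exact signature scheme, certificate format or payload, which are assumptions here), the sketch below uses the Python cryptography library with a generic RSA/SHA-256 signature: the issuer signs the certificate payload once, and any verifier holding the issuer's public key can check it locally without contacting the issuer's servers.

```python
import json
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

# --- Issuer side (done once, when the certificate is generated) ---
issuer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
payload = json.dumps({"name": "J. Doe", "dose": 2, "vaccine": "Covishield"}).encode()
signature = issuer_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())

# The public key is distributed to verifiers ahead of time (e.g., bundled in an app).
public_key_pem = issuer_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)

# --- Verifier side (fully offline: no call to the issuer's servers) ---
public_key = serialization.load_pem_public_key(public_key_pem)
try:
    public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
    print("certificate signature valid")
except InvalidSignature:
    print("certificate signature invalid")
```

In a deployed system the payload and signature would be packed into the QR code, so scanning the code gives the verifier everything needed for a local check.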
Where is digital verification useful?
The primary argument for the adoption of digital verification of vaccine certificates over visual examination of regular paper-based ones is security. In the face of vaccine hesitancy, there are concerns that people may forge vaccine certificates to get around any restrictions that may be put in place on the movement of unvaccinated people. The use of digital signatures serves to allay these fears.
In its current form, however, digital verification of vaccine certificates is no more secure than visually inspecting paper-based ones. While the “secure QR code” attached to digital certificates can be used to verify the authenticity of the certificate itself, the CoWIN verification portal does not provide any mechanism nor does it instruct verifiers to authenticate the identity of the person presenting the certificate. This means that unless an accompanying identity document is also checked, an individual can simply present someone else’s certificate.
There are no simple solutions to this limitation; adding a requirement to inspect identity documents in addition to digital verification of the vaccine certificate would not be a strong enough security measure to prevent the use of duplicate vaccine certificates. People who are motivated enough to forge a vaccine certificate can also duplicate one of the seven ID documents which can be used to register on CoWIN, some of which are simple paper-based documents. [4] Requiring even stronger identity checks, such as the use of Aadhaar-based biometrics, would make digital verification of vaccine certificates more secure. However, this would be a wildly disproportionate incursion on user privacy — allowing for the mass collection of metadata like when and where a certificate is used — something that digital vaccine certificates were explicitly designed to prevent. Additionally, in Russia, people were found issuing fake certificates by discarding real vaccine doses instead of administering them. No technological solution can prevent such fraud.
As such, the utility of digital certificates is limited to uses such as international travel, where border control agencies already have strong identity checks in place for travellers. Any everyday usage of the digital verification functionality on vaccine certificates would not present any benefit over visually examining a piece of paper or a screen.
Privacy implications of digital certificates
In addition to providing little security utility over manual inspection of certificates, digital certificates also present privacy issues; these are listed below along with recommendations to mitigate them:
(i) The verification portal leaks sensitive metadata to CoWIN’s servers: An analysis of network requests made by the CoWIN verification portal reveals that it conducts a ‘revocation check’ each time a certificate is verified. This check was also found in the source code, which is made openly available. [5]
Revocation checks are an important security consideration while using digital signatures. They allow the issuing authority (CoWIN, in this case) to revoke a certificate in case the account associated with it is lost or stolen, or if a certificate requires correction. However, the way they have been implemented here presents a significant privacy issue. Sending certificate details to the server on every verification attempt allows it to learn about where and when an individual is using their vaccine certificate.
We note that the revocation check performed by the CoWIN portal does not necessarily mean that it is storing this information. Nevertheless, sending certificate information to the server directly contradicts claims of an “offline verification” process, which is the basis of the design of these digital certificates.
Recommendations: Implementing privacy-respecting revocation checks such as Certificate Revocation Lists, [6] or Range Queries [7] would mitigate this issue. However, these solutions are either complex or present bandwidth and storage tradeoffs for the verifier.
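As a rough sketch of why a Certificate Revocation List is more privacy-respecting (a simplified illustration, not the scheme CoWIN uses; the fetch_revocation_list function and certificate identifiers are hypothetical), the verifier periodically downloads the full list of revoked certificate identifiers and checks membership locally, so individual verification events are never reported to the server:

```python
import time

# Hypothetical revocation list fetched periodically from the issuer; in a real
# deployment this response would itself be signed and could grow large.
def fetch_revocation_list() -> set:
    return {"cert-0007", "cert-0042"}  # stand-in for a network call to the issuer

class OfflineRevocationChecker:
    def __init__(self, refresh_seconds: int = 3600):
        self.refresh_seconds = refresh_seconds
        self.revoked = fetch_revocation_list()
        self.last_refresh = time.time()

    def is_revoked(self, certificate_id: str) -> bool:
        # Refresh the cached list occasionally; the membership test itself is
        # local, so the issuer never learns which certificate was checked.
        if time.time() - self.last_refresh > self.refresh_seconds:
            self.revoked = fetch_revocation_list()
            self.last_refresh = time.time()
        return certificate_id in self.revoked

checker = OfflineRevocationChecker()
print(checker.is_revoked("cert-1234"))  # False: not on the revocation list
print(checker.is_revoked("cert-0042"))  # True: revoked
```

The trade-off, as noted above, is bandwidth and storage on the verifier's side: the whole list must be downloaded and kept reasonably fresh.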
(ii) Oversharing of personally identifiable information: CoWIN’s vaccine certificates include more personally identifiable information (name, age, gender, identity document details and unique health ID) than is required for the purpose of verifying the certificate. An examination of the vaccine certificates available to us revealed that while the Aadhaar number is appropriately masked, other personal identifiers such as the passport number and unique health ID were not. Additionally, the inclusion of demographic details such as age and gender provides little security benefit (it only narrows the pool of duplicate certificates that could be used) and is not required in light of the security analysis above.
Recommendation: Personal identifiers (such as passport number and unique health ID) should be appropriately masked and demographic details (age, gender) can be removed.
The minimal set of data required for identity-linked usage with digital verification, as described above, is a full name and masked ID document details. All other personally identifying information can be removed. In the case of paper-based certificates, which are suggested for domestic usage, the details about vaccine validity alone would suffice; no personal information is required.
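A minimal illustration of such masking (an assumed convention, not how CoWIN formats identifiers) is to retain only the last few characters of an identifier and replace the rest:

```python
def mask_identifier(identifier: str, visible_suffix: int = 4, mask_char: str = "X") -> str:
    """Replace all but the last `visible_suffix` characters with a mask character."""
    if len(identifier) <= visible_suffix:
        return identifier
    return mask_char * (len(identifier) - visible_suffix) + identifier[-visible_suffix:]

print(mask_identifier("M1234567"))           # XXXX4567 (e.g., a passport-style number)
print(mask_identifier("12-3456-7890-1234"))  # masks everything except the last four characters
```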
(iii) Making information available digitally increases the likelihood of collection: All of the personal information printed on the certificate is also encoded into the QR code. This is necessary because the digital signature verification process also verifies the integrity of this information (i.e., that it was not modified). A side effect is that the personal information is made readily available in digital form to verifiers when the code is scanned, making it easy for them to store. This is especially likely with private sector apps, which may be interested in collecting demographic information and personal identifiers to track customer behaviour.
Recommendation: Removing extraneous information from the certificate, as suggested above, mitigates this risk as well.
Conclusion
Our analysis reveals that without incorporating strong, privacy-invasive identity checks, digital verification of vaccine certificates does not provide any security benefit over manually inspecting a piece of paper. The utility of digital verification is limited to purposes that already conduct strong identity checks.
In addition to their limited applicability, in their current form, these digital certificates also generate a trail of data and metadata, giving both government and industry an opportunity to infringe upon the privacy of the individuals using them.
Keeping this in mind, the adoption of this technology should be discouraged for everyday use.
References
[1] Exceptions exist for people without state-issued identity documents.
[2] This information was gathered by inspecting three vaccine certificates linked to the author’s CoWIN account, which they were authorised to view, and may not be fully accurate.
[3] This design is similar to Aadhaar’s “offline KYC” process.
[4] “Aadhaar Card: UIDAI says downloaded versions on ordinary paper, mAadhaar perfectly valid”, Zee Business, April 29 2019, https://www.zeebiz.com/india/news-aadhaar-card-uidai-says-downloaded-versions-on-ordinary-paper-maadhaar-perfectly-valid-96790.
[5] This check was also verified to be present in the reference code made available for private-sector applications incorporating this functionality, suggesting that private sector apps will also be affected by this.
[6] Certificate Revocation Lists allow the server to provide a list of revoked certificates to the verifier, instead of the verifier querying the server each time. This, however, can place heavy bandwidth and storage requirements on the verifying app as this list can potentially grow long.
[7] Range Queries are described in this paper. In this method, the verifier requests revocation status from the server by specifying a range of certificate identifiers within which the certificate being verified lies. If there are any revoked certificates within this range, the server sends their identifiers to the verifier, who can then check if the certificate in question is on the list. For this to work, the range selected must be sufficiently large to include enough potential candidates to keep the server from guessing which certificate is actually in use. (A minimal illustrative sketch of this check follows the reference list.)
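For completeness, here is a minimal sketch of the range-query check described in note [7], again with a hypothetical endpoint and parameters rather than anything CoWIN actually exposes.

```python
import json
import random
import urllib.request

# Hypothetical revocation-status endpoint; illustrative only.
RANGE_QUERY_URL = "https://example.org/certificates/revoked?start={start}&end={end}"

# Width of the queried range: large enough that the server cannot tell
# which certificate in the range is actually being verified.
RANGE_WIDTH = 100_000


def is_revoked_range_query(certificate_id: int) -> bool:
    """Range-query revocation check.

    The verifier asks for all revoked IDs in a wide interval around the
    certificate being checked and tests membership locally, so the
    server only learns the interval, not the exact certificate.
    """
    offset = random.randrange(RANGE_WIDTH)
    start = max(0, certificate_id - offset)
    end = start + RANGE_WIDTH
    url = RANGE_QUERY_URL.format(start=start, end=end)
    with urllib.request.urlopen(url) as response:
        revoked_ids = json.loads(response.read().decode("utf-8"))
    return certificate_id in set(revoked_ids)
```

The width of the range sets the privacy/bandwidth trade-off: wider ranges hide the certificate better but return more revoked identifiers per query.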
Finding Needles in Haystacks - Discussing the Role of Automated Filtering in the New Indian Intermediary Liability Rules
This article first appeared on the KU Leuven's Centre for IT and IP (CITIP) blog. Cross-posted with permission.
----
Matthew Sag, in his 2018 paper on internet safe harbours, discussed how the internet shifted the power to decide what knowledge could be showcased from the traditional gatekeepers (publishing houses) to anybody with access to the internet. A “content creator” today ranges from legacy media companies to any person with a smartphone and an internet connection. Along a similar trajectory, with the increase in websites and mobile apps and the functions they serve, the scope of what counts as an internet intermediary has widened all over the world.
Who is an Intermediary?
In India, the definition of “intermediary” is found in Section 2(w) of the Information Technology (IT) Act, 2000, which defines an intermediary, with respect to any particular electronic record, as “any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-marketplaces and cyber cafes”. The all-encompassing nature of this definition has allowed the dynamic nature of intermediaries to be covered by the Act and by the Guidelines published periodically under it (2011, 2018 and 2021). With more websites, social media companies and content creators online today, there is a need to look at the ways in which intermediaries remove illegal content or content that goes against their community guidelines.
Along with the definition of an intermediary, Section 79 of the IT Act provides exemptions that grant internet intermediaries safe harbour from liability for third-party content, and further empowers the central government to make Rules that act as guidelines for intermediaries to follow. The Intermediary Liability Rules hence seek to regulate content and lay down safe harbour provisions for intermediaries and internet service providers. To keep up with the changing nature of the internet and of internet intermediaries, India relies on these Rules to regulate and provide a conducive environment for intermediaries. Under this provision, India has so far published three versions of the Intermediary Liability (IL) Rules: the first in 2011, followed by draft amendments in 2018, and finally the 2021 version, which supersedes the 2011 Rules.
The Growing Use of Automated Content Moderation
Each version of the Rules introduced changes meant to keep them abreast of the changing face of the internet and the changing nature of both content and content creators. The 2018 version of the Rules accordingly showcased a push towards automated content filtering. The text of Rule 3(9) reads as follows: “The Intermediary shall deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content”.
Under Rule 3(9), intermediaries were required to deploy automated tools or appropriate mechanisms to proactively identify and remove, or disable public access to, unlawful content. However, neither the 2018 IL Rules nor the parent Act (the IT Act) specified which content could be deemed unlawful. The 2018 Rules also failed to establish the specific responsibilities of the intermediaries, relying instead on vague terms like “appropriate mechanisms” and “appropriate controls”. Hence, though the Rules mandated the use of automated tools, neither they nor the IT Act provided clear guidelines on what could be removed.
The lack of clear guidelines, and of a list of content that could be removed, left it up to the intermediaries to decide which content, if not actively removed, could cost them their immunity. It has previously been documented that the lack of clear guidelines in the 2011 version of the Rules led intermediaries to over-comply with take-down notices, often taking down content that did not warrant it. This existing tendency to over-comply, combined with automated filtering, could have resulted in a number of unwarranted take-downs.
While the 2018 Rules mandated the deployment of automated tools, the year 2020 (possibly due to pandemic-induced work-from-home safety protocols and global lockdowns) saw major social media companies announce a move towards fully automated content moderation. Though automated content removal seems like the right step considering the trauma that human moderators go through, the algorithms now used to remove content rely on the parameters, practices and data from earlier removals made by human moderators. More recently, with the emergence of the second wave of COVID-19 in India, the Ministry of Electronics and Information Technology asked social media platforms to remove “unrelated, old and out of the context images or visuals, communally sensitive posts and misinformation about COVID19 protocols”.
The New IL Rules - A ray of hope?
The 2021 version of the IL Rules provides a more nuanced approach to the use of automated content filtering than the earlier version. Rule 4(4) now requires only “significant social media intermediaries” to use automated tools to identify and take down content pertaining to “child sexual abuse material” or “depicting rape”, or any information identical to content that has already been removed through a take-down notice. The Rules define a social media intermediary as an “intermediary which primarily or solely enables interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services”. The Rules also go a step further and create another type of intermediary, the significant social media intermediary, defined as one “having a number of registered users in India above such threshold as notified by the Central Government”. Hence, which social media intermediaries qualify as significant ones could change at any time.
Along with adding a new threshold (qualifying as a significant social media intermediary), the Rules, in contrast to the 2018 version, also emphasise that such removal must be proportionate to the interests of freedom of speech and expression and the privacy of users. The Rules also call for “appropriate human oversight” as well as a periodic review of the tools used for content moderation. By using the term “shall endeavour”, the Rules reduce the pressure on the intermediary to set up these mechanisms: the requirement is now on a best-effort basis, as opposed to the word “shall” in the 2018 version of the Rules, which made it mandatory.
Although the Rules now narrow down the instances where automated content removal can take place, the concerns around over-compliance and censorship still loom. One reason for concern is that the Rules still fail to require intermediaries to set up a mechanism for redress or for appeals against such removal. Additionally, the provision that automated systems can remove content that has previously been taken down creates cause for worry, as the propensity of intermediaries to over-comply and take down content has already been documented. This brings us back to the earlier issue of social media companies’ automated systems removing legitimate news sources. Though the 2021 Rules try to clarify certain provisions related to automated filtering, such as the addition of safeguards, they also suffer from vague provisions that could cause compliance issues. Terms such as “proportionate” and “having regard to free speech” fail to lay down definitive directions for the intermediaries (in this case, SSMIs) to comply with. Additionally, as stated earlier, whether an intermediary qualifies as an SSMI can change at any time, based either on a change in its number of users or on a change in the threshold notified by the government. The absence of human intervention during removal, vague guidelines and the fear of losing safe harbour protection add to the already increasing trend of censorship on social media. With proactive filtering through automated means, content can be removed almost immediately, which means that certain content creators might not even be able to get their content online. Given India’s current influx of new internet users, some of these creators would also be first-time users of the internet.
Conclusion
The need for automated removal of content is understandable, based not only on the sheer volume of content but also on the nightmare stories of the toll it takes on human content moderators, who otherwise have to go through hours of disturbing content. Though the Indian Intermediary Liability Guidelines have improved on the earlier versions by moving away from mandating proactive filtering, there still needs to be consideration of how these technologies are used, and the laws should account for the shift in who a content creator is. There need to be avenues of recourse against unfair removal of content and a means to get an explanation of why the content was removed, via notices to the user. In the case of India, the notices should also be in Indian languages, so that people are able to understand them.
In the absence of clearer guidelines, the peril of over-censorship by intermediaries seeking to stay out of trouble could further stifle not just freedom of speech but also access to information. In addition, the fear of content being taken down, or even of potential prosecution, could lead people to self-censor, preventing them from exercising their fundamental rights to freedom of speech and expression as guaranteed by the Indian Constitution. We hope that the next version of the Rules takes a more nuanced approach to automated content removal and lays down adequate and specific safeguards that create a conducive environment for both intermediaries and content creators.
Techno-solutionist Responses to COVID-19
The article by Amber Sinha, Pallavi Bedi, and Aman Nair was published in the Economic & Political Weekly, Vol. 56, Issue No. 29, 17 Jul, 2021.
Over the last two decades, slowly but steadily, the governance agenda of the Indian state has moved to the digital realm. In 2006, the National e-Governance Plan (NeGP) was approved by the Indian state wherein a massive infrastructure was developed to reach the remotest corners and facilitate easy access of government services efficiently at affordable costs. The first set of NeGP projects focused on digitalising governance schemes that dealt with taxation, regulation of corporate entities, issuance of passports, and pensions. Over a period of time, they have come to include most interactions between the state and citizens from healthcare to education, transportation to employment, and policing to housing. Upon the launch of the Digital India Mission by the union government, the NeGP was subsumed under the e-Gov and e-Kranti components of the project. The original press release by the central government reporting the approval by the cabinet of ministers of the Digital India programme speaks of “cradle to grave” digital identity as one of its vision areas. This identity was always intended to be “unique, lifelong, online and authenticable.”
Since the inception of the Digital India campaign by the current government, various concerns have been raised about the privacy issues posed by this project. The initiative includes over 50 “mission mode projects” in various stages of implementation. All of these projects entail the collection of vast quantities of citizens’ personally identifiable information. However, most of these initiatives do not have clearly laid down privacy policies. There is also a lack of properly articulated access control mechanisms, and doubts exist over important issues such as data ownership, since most projects are public–private partnerships in which a private organisation collects, processes and retains large amounts of data. Most importantly, they have continued to exist and prosper in a state of regulatory vacuum, with no data protection legislation to govern them. Further, the state of the digital divide and digital literacy in India should automatically underscore the need to not rely solely on digital solutions.
Click to read the full article here
Facial Recognition Technology in India
Executive Summary
Over the past two decades there has been a sustained effort at digitising India’s governance structure in order to foster development and innovation. The field of law enforcement and safety has seen significant change in that direction, with technological tools such as Closed Circuit Television (CCTV) and Facial Recognition Technology (FRT) increasingly being deployed by the government.
Yet for all its increased use, there is still no coherent legal and regulatory framework governing FRT in India. Towards informing such a framework, this paper seeks to document present uses of FRT in India, specifically by law enforcement agencies and central and state governments; to understand the applicability of existing legal frameworks to the use of FRT; and to define key areas that need to be addressed when using the technology in India. We also briefly look at how the coverage of FRT has expanded beyond law enforcement: it now extends to educational institutions and employment, and is being used in the provision of Covid-19 vaccines.
We begin by examining use cases of FRT systems by various divisions of central and state governments. In doing so, it becomes apparent that there is a lack of uniform standards or guidelines at either the state or central level - leading to different FRT systems having differing standards of applicability and scope of use. And while the use of such systems seems to be growing at a rapid rate, questions around their legality persist.
It is unclear whether the use of FRT is compliant with the fundamental right to privacy as affirmed by the Supreme Court in 2017 in Puttaswamy. While the right to privacy is not an absolute right, for the state to curtail it, the restrictions have to satisfy a three-fold requirement, the first being the need for an explicit legislative mandate in instances where the government looks to curtail the right. However, the FRT systems we have analysed do not have such a mandate and are often the result of administrative or executive decisions with no legislative blessing or judicial oversight.
We further locate the use of FRT technology within the country’s wider legislative, judicial and constitutional frameworks governing surveillance. We also briefly articulate comparative perspectives on the use of FRT in other jurisdictions. We further analyse the impact of the proposed Personal Data Protection Bill on the deployment of FRT. Finally, we propose a set of recommendations to develop a path forward for the technology’s use which include the need for a comprehensive legal and regulatory framework that governs the use of FRT. Such a framework must take into consideration the necessity of use, proportionality, consent, security, retention, redressal mechanisms, purpose limitation, and other such principles. Since the use of FRT in India is also at a nascent stage, it is imperative that there is greater public research and dialogue into its development and use to ensure that any harms that may arise in the field are mitigated.
Click to download the entire research paper here
A Guide to Drafting Privacy Policy under the Personal Data Protection Bill, 2019
The Bill, in its current form, doesn’t have explicit transitory provisions, i.e. a defined timeline for the enforcement of its provisions once it is notified as enforceable legislation. Since the necessary subject-matter expertise may be limited on short notice and out of budget for certain companies, we intend to release a series of guidance documents that attempt to simplify the operational requirements of the legislation.
Certain news reports had earlier suggested that the Joint Parliamentary Committee reviewing the Bill has proposed 89 new amendments and a new clause. The nature and content of these amendments so far remain unclear. However, we intend to start the series by addressing some frequently asked questions around meeting the requirements of publishing a privacy notice and shall make the relevant changes post notification of the new Bill. The solutions provided in this guidance document are mostly based on international best practices and any changes in the solutions based on Indian guidelines and the revised PDP Bill will be redlined in the future.
The frequently asked questions and other specific examples on complying with the requirements of publishing a privacy policy have been compiled based on informal discussions with stakeholders, unsolicited queries from smaller organizations and publicly available details from conferences on the impact of the Bill. We intend to conduct extensive empirical analysis of additional queries or difficulties faced by smaller organizations towards achieving compliance after the notification of the new Bill. Regardless, any smaller organizations (NGOs, start-ups, etc.) interested in discussing compliance-related queries can get in touch with us.
Click to download the full report here. The report was reviewed by Pallavi Bedi and Amber Sinha.
The Geopolitics of Cyberspace: A Compendium of CIS Research
With a rapidly digitizing economy and clear interests in shaping global rules that favour its strategic interests, India stands at a crucial juncture on various facets of this debate. How India governs and harnesses technology, coupled with how India translates these values and negotiates its interests globally, will surely have an impact on how similarly placed emerging economies devise their own strategies. The challenge here is to ensure that domestic technology governance as well as global engagements genuinely uphold and further India’s democratic fibre and constitutional vision.
Since 2018, researchers at the Centre for Internet and Society have produced a body of research, including academic writing, at the intersection of geopolitics and technology, covering global governance regimes on trade and cybersecurity (including their attendant international law concerns) and the digital factor in bilateral relationships (with a focus on the Indo-US and Sino-Indian relationships). We have paid close attention to the role of emerging technologies in this debate, including AI and 5G, as well as to how private actors in the technology domain, operating across national jurisdictions, are challenging and upending traditionally accepted norms of international law, global governance, and geopolitics.
The global fissures in this space matter fundamentally for individuals who increasingly use digital spaces to carry out day-to-day activities: from being unwitting victims of state surveillance, to harnessing social media for causes of empowerment, to falling prey to state-sponsored cyber attacks, they are directly affected by the rules of cyber governance and the politics underlying them. Yet the rules are set by a limited set of public officials and technology lawyers within restricted corridors of power. Better global governance needs to be more participatory and accessible. CIS’s research and writing has been cognizant of this, and has attempted to merge questions of global governance with constitutional and technical questions that put individuals and communities centre-stage.
Research and writing produced by CIS researchers and external collaborators from 2018 onward is detailed in the appended compendium.
Compendium
Global cybersecurity governance and cyber norms
Two decades since a treaty governing state behaviour in cyberspace was first mooted by Russia, global governance processes have meandered along. The security debate has often been polarised along “Cold War” lines, but the recent framing of cyberspace governance as a developmental, social and economic question has added several new vectors to this debate. This past year, two parallel processes at the United Nations General Assembly’s First Committee on Disarmament and International Security, namely the United Nations Group of Governmental Experts (UN-GGE) and the United Nations Open-Ended Working Group (OEWG), managed to produce consensus reports, but several questions on international law, norms and geopolitical co-operation remain. India has been a participant in these crucial governance debates. Both the substance of its contributions and their implications remain a key focus area for our research.
Edited Volumes
- Karthik Nachiappan and Arindrajit Basu India and Digital World-Making, Seminar 731, 1 July 2020 (featuring contributions from Manoj Kewalramani, Gunjan Chawla, Torsha Sarkar, Trisha Ray, Sameer Patil, Arun Vishwanathan, Vidushi Marda, Divij Joshi, Asoke Mukerji, Pallavi Raghavan, Karishma Mehrotra, Malavika Raghavan, Constantino Xavier, Rajen Harshe' and Suman Bery)
Long-Form Articles
- Arindrajit Basu and Elonnai Hickok, Cyberspace and External Affairs: A Memorandum for India (Memorandum, Centre for Internet and Society, 30 Nov 2018)
- The Potential for the Normative Regulation of Cyberspace (White Paper, Centre for Internet and Society, 30 July 2018)
- Arindrajit Basu and Elonnai Hickok Conceptualizing an International Security Architecture for cyberspace (Briefings of the Global Commission on the Stability of Cyberspace, Bratislava, Slovakia, May 2018)
- Sunil Abraham, Mukta Batra, Geetha Hariharan, Swaraj Barooah, and Akriti Bopanna, India's contribution to internet governance debates (NLUD Student Law Journal, 2018)
Blog Posts and Op-eds
- Arindrajit Basu, Irene Poetranto, and Justin Lau, The UN struggles to make progress in cyberspace, Carnegie Endowment for International Peace, May 19th, 2021
- Andre’ Barrinha and Arindrajit Basu, Could cyber diplomacy learn from outer space, EU Cyber Direct, 20th April 2021
- Arindrajit Basu and Pranesh Prakash, Patching the gaps in India’s cybersecurity, The Hindu, 6th March 2021
- Arindrajit Basu and Karthik Nachiappan, Will India negotiate in cyberspace?, Leiden Security and Global Affairs blog, December 16, 2020
- Elizabeth Dominic, The debate over internet governance and cybercrimes: West vs the rest?, Centre for Internet and Society, June 08, 2020
- Arindrajit Basu, India’s role in Global Cyber Policy Formulation, Lawfare, Nov 7, 2019
- Pukhraj Singh, Before cyber norms, let's talk about disanalogy and disintermediation, Centre for Internet and Society, Nov 15th, 2019
- Arindrajit Basu and Karan Saini, Setting International Norms of Cyber Conflict is Hard, But that Doesn’t Mean that We Should Stop Trying, Modern War Institute, 30th Sept, 2019
- Arindrajit Basu, Politics by other means: Fostering positive contestation and charting red lines through global governance in cyberspace (Digital Debates, Volume 6, 2019)
- Arindrajit Basu, Will the WTO Finally Tackle the ‘Trump’ Card of National Security? (The Wire, 8th May 2019)
Policy Submissions
- Arindrajit Basu, CIS Submission to OEWG (Centre for Internet and Society, Policy Submission, 2020)
- Aayush Rathi, Ambika Tandon, Elonnai Hickok, and Arindrajit Basu. “CIS Submission to UN High-Level Panel on Digital Cooperation.” Policy submission. Centre for Internet and Society, January 2019.
- Arindrajit Basu, Gurshabad Grover, and Elonnai Hickok. “Response to GCSC on Request for Consultation: Norm Package Singapore.” Centre for Internet and Society, January 17, 2019.
- Arindrajit Basu and Elonnai Hickok. Submission of Comments to the GCSC Definition of ‘Stability of Cyberspace’ (Centre for Internet and Society, September 6, 2019)
Digital Trade and India's Political Economy
The modern trading regime and its institutions were born largely into a world bereft of the internet and its implications for cross-border flows and commerce. Regulatory ambitions at the WTO have therefore played catch-up with the technological innovation that has underpinned the modern global digital economy. Driven by tech giants, the “developed” world has sought to restrict the policy space available to the emerging world to impose mandates regarding data localisation, source code disclosure, and taxation - among other initiatives central to development. At the same time, emerging economies have pushed back, making for a tussle that continues to this day. Our research has focussed both on issues of domestic political economy and data governance, and on the implications these domestic issues have for how India and other emerging economies negotiate on the world stage.
Long-Form articles and essays
- Arindrajit Basu, Elonnai Hickok and Aditya Chawla, The Localisation Gambit: Unpacking policy moves for the sovereign control of data in India (Centre for Internet and Society, March 19, 2019)
- Arindrajit Basu, Sovereignty in a datafied world: A framework for Indian diplomacy, in Navdeep Suri and Malancha Chakrabarty (eds), A 2030 Vision for India’s Economic Diplomacy (Observer Research Foundation, 2021)
- Amber Sinha, Elonnai Hickok, Udbhav Tiwari and Arindrajit Basu, Cross Border Data-Sharing and India (Centre for Internet and Society, 2018)
Blog posts and op-eds
- Arindrajit Basu, Can the WTO build consensus on digital trade, Hinrich Foundation, October 05, 2021
- Amber Sinha, The power politics behind Twitter versus Government of India, The Wire, June 03, 2021
- Karthik Nachiappan and Arindrajit Basu, Shaping the Digital World, The Hindu, 30th July 2020
- Arindrajit Basu and Karthik Nachiappan, India and the global battle for data governance, Seminar 731, 1st July 2020
- Amber Sinha and Arindrajit Basu, Reliance Jio-Facebook deal highlights India’s need to revisit competition regulations, Scroll, 30th April 2020
- Arindrajit Basu and Amber Sinha, The realpolitik of the Reliance-Jio Facebook deal, The Diplomat, 29th April 2020
- Arindrajit Basu, The Retreat of the Data Localization Brigade: India, Indonesia, Vietnam, The Diplomat, Jan 10, 2020
- Amber Sinha and Arindrajit Basu, The Politics of India’s Data Protection Ecosystem, EPW Engage, 27 Dec 2019
- Arindrajit Basu and Justin Sherman, Key Global Takeaways from India’s Revised Personal Data Protection Bill, Lawfare, Jan 23, 2020
- Nikhil Dave, “Geo-Economic Impacts of the Coronavirus: Global Supply Chains.” Centre for Internet and Society, June 16, 2020.
International Law and Human Rights
International law and human rights are ostensibly technology-neutral and should lay the edifice for digital governance and cybersecurity today. Our research on international human rights has focussed on global surveillance practices and other internet restrictions employed by a variety of nations, and the implications these have for citizens and communities in India and similarly placed emerging economies. CIS researchers have also contributed to, and commented on, World Intellectual Property Organization negotiations at the intersection of international Intellectual Property (IP) rules and human rights.
Long-form articles
- Arindrajit Basu, Extra Territorial Surveillance and the incapacitation of international human rights law, 12 NUJS LAW REVIEW 2 (2019)
- Gurshabad Grover and Arindrajit Basu, “Internet Blockage” (Scenario contribution to NATO CCDCOE Cyber Law Toolkit, 2021)
- Arindrajit Basu and Elonnai Hickok, Conceptualizing an international framework for active private cyber defence (Indian Journal of Law and Technology, 2020)
- Arindrajit Basu, Challenging the dogmatic inevitability of extraterritorial state surveillance, in Trisha Ray and Rajeswari Pillai Rajagopalan (eds), Digital Debates: CyFy Journal 2021 (New Delhi: ORF and Global Policy Journal, 2021)
Blog Posts and op-eds
- Arindrajit Basu, “Unpacking US Law And Practice On Extraterritorial Mass Surveillance In Light Of Schrems II”, Medianama, 24th August 2020
- Anubha Sinha, “World Intellectual Property Organisation: Notes from the Standing Committee on Copyright Negotiations (Day 1, Day 2, Day 3 and 4)”, July 2021
- Raghav Ahooja and Torsha Sarkar, How (not) to regulate the internet: Lessons from the Indian Subcontinent, Lawfare, September 23, 2021
Bilateral Relationships
Technology has become a crucial factor in shaping bilateral and plurilateral co-operation and competition. Given the geopolitical fissures and opportunities since 2020, our research has focussed on how technology governance and cybersecurity could impact the larger ecosystem of Indo-China and India-US relations. Going forward, we hope to undertake more research on technology in plurilateral arrangements, including the Quadrilateral Security Dialogue.
- Arindrajit Basu and Justin Sherman, The Huawei Factor in US-India Relations, The Diplomat, 22 March 2021
- Aman Nair, “TikTok: It’s Time for Biden to Make a Decision on His Digital Policy with China,” Centre for Internet and Society, January 22, 2021
- Arindrajit Basu and Gurshabad Grover, India Needs a Digital Lawfare Strategy to Counter China, The Diplomat, 8th October 2020
- Anam Ajmal, The app ban will have an impact on the holding companies...global power projection begins at home, Times of India, July 7th, 2020 (Interview with Arindrajit Basu)
- Justin Sherman and Arindrajit Basu, Trump and Modi embrace, but remain digitally divided, The Diplomat, March 05th, 2020
Emerging Technologies
Governance needs to keep pace with the challenges posed by emerging technologies, including 5G and AI. To do so, an interdisciplinary approach that evaluates these scientific advances in line with the regimes that govern them is of utmost importance. While each country will need to regulate technology through the lens of its strategic interests and public policy priorities, it is clear that geopolitical tensions over standard-setting and governance models compel a more global outlook.
Long-Form reports
- Anoushka Soni and Elizabeth Dominic, Legal and Policy implications of Autonomous weapons systems (Centre for Internet and Society, 2020)
- Aayush Rathi, Gurshabad Grover, and Sunil Abraham, Regulating the internet: The Government of India & Standards Development at the IETF (Centre for Internet and Society, 2018)
Blog posts and op-eds
- Aman Nair, Would banning Chinese telecom companies make 5G secure in India?, Centre for Internet and Society, 22nd December 2020
- Arindrajit Basu and Justin Sherman, Two New Democratic Coalitions on 5G and AI Technologies, Lawfare, 6th August 2020
- Nikhil Dave, The 5G Factor: A Primer, Centre for Internet and Society, July 20, 2020.
- Gurshabad Grover, The Huawei bogey, Indian Express, May 30th, 2019
- Arindrajit Basu and Pranav MB, What is the problem with 'Ethical AI'? An Indian perspective, Centre for Internet and Society, July 21, 2019
(This compendium was drafted by Arindrajit Basu with contributions from Anubha Sinha. Aman Nair, Gurshabad Grover, and Pranav MB reviewed the draft and provided vital insight towards its conceptualization and compilation. Dishani Mondal and Anand Badola provided important inputs at earlier stages of the process towards creating this compendium)
International Cyber Law Toolkit scenario: Internet blockage
Arindrajit Basu and Gurshabad Grover contribute a scenario and a legal analysis to the International Cyber Law in Practice Toolkit.
As per its website:
The Cyber Law Toolkit is a dynamic interactive web-based resource for legal professionals who work with matters at the intersection of international law and cyber operations. The Toolkit may be explored and utilized in a number of different ways. At its core, it presently consists of 24 hypothetical scenarios. Each scenario contains a description of cyber incidents inspired by real-world examples, accompanied by detailed legal analysis. The aim of the analysis is to examine the applicability of international law to the scenarios and the issues they raise.
A summary of the contribution:
In response to widespread protests, a State takes measures to isolate its domestic internet networks from connecting with the global internet. These actions also lead to a massive internet outage in the neighbouring State, whose internet access was contingent on interconnection with a large network in the former State. The analysis considers whether the first State’s actions amount to violations of international law, in particular with respect to the principle of sovereignty, international human rights law, international telecommunication law and the responsibility to prevent transboundary harm.
You can read the full scenario and analysis here.
The press release by NATO CCDCOE announcing the September 2021 update may be accessed here.
Beyond the PDP Bill: Governance Choices for the DPA
The Personal Data Protection Bill, 2019, was introduced in the Lok Sabha on 11 December 2019. It lays down an overarching framework for personal data protection in India. Once revised and approved by Parliament, it is likely to establish the first comprehensive data protection framework for India. However, the provisions of the Bill are only one component of the forthcoming data protection framework. The Bill further proposes setting up the Data Protection Authority (DPA) to oversee enforcement, supervision, and standard-setting. The Bill consciously chooses to vest the responsibility of administering the framework with a regulator instead of a government department. As an independent agency, the DPA is expected to be autonomous from the legislature and the Central Government and capable of making expert-driven regulatory decisions in enforcing the framework.
Furthermore, the DPA is not merely an implementing authority; it is also expected to develop privacy regulations for India by setting standards. As such, it will set the day-to-day obligations of regulated entities under its supervision. Thus, the effectiveness with which it carries out its functions will be the primary determinant of the impact of this Bill (or a revised version thereof) and the data protection framework set out under it.
The final version of the PDP Bill may or may not provide the DPA with clear guidance regarding its functions. In this article, we emphasise the need to look beyond the Bill and instead examine the specific governance choices the DPA must deliberate on vis-à-vis its standard-setting function, which are distinct from those it will encounter as part of its enforcement and supervision functions.
A brief timeline of the genesis of a distinct privacy regulator for India
The vision of an independent regulator for data protection in India emerged over the course of several intervening processes that set out to revise India’s data protection laws. In fact, the need for a dedicated data protection regulation for India, with enforceable obligations and rights, was debated years before the Aadhaar, Cambridge Analytica, and Pegasus revelations captured the public imagination and mainstreamed conversations on privacy.
The Right to Privacy Bill, 2011, which never took off, recognised the right to privacy in line with Article 21 of the Constitution of India, which pertains to the right to life and personal liberty. The Bill laid down express conditions for collecting and processing data and the rights of data subjects. It also proposed setting up a Data Protection Authority (DPA) to supervise and enforce the law and advise the government on policy matters. Upon review by the Cabinet, it was suggested that the Authority be recast as an Advisory Council, given that its role under the Bill was limited.
Subsequently, in 2012, the AP Shah Committee Report recommended a principles-based data protection law, focused on setting standards while refraining from providing granular rules, to be enforced through a co-regulatory structure. This structure would consist of central and regional-level privacy commissioners, self-regulatory bodies, and data protection officers appointed by data controllers. There were also a few private members’ bills introduced between 2011 and 2019.
None of these efforts materialised, and the regulatory regime for data protection and privacy remained embedded within the Information Technology Act, 2000, and the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules). Though the SPDI Rules require body corporates to secure personal data, their enforcement is limited to cases of negligence in abiding by this limited set of obligations, which pertain to sensitive personal information only and which must have caused wrongful loss or gain – a high threshold for aggrieved individuals to prove. Otherwise, the Intermediary Guidelines, 2011 require all intermediaries to generally follow these Rules under Rule 3(8). The enforcement of these obligations is entrusted to adjudicating officers (AOs) appointed by the central government, who are typically bureaucrats serving as AOs in an ex-officio capacity.
By 2017, the Aadhaar litigations had provided additional traction to the calls for a dedicated and enforceable data protection framework in India. In its judgement, the Supreme Court recognised the right to privacy as a fundamental right in India and stressed the need for a dedicated data protection law. Around the same time, the Ministry of Electronics and Information Technology (MeitY) constituted a committee of experts under the chairmanship of Justice BN Srikrishna. The Srikrishna Committee undertook public consultations on a 2017 white paper, which culminated in the nearly comprehensive Personal Data Protection Bill, 2018, and an accompanying report. This 2018 Bill outlined a regulatory framework of personal data processing for India and defined data processing entities as fiduciaries, which owe a duty of care to individuals to whom personal data relates. The Bill provided for the setting up of an independent regulator that would, among other things, specify further standards for data protection and administer and enforce the provisions of the Bill.
MeitY invited public comments on this Bill and tabled a revised version, the Personal Data Protection Bill, 2019 (PDP Bill), in the Lok Sabha in December 2019. Following public pressure calling for detailed discussions on the Bill before its passing, it was referred to a Joint Parliamentary Committee (JPC) constituted for this purpose. It currently remains under review; the JPC is reportedly expected to table its report in the 2021 Winter Session of Parliament. Though the Bill is likely to undergo another round of revisions following the JPC’s review, this is the closest India has come to realising its aspirations of establishing a dedicated and enforceable data protection framework.
This Bill carries forward the choice of a distinct regulatory body, though questions remain on the degree of its independence, given the direct control granted to the central government in appointing its members and funding the DPA.
Conceptualising an Independent DPA
The Srikrishna Committee’s 2017 white paper and its 2018 report on the PDP Bill discuss the need for a regulator in the context of enforcement of its provisions. However, the DPA under the PDP Bill is tasked with extensive powers to frame detailed regulations and codes of conduct to inform the day-to-day obligations of data fiduciaries and processors. To be clear, the standard-setting function for a regulator entails laying down the standards based on which regulated entities (i.e. the data fiduciaries) will be held accountable, and the manner in which they may conduct themselves while undertaking the regulated activity (i.e. personal data processing). This is in addition to its administrative and enforcement, and quasi-judicial functions, as outlined below:
Functions of the DPA under the PDP Bill 2019
At this stage, it is important to note that the choice of regulation via a regulator is distinct from the administration of the Bill by the central or state governments. Creating a distinct regulatory body allows government procedures to be replaced with expert-driven decision-making to ensure sound economic regulation of the sector. At the same time, the independence of the regulatory authority insulates it from political processes. The third advantage of independent regulatory authorities is the scope for ‘operational flexibility’, embodied in the relative autonomy of their employees and of their decision-making from government scrutiny.
This is also the rationale provided by the Srikrishna Committee for its choice to entrust the administration of the data protection law to an independent DPA. The 2017 white paper that preceded the 2018 Srikrishna Committee Report proposed a distinct regulator to provide expert-driven enforcement of laws in the highly specialised data protection sphere. Secondly, the regulator would serve as a single point of contact for entities seeking guidance and would ensure consistency by issuing rules, standards, and guidelines. The Srikrishna Committee Report concretised this idea and proposed a sector-agnostic regulator expected to undertake expertise-driven standard-setting, enforcement, and adjudication under the Bill. The PDP Bill carries forward this conception of a DPA that is distinct from the central government.
Conceptualised as such, the DPA has a completely new set of questions to contend with. Specifically, regulatory bodies require additional safeguards to overcome the legitimacy and accountability questions that arise when law-making is carried out not by elected members of the legislature, but via the unelected executive. The DPA would need to incorporate democratic decision-making processes to overcome the deficit of public participation in an expert-driven body. Thus, the meta-objective of ensuring autonomous, expertise-driven, and legitimate regulation of personal data processing necessitates that the regulator has sufficient independence from political interference, is populated with subject matter experts and competent decision-makers, and further has democratic decision-making procedures.
Further, the standard-setting role of the regulator does not receive sufficient attention in terms of providing distinct procedural or substantive safeguards either in the legislation or public policy guidance.
Reconnaissance under the PDP Bill: How well does it guide the DPA?
At this time, the PDP Bill is the primary guidance document that defines the DPA and its overall structure. India also lacks an overarching statute or binding framework that lays down granular guidance on regulation-making by regulatory agencies.
The PDP Bill, in its current iteration, sets out skeletal provisions to guide the DPA in achieving its objectives. Specifically, the Bill provides guidance limited to the following:
- Parliamentary scrutiny of regulations: The DPA must table all its regulations before the Parliament. This is meant to accord legislative scrutiny to binding legal standards promulgated by unelected officials.
- Consistency with the Act: All regulations should be consistent with the Act and the rules framed under it. This integrates a standard of administrative law to a limited extent within the regulation-making process.
However, India’s past track record indicates that regulations, once tabled before the Parliament, are rarely questioned or scrutinised. Judicial review is typically based on ‘thin’ procedural considerations such as whether the regulation is unconstitutional, arbitrary, ultra vires, or goes beyond the statutory obligations or jurisdiction of the regulator. In any event, judicial review is possible only when an instrument is challenged by a litigant, and, therefore, it may not always be a robust ex-ante check on the exercise of this power. A third challenge arises where instruments other than regulations are issued by the regulator. These could be circulars, directions, guidelines, and even FAQs, which are rarely bound by even the minimal procedural mandate of being tabled before the Parliament. To be sure, older regulators including the Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI) also face similar issues, which they have attempted to address through various methods including voluntary public consultations, stakeholder meetings, and publication of minutes of meetings. These are useful tools for the DPA to consider as well.
Apart from these, specific guidance is provided with respect to issuing and approving codes of practice and issuing directions as follows:
- Codes of practice: The DPA is required to (i) ensure transparency,[1] (ii) consult with other sectoral regulators and stakeholders, and (iii) follow a procedure to be prescribed by the central government prior to the notification of codes of practice under the Bill.[2]
- Directions: The DPA may issue directions to individual regulated entities or classes of entities from time to time, provided these entities have been given the opportunity to be heard by the DPA before such directions are issued.[3]
However, the meaning of transparency and the process for engaging with sectoral regulators remains unspecified under the Bill. Furthermore, the central government has been provided vast discretion to formulate these procedures, as the Bill does not specify the principles or outcomes sought to be achieved via these procedures. The Bill also does not specify instances where such directions may be issued and in which form.
Thus, as per its last publicly available iteration, the Bill remains silent on the following:
- The principles that may guide the DPA in its functioning.
- The procedure to be followed for issuing regulations and other subordinate legislation under the Bill.
- The relevant regulatory instruments, other than regulations and codes of practice – such as circulars, guidelines, FAQs, etc. – that may be issued by the DPA.
- The specifics regarding the members and employees within the DPA who are empowered to make these regulations.
It is unclear whether the JPC will revise the DPA’s structure or recommend statutory guidance for the DPA in executing any of its functions. This is unlikely, given that parent statutes for other regulators typically omit such guidance. As a result, the DPA may be required to make intentional and proactive choices on these matters, much like their regulatory counterparts in India. These are discussed in the section below.
Envisaging a Proactive Role for the DPA
As the primary regulatory body in charge of the enforcement of the forthcoming data protection framework, what should be the role of the DPA in setting standards for data protection?
The complexity of the subject matter, and the DPA’s role as the frontline body to define day-to-day operational standards for data protection for the entire digital economy, necessitates that it develop transparent guiding principles and procedures. Furthermore, given that the DPA’s autonomy and capacity are currently unclear, the DPA will need to make deliberate choices regarding how it conducts itself. In this regard, the skeletal nature of the PDP Bill also allows the DPA to determine its own procedures to carry out its tasks effectively.
This is not uncommon in India: various regulators have devised frameworks to create benchmarks for themselves. The Airports Economic Regulatory Authority (AERA) is obligated to follow a dedicated consultation process as per an explicit transparency mandate under the parent statute. However, the Insolvency and Bankruptcy Board of India (IBBI) has, on its own initiative, formulated regulations to guide its regulation-making functions. In other cases, consultation processes have been integrated into the respective framework through judicial intervention: the Telecom Regulatory Authority of India (TRAI) has been mandated to undertake consultations through judicial interpretation of the requirement for transparency under the Telecom Regulatory Authority of India Act, 1997 (TRAI Act).
In this regard, we develop a list of considerations that the DPA should look to address while carrying out its standard-setting functions. We also draw on best practices from regulators in India and abroad, which can help identify feasible solutions for an effective DPA for India.
The choice of regulatory instruments
The DPA is empowered to issue regulations, codes of practice, and directions under the Bill. At the same time, regulators in India routinely issue other regulatory instruments to assign obligations and clarify them. Some commonly used regulatory instruments are outlined below. The terms used for instruments are not standard across regulators, and the list and description set out below outline the main concepts and not fixed labels for the instruments.
Overview of regulatory instruments
| | Circulars and Master Circulars | Guidelines | FAQs | Directions |
| --- | --- | --- | --- | --- |
| Content | Circulars are used to prescribe detailed obligations and prohibitions for regulated entities and can mimic regulations. Master circulars consolidate circulars on a particular topic periodically. | These may be administrative or substantive, depending on the practice of the regulator in question. | Issued in public interest by regulators to clarify the regulatory framework administered by them. They cannot prescribe new standards or create obligations. | Issued to provide focused instructions to individual entities or classes of entities in response to an adjudicatory action or in lieu of a current challenge. |
| Binding character | Generally binding in the same manner as regulations and rules. However, if they go beyond the parent Act or existing rules and regulations, they may be struck down on judicial review. | May or may not be binding, depending upon the language employed or the regulator’s practice. | Unclear whether these are binding and to what extent. However, crucial clarifications on important concepts sometimes emerge from FAQs. | Binding in respect of the class of regulated entities to whom they are issued. |
| Parliamentary scrutiny | Unlike regulations, none of these instruments have to be laid before the Parliament. | | | |
Thus, all these instruments, to varying degrees, have been used to create binding obligations for regulated entities. The choice of regulatory instrument is not made systematically. Indeed, even a hierarchy of instruments and their functions is not clearly set out by most regulators. The rationale for issuing a circular as against a regulation is also unclear. A study on regulatory performance in India by Burman and Zaveri (2018) has highlighted an over-reliance on instruments such as circulars. As per their study, between 2014 and 2016, the RBI and SEBI issued 1,016 and 122 circulars, as against 48 and 51 regulations, respectively. These circulars are not bound by the same pre-consultative mandate, nor are they required to be laid before the Parliament. While circulars may have been intended to routinely lay down administrative or procedural requirements, the study narrows its frame of reference to circulars that lay down substantive regulatory requirements. In this light, it is unclear why parliamentary scrutiny is mandated for regulations alone, and not for instruments like circulars and directions, even though they lay down similarly substantive requirements. Furthermore, there have also been instances where instruments like FAQs have gone beyond their advisory scope to provide new directions or definitions that were not previously laid down under binding instruments like regulations or circulars.
The DPA has been provided specific powers to issue regulations, codes of practice, and directions. However, the rationale for issuing one instead of the other has been absent from the PDP Bill so far. In such a scenario, it is important that the DPA transparently outlines the types of instruments it wishes to use, whether they are binding or advisory, and the procedure to be followed for issuing each.
Pre-legislative consultative rule-making
Participatory and consultative processes have emerged as core components of democratic rule-making by regulators. Transparent consultative mechanisms could also ameliorate capacity challenges in a new regulator (particularly for technical matters) and help enhance public confidence in the regulator.
In India, several regulators have adopted consultation mechanisms even when there is no specific statutory requirement. SEBI and IBBI routinely issue discussion papers and consultation papers. The RBI also issues draft instruments soliciting comments. As discussed previously, TRAI and AERA have distinct transparency mandates under which they carry out consultations before issuing regulations. However, these processes are not mandated for all forms of subordinate legislation. Taking cognizance of this, the Financial Sector Legislative Reforms Commission (FSLRC) recommended transparency in the regulation-making process. This was carried forward by the Financial Stability and Development Council (FSDC), which recommended that consultation processes should be a prerequisite for all subordinate legislation, including circulars, guidelines, etc. A study on regulators’ adherence to these mandates, spanning TRAI, AERA, SEBI, and RBI, demonstrated that this pre-consultation mandate is followed inconsistently, if at all. Predictable consultation practices are therefore critical.
Furthermore, the study stated that it could not determine whether the consultation processes yielded meaningful participation, given that regulators are not obligated to disclose how public feedback was integrated into the rule-making process. Subordinate legislations issued in the form of circulars and guidelines also do not typically undergo the same rigorous consultation processes. Thus, an ideal consultation framework would comprise:
- Publication of the draft subordinate legislation along with a detailed explanation of the policy objectives. Further, the regulator should publish the internal or external studies conducted to arrive at the proposed legislation to engender meaningful discussion.
- Permitting sufficient time for the public and interested stakeholders to respond to the draft.
- Publishing all feedback received for the public to assess, and allowing them to respond to the feedback.
However, beyond specifying the manner of conducting consultations, it will be important for the DPA to determine when consultations are mandatory and binding, and for which types of subordinate legislation. These are discussed in the next section.
Choice of consultation mandates for distinct regulatory instruments
While the Bill provides for consultation processes for issuing and approving codes of practice, no such mechanism has been set out for other instruments. Nevertheless, specifying consultation mandates for different regulatory instruments is important to ensure that decision-making is consistent and regulation-making remains bound by transparent and accountable processes. As discussed above, regulatory instruments such as circulars and FAQs are not necessarily bound by the same consultation mandates in India. This distinction has been clarified in more sophisticated administrative law frameworks abroad. For instance, under the Administrative Procedure Act in the United States (US), all substantive rules made by regulatory agencies are bound by a consultation process, which requires notice of the proposed rule-making and public feedback. This does not preclude a regulatory agency from issuing clarifications, guidelines, and supplemental information on the rules issued. These documents do not require the consultation process otherwise required for formal rules. However, they cannot be used to expand the scope of the rules, set new legal standards, or have the effect of amending the rules. Nevertheless, agencies are not precluded from choosing to seek public feedback on such documents.
Similarly, the Information Commissioner’s Office in the United Kingdom (UK) takes into consideration public consultations and surveys while issuing toolkits and guidance for regulated entities on how to comply with the data protection framework in the UK.
Here, the DPA may choose to subject strictly binding instruments like regulations and codes of practice to pre-legislative consultation mandates, while softer mechanisms like FAQs may be subject to the publication of a detailed outline of the policy objective or online surveys to invite non-binding, advisory feedback. For each of these, the DPA will nonetheless need to create specific criteria by which it classifies instruments as binding and advisory, and further outline specific pre-legislative mandates for each category.
Framework for issuing regulatory instruments and instructions
While the DPA is likely to issue several instruments, the system based on which these instruments will be issued is not yet clear. Without a clearly thought-out framework, different departments within the regulator typically issue a series of directions, circulars, regulations, and other instruments. This raises questions regarding the consistency between instruments. This also requires stakeholders to go through multiple instruments to find the position of law on a given issue. Older Indian regulators are now facing challenges in adapting their ad hoc system into a framework. For example, the RBI currently issues a series of circulars and guidelines that are periodically consolidated on a subject-matter basis as Master Circulars and Master Directions. These are then updated and published on their website. IBBI also publishes handbooks and information brochures that consolidate instruments in an accessible manner.
While these are useful improvements, these practices cannot keep pace with rapid changes in regulatory instructions and are not complete or user-friendly (for example, the subject-matter based consolidation does not allow for filtering regulatory instructions by entity). Other jurisdictions have developed different techniques such as formal codification processes to consolidate regulations issued by government agencies under one unified code, register, or handbook, websites that allow for searches based on different parameters (subject-matter, type of instrument, chronology, entity-based), and guides tailored to different types of entities. The DPA, as a new regulator, can learn from this experience and adopt a consistent framework right from the beginning.
Further, an ethos of responsive regulation also requires the DPA to evaluate and revise directions and regulations periodically, in response to market and technology trends. A commitment to periodic evaluation of subordinate legislations entrenched in the rules is critical to reducing the dependence on officials and leadership, which may change. For instance, the IBBI has set out a mandatory review of regulations issued by it every three years.
Dedicating capacity for drafting subordinate legislations
The DPA has been granted the discretion to appoint experts and staff its offices with the personnel it needs. A study of European data protection authorities shows that by the time the General Data Protection Regulation, 2016 became effective, most of the authorities increased the number of employees with some even reporting a 240% increase. The annual spending on the authorities also went up for most countries. While these authorities do not necessarily frame subordinate legislations, they nonetheless create guidance toolkits and codes of practice as part of their supervisory functions.
In this regard, the DPA will need to ensure it has dedicated capacity in-house to draft subordinate legislations. Since regulators are generally seen as enforcement authorities, there is inadequate investment in capacity-building for drafting legislations in India.
Moreover, considering the multiplicity of instruments and guidance documents the DPA is expected to issue, it may seek to create templates for these instruments, along with compulsory constituents of different types of instruments. For instance, the Office of the Australian Information Commissioner is required to include a mandatory set of components while issuing or approving binding industry codes of practice.
Conclusion
The Personal Data Protection Bill, 2019 (in the final form recommended by the JPC and accepted by the MeitY) will usher in a new chapter in India’s data protection timeline. While the Bill will finally effectuate a nearly comprehensive data protection framework for India, it will also establish a new regulatory framework that sets up a new regulator, the DPA, to oversee the new data protection law. This DPA will be empowered to regulate entities across sectors and is likely to determine the success of the data protection law in India.
Furthermore, the DPA must not only contend with the complexity of markets and the fast pace of technological change, but it must also address anticipated regulatory capacity deficits, low levels of user literacy, the number and diversity of entities within its regulatory ambit, and the need to secure individual privacy within and outside the digital realm.
Thus, looking ahead, we must account for the questions of governance that the forthcoming DPA is likely to face, as these will directly impact how entities and citizens engage with the DPA. In India, regulatory agencies adopt distinct choices to fulfil their functions. Regulators have also fared variably in ensuring transparent and accountable decision-making driven by demonstrable expertise. Even if the final form of the PDP Bill does not address these gaps, the DPA has the opportunity to integrate benchmarks and best practices as discussed above within its own governance framework from the get-go as it takes on its daunting responsibilities under the PDP Bill.
(The authors are Research Fellow, Law, Technology and Society Initiative and Project Lead, Regulatory Governance Project respectively at the National Law School of India University, Bangalore. Views are personal.)
This post was reviewed by Vipul Kharbanda and Shweta Mohandas
References
- For a discussion on distinct regulatory choices, please see TV Somanathan, The Administrative and Regulatory State in Sujit Choudhary, Madhav Khosla, et al. (eds), Oxford Handbook of the Indian Constitution (2016).
- On best practices for consultative law-making, see generally European Union Better Regulation Communication, Guidelines for Effective Regulatory Consultations (Canada), and OECD Best Practice Principles for Regulatory Policy: The Governance of Regulators, 2014.
[1] Personal Data Protection Bill 2019, § 50(3).
[2] Personal Data Protection Bill 2019, § 50(4).
[3] Personal Data Protection Bill 2019, § 51.
Launching CIS’s Flagship Report on Private Crypto-Assets
This event will serve as a venue to bring together the various stakeholders involved in the crypto-asset space to discuss the state of crypto-asset regulation in India from a multitude of perspectives.
About the private crypto-assets report
The first output under this agenda is our report on regulating private cryptocurrencies in India. This report aims to act as an introductory resource for policymakers who are looking to implement a regulatory framework for private crypto-assets. The report covers the technical elements of crypto-assets, their history, and proposed use cases, as well as their benefits and limitations. It also examines how crypto-assets fit within India’s current regulatory and legislative frameworks and makes clear recommendations for the same.
About the Event
The launch event will feature an initial presentation by researchers at CIS on the various findings and recommendations of its flagship report. This will be followed by a moderated discussion with 5 panelists who represent the space in policy, academia and industry. The discussion will be centered around the current status of crypto-assets in India, the government’s new proposed regulations and what the future holds for the Indian crypto market.
The confirmed panelists are as follows:
- Tanvi Ratna - Founder, Policy 4.0 and expert on blockchain and cryptocurrencies
- Shehnaz Ahmed - Senior Resident Fellow and Fintech Lead at Vidhi Centre for Legal Policy
- Nithya R. - Chief Executive Officer, Unos.Finance
- Prashanth Irudayaraj - Head of R&D, Zebpay
- Vipul Kharbanda - Non-resident Fellow specialising in Fintech at CIS
- Aman Nair - Policy Officer, CIS (Moderator)
Registration link: https://us06web.zoom.us/webinar/register/WN_TdY-EPLoRvGY2rfsq4CENw
Agenda
17.30 - 17.35 | Welcome Note
17.35 - 18.35 | The status of private crypto assets in India
18.35 - 19.00 | Audience questions and discussion
Report on Regulation of Private Crypto-assets in India
Link to Annex 1: Excerpts from the public consultation comments received from Ripple
EXECUTIVE SUMMARY
As of May 2021, the crypto-asset market in India stood at USD 6.6 billion. With no signs of slowing down, crypto-assets have become an undeniable part of both Indian and global financial markets. In the face of this rapid growth, policymakers are faced with the critical task of developing a regulatory framework to govern private crypto-assets.
This report is an introductory resource for those who are looking to engage with the development of such a framework. It first provides an overview of the technical underpinnings of crypto-assets, their history, and their proposed use cases. It then examines how they fit within India’s current legislative and regulatory framework before the introduction of a dedicated crypto-asset law and how the government and its institutions have viewed crypto-assets so far. We present arguments for and against the adoption of private crypto-assets and compare the experiences of 11 other countries and jurisdictions. Finally, we offer specific and actionable recommendations to help policymakers develop a cohesive regulatory framework.
What are crypto-assets?
At their core, cryptocurrencies (CCs) or virtual currencies (VCs) are virtual monetary systems consisting of intangible ‘coins’ that use blockchain technology and serve a multitude of functions. While the word ‘cryptocurrency’ is often used as an umbrella term to describe various assets within the crypto-market, we note that these assets do not all share the same characteristics and often serve different functions. Therefore, for the purposes of this report, we use the term ‘crypto-assets’ rather than ‘cryptocurrencies’ when discussing the broad range of technologies within the crypto-marketplace.
Crypto-assets utilize a distributed ledger technology (DLT) known as blockchain technology. A blockchain is a complete ledger of all recorded transactions, which is created by combining individual blocks, each of which stores some information and is secured by a hash. Blockchain, by the very nature of its architecture, can be used to ensure decentralisation, authenticity, persistence, anonymity, and auditability.
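To make the chaining idea concrete, here is a minimal, illustrative Python sketch (not the code of any actual crypto-asset) showing how each block stores the hash of the previous block, so that tampering with any earlier record invalidates everything that follows:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Block:
    index: int       # position of the block in the chain
    data: str        # payload stored in the block (e.g. transaction records)
    prev_hash: str   # hash of the preceding block, which links the chain

    def hash(self) -> str:
        # Hash the block's contents; changing any field changes this digest.
        payload = json.dumps({"index": self.index, "data": self.data,
                              "prev_hash": self.prev_hash}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list[str]) -> list[Block]:
    chain = [Block(0, "genesis", "0" * 64)]  # genesis block
    for i, record in enumerate(records, start=1):
        chain.append(Block(i, record, chain[-1].hash()))
    return chain

def is_valid(chain: list[Block]) -> bool:
    # The chain is consistent only if every block still points to the
    # current hash of its predecessor; altering any block breaks this.
    return all(chain[i].prev_hash == chain[i - 1].hash()
               for i in range(1, len(chain)))

chain = build_chain(["A pays B 1 coin", "B pays C 0.5 coin"])
print(is_valid(chain))   # True
chain[1].data = "A pays B 100 coins"
print(is_valid(chain))   # False: the altered block no longer matches
```

Real blockchains add consensus rules (such as proof-of-work) on top of this basic structure so that no single party controls which blocks are appended to the ledger.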
History and proposed uses of crypto-assets
While other forms of crypto-assets have been proposed in the past, the modern conception of one can be traced to a research paper published under the pseudonym, Satoshi Nakamoto, which first proposed the idea of bitcoin. Bitcoin, as it was presented, seemingly solved the ‘double spending’ problem by utilising a form of DLT known as blockchain. Bitcoin, which was first operationalised on 3 January 2009, has since become the dominant crypto-asset globally – trading at over USD 57,000 per bitcoin.
Following the popularity of bitcoin, several alternatives (known as alt coins) were launched, the most popular of which is ethereum. According to CoinMarketCap, as of April 2021, there are over 9,500 traded cryptocurrencies in existence, with a total market capitalisation of over USD 2 trillion. The rise of bitcoin and other crypto-assets also led to the emergence of crypto-exchanges such as Binance. These exchanges act as platforms for users to buy, sell, and trade crypto-assets.
Many potential use cases for crypto-assets have been identified, including:
- A method of payment
- A tradeable asset
- Initial coin offerings
- Crypto-asset funds and derivatives
- Crypto-asset-related services
Legal frameworks and private crypto-assets in India
While crypto-assets are also referred to as virtual currencies and cryptocurrencies, they do not currently satisfy the legal requirements to be considered as currency under Indian law. Although they have not yet been classified as a financial instrument, it is possible, through executive action, to include them within the definition of any of the following instruments: currency, foreign currency, derivative, collective investment scheme, or payment system. Such a move would give the government a legal basis to regulate the hitherto unregulated crypto-asset market, thereby bringing about much-needed stability and minimising the risk of fraudulent practices.
Understanding the case for private crypto-assets
This report examines both the benefits and limitations of crypto-assets across a number of their use cases.
- Benefits of crypto-assets as a currency and asset:
  - Decentralised and verifiable transactions
  - Reduced transaction costs
  - Confidentiality
  - Security
  - Easier cross-border transactions
  - A potential tool for financial inclusion
  - As a tool for verifying asset ownership
- Limitations of crypto-assets as a currency and asset:
  - High environmental costs
  - Replaces traditional transaction costs with new costs
  - A few actors dominate mining
  - Cannot replace traditional money
  - Introduces challenges in implementing monetary policies
  - Lack of network externalities
  - The limited actual impact on financial inclusion
  - Use for illegal activities
  - Prone to schemes and scams
International Perspectives
In order to draw inferences and lessons from a multitude of perspectives, we examined the regulatory frameworks governing private crypto-assets in the following jurisdictions:
- European Union
- El Salvador
- United States
- United Kingdom
- Japan
- Venezuela
- South Africa
- Singapore
- Indonesia
- Switzerland
- China
Recommendations
Keeping in mind the benefits and limitations, as well as the experiences of countries around the world, we recommend the following measures to develop an appropriate regulatory framework in India. We have divided our recommendations into two types: immediate or short-term measures and long-term measures.
- Immediate/Short-Term Measures
  - Steering clear of banning private crypto-assets: Earlier, regulatory bodies made calls to ban private crypto-assets, but this resulted in crypto-assets being pushed into the unregulated black market, thereby stifling potential innovation. To that end, we recommend avoiding a ban and adopting a regulatory approach instead.
  - Using regulatory bodies' ad hoc powers to exercise interim oversight: During the interim period, prior to the adoption of dedicated crypto-asset legislation, crypto-assets could be included under one of the existing financial instrument categories. The regulations governing that category would apply to both cryptocurrency exchanges and vendors who accept payments in cryptocurrencies.
- Long-Term Measures
  - Specific regulatory framework: There needs to be an independent regulatory framework specific to crypto-assets, since their unique features make them unsuitable for regulation through existing frameworks.
  - Identify clear definitions: Policymakers should adopt a definition of crypto-assets that includes entities that have emerged within the crypto space but cannot be classified as ‘currencies’. They must also categorise and define these various entities as well as crypto-asset service providers.
  - Limit the scope of regulations to crypto-assets rather than their underlying technologies: Any proposed regulation must differentiate between the assets themselves and the technology underlying them. This would ensure that crypto-assets are not defined by the technology they currently use (i.e., DLT and blockchain) but by the purpose they serve.
  - Introduce a licensing and registration system: A licensing system, similar to those adopted in other jurisdictions such as the EU or New York, can be adopted to ensure that the state is able to effectively monitor crypto-related activities.
  - Make provisions for handling environmental concerns: A dedicated taxation programme and strict limitations on mining can minimise the environmental costs associated with crypto-assets.
  - Consumer protection measures: Any potential licensing system must include mandatory obligations for crypto-asset service providers that ensure that consumer rights are protected.
  - Limit the impact of crypto-asset volatility on the wider financial market: Governments must take measures to ensure that the volatility of crypto-markets does not have a significant knock-on effect on the wider financial market. Such steps can include limiting financial institutions' holdings and dealings in crypto-assets.
  - Extend Anti-Money Laundering/Counter Financing of Terrorism norms and exchange control regulations: Given the anonymous nature of crypto-assets and their potential for use in illegal activities, we recommend introducing crypto-specific anti-money laundering, counter-terror financing, and foreign exchange management rules.
  - Create an oversight body: Subject to the availability of resources, the government might consider establishing a dedicated body to oversee and research changes in the crypto-marketplace and make appropriate suggestions to the concerned regulatory authorities.
  - Taxation: The existing uncertainty about the correct tax provisions to be applied to various crypto-asset transactions needs to be resolved through specific amendments to the tax provisions.
  - Stablecoin-specific regulation: Given the specific position occupied by stablecoins, and the unique role they perform in the crypto-ecosystem, any legislation that seeks to regulate private crypto-assets must focus heavily on them. To that end, policymakers should pay special attention to identifying the various entities associated with stablecoins, applying greater regulatory scrutiny to those entities, and taking steps to limit the risk that stablecoins pose to the wider financial system.
Online caste-hate speech: Pervasive discrimination and humiliation on social media
Download the research report, which includes a preface authored by Murali Shanmugavelan.
Executive summary
In India, religious texts, social customs, rituals, and everyday cultural practices legitimise the use of hate speech against marginalised caste groups. Notions of ‘purity’ of “upper-caste” groups, and conversely of ‘pollution’ of “lower-caste” groups, have made the latter subject to discrimination, violence, and dehumanisation. These dynamics invariably manifest online, with social media platforms becoming sites of caste discrimination and humiliation.
This report explores two research questions. First, what are the specific contours of caste-hate speech and abuse online? Semi-structured interviews with 12 scholars and activists belonging to DBA groups show that marginalised groups regularly face hate and harassment based on their caste. In addition to the overt hate, DBA individuals and groups are often targeted with abuse for availing reservations – a constitutionally mandated right. More covert forms of hate and abuse are also prevalent: trolls mix caste names and words from different languages together so that their comments appear meaningless to individuals who are not keenly aware of the local context.
Such hateful expression often emerges as a reaction from “upper-caste” groups to DBA resistance and social justice movements. Our respondents reported that the hateful expression can sometimes silence caste-marginalised groups and individuals, exclude them from conversations, and adversely impact their physical and mental wellbeing.
The second question we explore is how popular social media platforms and online spaces moderate caste-hate speech and abuse. We analysed the community guidelines, policies, and transparency reports of Facebook, Twitter, YouTube, and Clubhouse. We find that Facebook, Twitter, and YouTube incorporated ‘caste’ as a protected characteristic in their hate speech and harassment policies only in the last two or three years – many years after they entered Indian and South Asian markets – showing a disregard for the regional contexts of their users. Even after these policy changes, many platforms – whose forms for reporting harmful content list gender and race – still do not list caste.
Social media companies should radically increase their investment and capacity in understanding regional contexts and languages; they must focus on the dynamics of casteist hate and abuse. They will need to collaborate with a diverse set of DBA activists to ensure that their community guidelines effectively tackle overt, covert, and hyperlocal forms of caste-hate speech and abuse, and that their implementation and reporting processes match these policy commitments.
Download the research report, authored by Damni Kain, Shivangi Narayan, Torsha Sarkar and Gurshabad Grover, with a preface authored by Murali Shanmugavelan (Faculty Fellow – Race and Technology, Data and Society).
Call for respondents: the implementation of government-ordered censorship
Call for respondents
To study the implementation of online censorship and the experience of content creators, the Centre for Internet and Society is conducting interviews with people whose content has been affected by blocking orders from the Indian Government. We aim to empirically record the extent of government notice and opportunity for hearing made available to content creators.
If you, or someone you know, has had their content blocked or withheld by a blocking order, please reach out to us via email (divyansha[at]cis-india.org) or DM us on Twitter.
The types of content covered include (but are not limited to):
- blocking or withholding access of posts or accounts on social media
- blocking or withholding access of websites by ISPs
- search results that have been delisted by blocking orders
Please read below for a brief legal background on the powers of the Central Government to issue content takedown orders. If you have any concerns about the nature of attribution of your responses, please reach out: we are confident we will be able to find a solution that works for you.
Background
Online censorship in India is increasing at an alarming rate, with the Government of India ordering around 10,000 webpages and social media accounts to be blocked in 2020 alone. The legal powers and procedures that enable such censorship thus deserve closer scrutiny. In particular, Section 69A of the Information Technology (IT) Act permits the Central Government to ask intermediaries (ranging from internet service providers to social media platforms) to block certain content for their users. Among other grounds, these powers can be used by the government in the interest of Indian sovereignty, national security, and public order.
The regulations (‘blocking rules’) issued under the Act lay down the procedure for the government to exercise such powers, and have long been criticised for enabling an opaque regime of online censorship. Such orders are passed by a committee comprising only government officials. There is no judicial or parliamentary oversight over such orders. The government does in certain instances have an obligation to find the content creator to give them a notice or hearing, but this has rarely been implemented.
To exacerbate this unaccountable form of censorship, there is a rule mandating the confidentiality of content takedown orders. This means that these orders are not public, severely impeding the ability to challenge broad censorship in courts. There are also cases where even individuals who created the affected content were not able to access the orders! Journalists, civil society organisations and activists are also hindered from probing how widespread India’s online censorship is, since the Government routinely rejects Right to Information (RTI) requests about these orders based on the confidentiality provision or national security grounds.
When this censorship regime was challenged in Shreya Singhal v. Union of India, the Supreme Court stated that the procedural safeguards were adequate, but that such content takedown orders must always be open to challenge in court. Specifically, multiple legal scholars have read the judgment to mean that a pre-decisional hearing must be afforded to the affected content creators.
Our forthcoming research project (described above) seeks to empirically investigate whether the Central Government is following this obligation.
What does the 2022 Finance Bill mean for crypto-assets in India?
The recent budget speech saw the Finance Minister propose a slew of measures that seek to clarify the taxation regime with regards to crypto-assets in India. The speech, and the proposed measures, have led to significant discussion and debate within the domestic crypto-ecosystem as questions continue to be raised about the ambiguous legality of crypto-assets in the absence of any dedicated crypto legislation. In the face of this uncertainty, this blog post looks to contextualise the proposals put forth by the Finance Minister in her speech and clarify what they mean for crypto-asset regulation and use in India.
Crypto-assets defined as a virtual digital asset and taxed at 30%
The 2022 Finance Bill introduces the definition of a ‘virtual digital asset’ as an amendment to the Income Tax Act, 1961. The government defines a virtual digital asset as:
- Any information or code or number or token (not being Indian currency or foreign currency), generated through cryptographic means or otherwise, by whatever name called, providing a digital representation of value exchanged with or without consideration, with the promise or representation of having inherent value, or functions as a store of value or a unit of account including its use in any financial transaction or investment, but not limited to investment scheme; and can be transferred, stored or traded electronically;
- A non-fungible token or any other token of similar nature, by whatever name called;
- Any other digital asset, as the Central Government may, by notification in the Official Gazette, specify.
Furthermore, the bill also introduces section 115BBH to the Income Tax Act, under which income or profits generated from the transfer of ‘virtual digital assets’ will be taxed at the rate of 30%. The Finance Minister further clarified that no expenses incurred in carrying out such trades can be set off or deducted from the profits generated, other than the amount spent on acquiring the crypto-asset in the first place. Further, losses incurred from crypto-asset trading cannot be carried forward to subsequent financial years.
While this clarification of the provisions relating to crypto-assets under the Income Tax Act, 1961 drew much attention for its potential impact, it is important to note that the measure is far from a departure from the government’s pre-existing stance. In responses to parliamentary questions on 30 November 2021 and 23 March 2021, the Minister of Finance repeatedly stressed that profits arising out of crypto trading are liable to tax under Indian tax law.
The budget speech merely clarified the provisions under which profits from crypto trading shall be taxed. Prior to this, there had been a fair amount of debate as to whether profits from crypto trading would be treated as part of regular income, as income from other sources, or taxed as capital gains. This categorisation was critical as it determined the rate of tax applicable to crypto profits. With the proposed section 115BBH, the government has now made clear how these profits are to be taxed.
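As a rough, purely illustrative sketch of the arithmetic (hypothetical figures, not tax advice), the proposed flat rate works out as follows:

```python
# Illustrative arithmetic only; figures are hypothetical and this is not tax advice.
SECTION_115BBH_RATE = 0.30  # flat 30% on income from the transfer of virtual digital assets

def tax_on_vda_transfer(sale_price: float, cost_of_acquisition: float,
                        other_expenses: float) -> float:
    # Only the cost of acquisition is deductible; other expenses
    # (exchange fees, electricity, etc.) cannot be set off.
    gain = sale_price - cost_of_acquisition
    if gain <= 0:
        # Losses cannot be set off against other income or carried forward.
        return 0.0
    return gain * SECTION_115BBH_RATE

# Hypothetical trade: bought at 1,00,000 INR, sold at 1,50,000 INR, paid 2,000 INR in fees.
print(tax_on_vda_transfer(150_000, 100_000, 2_000))  # 15000.0 (the fees are not deductible)
```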
Introduction of TDS on crypto-asset transactions and transfers
Another provision that this budget has proposed is the introduction of a 1% TDS (Tax Deducted at Source) on any transfer of a crypto-asset, provided that the other conditions relating to aggregate sales specified in the proposed section 194-S are satisfied. It must be noted that this TDS is payable not only on cash transfers, but also on trades where one cryptocurrency is traded for another. Thus, trades where Bitcoin is bought using Tether would also be liable to such TDS deduction. Interestingly, the way the provision is currently drafted, if any person accepts payment for any goods or services in cryptocurrency, then such a person would be liable to pay TDS at 1%. This is because the Income Tax Act treats the cryptocurrency as the asset being bought or sold and treats the good or service being provided by the “seller” as the consideration. Thus, instead of being looked at as a transaction where one person is paying for something using cryptocurrency, it is looked at as a transaction where the other person is buying the cryptocurrency and paying for it in kind (through the goods or services of the “seller”).
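A simplified, illustrative sketch of the deduction (hypothetical figures; the aggregate-value thresholds in the proposed section 194-S are ignored here) makes the point that the 1% applies to the value of whatever is exchanged for the crypto-asset:

```python
# Illustrative arithmetic only; a simplified reading of the proposed section 194-S.
# Figures and the helper name are hypothetical; aggregate-sales thresholds are ignored.
TDS_RATE = 0.01

def tds_on_vda_transfer(consideration_value_inr: float) -> float:
    # 1% of whatever is given in exchange for the crypto-asset, whether that
    # consideration is rupees, another crypto-asset, or goods/services.
    return consideration_value_inr * TDS_RATE

print(tds_on_vda_transfer(200_000))  # Rupee purchase of bitcoin worth 2,00,000 INR -> 2,000 INR TDS
print(tds_on_vda_transfer(200_000))  # Bitcoin bought with Tether of the same value -> the same 2,000 INR TDS
print(tds_on_vda_transfer(50_000))   # Goods worth 50,000 INR accepted as payment in crypto -> 500 INR TDS
```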
Questions of enforcement still remain
While these measures do bring a certain level of clarity and stability in the taxation regime with regard to crypto-assets, one still needs to grapple with the issue of their implementation. News reports suggest that about 15-20 percent of the investors in crypto assets are in the 18-20 year age group. A number of such investors do not file tax returns since they are mainly students investing their extra savings or “pocket money” to make a quick profit. Ensuring that this demographic actually follows the letter of the law may be a challenge for the revenue authorities and it would be interesting to see how they overcome it.
Submission to the Facebook Oversight Board: Policy on Cross-checks
Is a cross-check system needed?
Recommendation for the Board: The Board should investigate the cross-check system as part of Meta’s larger problems with algorithmically amplified speech, and how such speech gets moderated.
Explanation: The issues surrounding Meta’s cross-check system are not an isolated phenomenon, but rather a reflection of the problems of algorithmically amplified speech, as well as the lack of transparency in the company’s content moderation processes at large. At the outset, it must be stated that the majority of information on the cross-check system only became available after the media reports published by the Wall Street Journal. While these reports have been extensive in documenting various aspects of the system, there is no guarantee that the disclosures they obtained provide a complete picture of the system. Further, given that Meta has been found to purposely mislead the Board and the public on how the cross-check system operates, it is worth investigating the incentives that necessitate the cross-check system in the first place.
Meta claims that the cross-check system works as a check against false positives: they “employ additional reviews for high-visibility content that may violate our policies.” Essentially, Meta wants to make sure that content that stays up on the platform and reaches a large audience follows its content guidelines. However, previous disclosures have shown that policy executives have prioritized the company’s ‘business interests’ over removing content that violates its policies, and have waited to act on known problematic content until significant external pressure had built up, including in India. In this context, the cross-check system seems less like a measure designed to protect users who might be exposed to problematic content, and more like a measure for managing public perception of the company.
Thus the Board should investigate both how content gains an audience on the platform, and how it gets moderated. Previous whistleblower disclosures have shown that the mechanics of algorithmically amplified speech, which prioritizes engagement and growth over safety, are easily taken advantage of by bad actors to promote their viewpoints through artificially induced virality. The cross-check system and other measures of content moderation at scale would not be needed if it was harder to spread problematic content on the platform in the first place. Instead of focusing only on one specific system, the Board needs to urge Meta to re-evaluate the incentives that drive content sharing on the platform and come up with ways that make the platform safer.
Meta’s Obligations under Human Rights Law
Recommendation for the Board: The Board must consider the cross-check system to be violative of Meta’s obligations under the International Covenant on Civil and Political Rights (ICCPR). Additionally, the cross-check ranker must incorporate Meta’s commitments towards human rights, as outlined in its Corporate Human Rights Policy.
Explanation: Meta’s content moderation, and by extension its cross-check system, is bound by both international human rights law and the Board’s past decisions. At the outset, the system fails the three-pronged test of legality, legitimacy, and necessity and proportionality delineated under Article 19(3) of the International Covenant on Civil and Political Rights (ICCPR). Firstly, the system has been “scattered throughout the company, without clear governance or ownership”, which violates the legality principle, since there is no clear guidance on what sort of speech, or which classes of users, would receive the treatment of this system. Secondly, there is no understanding of the legitimacy of the aims with which the system was set up in the first place, beyond Meta’s own assertions, which have been countered by evidence to the contrary. Thirdly, necessity and proportionality have to be read along with the Rabat Plan of Action, which requires a six-pronged threshold test to be applied before a statement can be treated as a criminal offence: a) the social and political context, b) the speaker’s position or status in society, c) intent to incite the audience against a target group, d) the content and form of the speech, e) the extent of its dissemination, and f) the likelihood of harm. As news reports have indicated, Meta has been using the cross-check system to privilege speech from influential users, and in the process has shielded inflammatory, inciting speech that would otherwise have met the Rabat threshold. As such, the third requirement is not fulfilled either.
Additionally, Meta’s own Corporate Human Rights Policy commits to respecting human rights in line with the UN Guiding Principles on Business and Human Rights (UNGPs). Therefore, the cross-check ranker must incorporate these existing commitments to human rights, including:
- The right to freedom of expression: UN Special Rapporteur on freedom of opinion and expression report A/HRC/38/35 (2018); Joint Statement of international freedom of expression monitors on COVID-19 (March 2020).
The Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression addresses the regulation of user-generated online content.
The Joint Statement issued regarding Governmental promotion and protection of access to and free flow of information during the pandemic.
- The right to non-discrimination: International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), Articles 1 and 4.
Article 1 of the ICERD defines racial discrimination.
Article 4 of the ICERD condemns propaganda and organisations that attempt to justify discrimination or are based on the idea of racial supremacism.
- Participation in public affairs and the right to vote: ICCPR Article 25.
- The right to remedy: General Comment No. 31, Human Rights Committee (2004) (General Comment 31); UNGPs, Principle 22.
The General Comment discusses the nature of the general legal obligation imposed on State Parties to the Covenant.
Guiding Principle 22 states that where business enterprises identify that they have caused or contributed to adverse impacts, they should provide for or cooperate in their remediation through legitimate processes.
Meta’s obligations to avoid political bias and false positives in its cross-check system
Recommendation for the Board: The Board must urge Meta to adopt and implement the Santa Clara Principles on Transparency and Accountability to ensure that it is open about risks to user rights when there is involvement from the State in content moderation. Additionally, the Board must ask Meta to undertake a diversity and human rights audit of its existing policy teams, and commit to regular cultural training for its staff. Finally, the Board must investigate the potential conflicts of interest that arise when Meta’s policy team has any sort of nexus with political parties, and how that might impact content moderation.
Explanation: For the cross-check system to be free from biases, it is important for Meta to be transparent with the Board regarding the rationale, standards, and processes of the cross-check review, and to report on the relative error rates of determinations made through cross-check compared with ordinary enforcement procedures. It also needs to disclose to the Board the particular situations in which it uses the system and those in which it does not. Principle 4 under the Foundational Principles of the Santa Clara Principles on Transparency and Accountability in Content Moderation encourages companies to recognise the risk to user rights when the State is involved in content moderation, and asks companies to make users aware that: a) a state actor has requested or participated in an action on their content/account, and b) the company believes that the action was required under the relevant law. Users should also be given access to any rules or policies, and any formal or informal working relationships that the company holds with state actors in terms of content regulation, the process of flagging accounts/content, and state requests for action.
The Board must consider that an erroneous lack of action (false negatives) might not always be a system flaw, but a larger, structural issue regarding how policy teams at Meta function. As previous disclosures have shown, the contours of what sort of violating content gets to stay up on the platform have been ideologically and politically coloured, as policy executives have prioritized the company’s ‘business interests’ over social harmony. In this light, it is not sufficient to simply propose better transparency and accountability measures for Meta to adopt within its content moderation processes to avoid political bias. Rather, the Board’s recommendations must focus on the structural aspects of the human moderator and policy teams behind these processes. The Board must ask Meta to a) urgently undertake a diversity and human rights audit of its existing team and its hiring processes, and b) commit to regular training to ensure that its policy staff are culturally literate in the socio-political regions they work in. Further, the Board must seriously investigate the potential conflicts of interest that arise when regional policy teams at Meta, with a nexus to political parties, are also tasked with regulating content from representatives of these parties, and how that impacts the moderation processes at large.
Finally, in case decision 2021-001-FB-FBR, the Board made a number of recommendations to Meta which must be implemented in the current situation, including: a) considering the political context while looking at potential risks, b) employing specialized staff in content moderation when evaluating political speech from influential users, c) familiarity with the political and linguistic context, d) absence of any interference and undue influence, e) a public explanation of the rules Meta uses when imposing sanctions against influential users, and f) ensuring that the sanctions are time-bound.
Transparency of the cross-check system
Recommendation for the Board: The Board must urge Meta to adopt and implement the Santa Clara Principles on Transparency and Accountability to increase the transparency of its cross-check system.
Explanation: There are ways in which Meta can increase the transparency of not only the cross-check system, but the content moderation process in general. The following recommendations draw from The Santa Clara Principles and the Board’s own previous decisions:
Considering Principle 2 of the Santa Clara Principles: Understandable Rules and Policies, Meta should ensure that the policies and rules governing moderation of content and user behaviors on Facebook are clear, easily understandable, and available in the languages in which the user operates.
Drawing from Principle 5 on Integrity and Explainability, and from the Board’s recommendation in case decision 2021-001-FB-FBR that Meta “Provide users with accessible information on how many violations, strikes and penalties have been assessed against them, and the consequences that will follow future violations”, Meta should be able to explain its content moderation decisions to users in all cases: when content is under review, when the decision has been made to leave the content up, or when it is taken down. We recommend that Meta keep a publicly accessible running tally of the number of moderation decisions made on a piece of content to date, with their explanations. This would allow third parties (like journalists, activists, researchers and the OSB) to hold Facebook accountable when it does not follow its own policies, as has previously been the case.
In the same case decision, the Board has also previously recommended that Meta “Produce more information to help users understand and evaluate the process and criteria for applying the newsworthiness allowance, including how it applies to influential accounts. The company should also clearly explain the rationale, standards and processes of the cross-check review, and report on the relative error rates of determinations made through cross-checking compared with ordinary enforcement procedures.” Thus, Meta should publicly explain the cross check system in detail with examples, and make public the list of attributes that qualify a piece of content for secondary review.
The Operational Principles further provide actionable steps that Meta can take to improve the transparency of its content moderation systems. Drawing from Principle 2: Notice and Principle 3: Appeals, Meta should make a satisfactory appeals process available to users, whether for decisions to leave up or take down content. The appeals process should be handled by context-aware teams. Meta should then publish the results of the cross-check system and the appeals processes as part of its transparency reports, including data such as total content actioned, the rate of success in appeals and the cross-check process, and decisions overturned and preserved, which would also satisfy the first Operational Principle: Numbers.
Resources needed to improve the system for users and entities who do not post in English
Recommendations for the Board: The Board must urge Meta to urgently invest in resources to expand Meta’s content moderation services into the local contexts in which the company operates and invest in training data for local languages.
Explanation: The cross-check system is not a fundamentally different problem from content moderation. It has been shown time and again that Meta’s handling of content from non-Western, non-English-language contexts is severely lacking. Content hosted on the platform has been used to inflame existing tensions in developing countries, promote religious hatred in India, and fuel genocide in Myanmar, while human traffickers and drug cartels have continued to operate on the platform even after these issues were identified.
There is an urgent need to invest resources to expand Meta’s content moderation services into the local contexts in which the company operates. The company should make all policy and rule documents available in the languages of its users; invest in creating automated tools capable of flagging content that is not posted in English; and add people familiar with the local contexts to provide context-aware second-level reviews. The Facebook Files show that, even according to the company’s own engineers, automated content moderation is still not very effective in identifying hate speech and other harmful content. Meta should focus on hiring, training, and retaining human moderators who have knowledge of local contexts. Bias training for all content moderators, but especially those who will participate in second-level reviews in the cross-check system, is also extremely important to ensure acceptable decisions.
Additionally, in keeping with Meta’s human rights commitments, the company should develop and publish a policy for responding to human rights violations when they are pointed out by activists, researchers, journalists and employees as a matter of due process. It should not wait for a negative news cycle to stir them into action as it seems to have done in previous cases.
Benefits and limitations of automated technologies
Meta recently changed its moderation practice to use technology to prioritize content for human reviewers based on a severity index. Facebook has not specified the technology it uses to prioritize high-severity content, but its research record shows that it uses a host of automated frameworks and tools to detect violating content, including image recognition tools, object detection tools, natural language processing models, speech models, and reasoning models. One such model is Whole Post Integrity Embeddings (“WPIE”), which can assess the various elements of a given post (caption, comments, OCR, image, etc.) to work out the context and content of the post. Facebook also uses image-matching models (SimSearchNet++) trained to match variations of an image with a high degree of precision and improved recall, and multilingual masked language models for cross-lingual understanding, such as XLM-R, that can identify hate speech and other policy-violating content across a wide range of languages. More recently, Facebook introduced its machine translation model, M2M-100, which can translate directly between any pair of 100 languages without relying on English as an intermediate.
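Meta has not published how this severity index is computed, so the following is only a generic, hypothetical sketch of what severity-based prioritisation for human review could look like (all scores, weights, and field names are invented for illustration):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedPost:
    # heapq is a min-heap, so we store the negated severity to pop the most severe item first.
    sort_key: float = field(init=False, repr=False)
    post_id: str = field(compare=False)
    severity: float = field(compare=False)  # hypothetical model score in [0, 1]
    views: int = field(compare=False)       # hypothetical reach of the post

    def __post_init__(self):
        # Hypothetical weighting: model severity scaled up by the post's reach.
        self.sort_key = -(self.severity * (1 + self.views / 10_000))

queue: list[FlaggedPost] = []
for post in [FlaggedPost("a", 0.40, 50_000),
             FlaggedPost("b", 0.90, 1_000),
             FlaggedPost("c", 0.85, 20_000)]:
    heapq.heappush(queue, post)

while queue:
    nxt = heapq.heappop(queue)  # human reviewers see the highest-priority item first
    print(nxt.post_id, round(-nxt.sort_key, 2))  # prints c, a, b in descending priority
```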
Despite advances in this field, such automated tools have inherent limitations. Experts have repeatedly maintained that while AI will get better at understanding context, it will not replace human moderators for the foreseeable future. One instance where these limitations were exposed was during the COVID-19 pandemic, when Facebook sent its human moderators home: the number of removals flagged as hate speech on its platform more than doubled to 22.5 million in the second quarter of 2020, but the number of successful content appeals dropped to 12,600 from 2.3 million in the first three months of 2020.
The Facebook Files show that Meta’s AI cannot consistently identify first-person shooting videos, racist rants and even the difference between cockfighting and car crashes. Its automated systems are only capable of removing posts that generate just 3% to 5% of the views of hate speech on the platform and 0.6% of all content that violates Meta’s policies against violence and incitement. As such, it is difficult to accept the company’s claim that nearly all of the hate speech it takes down was discovered by AI before it was reported by users.
However, the benefits of such technology cannot be discounted, especially when one considers automated technology as a way of reducing trauma for human moderators. Using AI for prioritizing content for review can turn out to be effective for human moderators as it can increase their efficiency and reduce harmful effects of content moderation on them. Additionally, it can also limit the exposure of harmful content to internet users. Moreover, AI can also reduce the impact of harmful content on human moderators by allocating content to moderators on the basis of their exposure history. Theoretically, if the company’s claims are to be believed, using automated technology for prioritizing content for review can help to improve the mental health of Facebook’s human moderators.
Click to download the file here.
Notes for India as the digital trade juggernaut rolls on
The article by Arindrajit Basu was published in the Hindu on February 8, 2022
Despite the cancellation of the Twelfth Ministerial Conference (MC12) of the World Trade Organization (WTO) late last year (scheduled date, November 30, 2021-December 3, 2021) due to COVID-19, digital trade negotiations continue their ambitious march forward. On December 14, Australia, Japan, and Singapore, co-convenors of the plurilateral Joint Statement Initiative (JSI) on e-commerce, welcomed the ‘substantial progress’ made at the talks over the past three years and stated that they expected a convergence on more issues by the end of 2022.
Holding out
But therein lies the rub: even though JSI members account for over 90% of global trade, and the initiative welcomes newer entrants, over half of WTO members (largely from the developing world) continue to opt out of these negotiations. They fear being arm-twisted into accepting global rules that could etiolate domestic policymaking and economic growth. India and South Africa have led the resistance and been the JSI’s most vocal critics. India has thus far resisted pressures from the developed world to jump onto the JSI bandwagon, largely through coherent legal argumentation against the JSI and a long-term developmental vision. Yet, given the increasingly fragmented global trading landscape and the rising importance of the global digital economy, can India tailor its engagement with the WTO to better accommodate its economic and geopolitical interests?
Global rules on digital trade
The WTO emerged in a largely analogue world in 1994. It was only at the Second Ministerial Conference (1998) that members agreed on core rules for e-commerce regulation. A temporary moratorium was imposed on customs duties relating to the electronic transmission of goods and services. This moratorium has been renewed continuously, to consistent opposition from India and South Africa. They argue that the moratorium imposes significant costs on developing countries as they are unable to benefit from the revenue customs duties would bring.
The members also agreed to set up a work programme on e-commerce across four issue areas at the General Council: goods, services, intellectual property, and development. Frustrated by a lack of progress in the two decades that followed, 70 members brokered the JSI in December 2017 to initiate exploratory work on the trade-related aspects of e-commerce. Several countries, including developing countries, signed up in 2019 despite holding contrary views to most JSI members on key issues. Surprise entrants, China and Indonesia, argued that they sought to shape the rules from within the initiative rather than sitting on the sidelines.
India and South Africa have rightly pointed out that the JSI contravenes the WTO’s consensus-based framework, where every member has a voice and vote regardless of economic standing. Unlike the General Council Work Programme, which India and South Africa have attempted to revitalise in the past year, the JSI does not include all WTO members. For the process to be legally valid, the initiative must either build consensus or negotiate a plurilateral agreement outside the aegis of the WTO.
India and South Africa’s positioning strikes a chord at the heart of the global trading regime: how to balance the sovereign right of states to shape domestic policy with international obligations that would enable them to reap the benefits of a global trading system.
A contested regime
There are several issues upon which the developed and developing worlds disagree. One such issue concerns international rules relating to the free flow of data across borders. Several countries, both within and outside the JSI, have imposed data localisation mandates that compel corporations to store and process data within territorial borders. This is a key policy priority for India. Several payment card companies, including Mastercard and American Express, were prohibited from issuing new cards for failure to comply with a 2018 financial data localisation directive from the Reserve Bank of India. The Joint Parliamentary Committee (JPC) on data protection has recommended stringent localisation measures for sensitive personal data and critical personal data in India’s data protection legislation. However, for nations and industries in the developed world looking to access new digital markets, these restrictions impose unnecessary compliance costs, thus arguably hampering innovation and supposedly amounting to unfair protectionism.
There is a similar disagreement regarding domestic laws that mandate the disclosure of source codes. Developed countries believe that this hampers innovation, whereas developing countries believe it is essential for algorithmic transparency and fairness — which was another key recommendation of the JPC report in December 2021.
India’s choices
India’s global position is reinforced through narrative building by political and industrial leaders alike. Data sovereignty is championed as a means of resisting ‘data colonialism’, the exploitative economic practices and intensive lobbying of Silicon Valley companies. Policymaking for India’s digital economy is at a critical juncture. Surveillance reform, personal data protection, algorithmic governance, and non-personal data regulation must be galvanised through evidence-based insights, and must work for individuals, communities, and aspiring local businesses, not just established larger players.
Hastily signing trading obligations could reduce the space available to frame appropriate policy. But sitting out trade negotiations will mean that the digital trade juggernaut will continue unchecked, through mega-regional trading agreements such as the Regional Comprehensive Economic Partnership (RCEP) and the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP). India could risk becoming an unwitting standard-taker in an already fragmented trading regime and lose out on opportunities to shape these rules instead.
Alternatives exist; negotiations need not mean compromise. For example, exceptions to digital trade rules, such as ‘legitimate public policy objective’ or ‘essential security interests’, could be negotiated to preserve policymaking where needed while still acquiescing to the larger agreement. Further, any outcome need not be an all-or-nothing arrangement. Taking a cue from the Digital Economy Partnership Agreement (DEPA) between Singapore, Chile, and New Zealand, India can push for a framework where countries can pick and choose modules with which they wish to comply. These combinations can be amassed incrementally as emerging economies such as India work through domestic regulations.
Despite its failings, the WTO plays a critical role in global governance and is vital to India’s strategic interests. Negotiating without surrendering domestic policy-making holds the key to India’s digital future.
Arindrajit Basu is Research Lead at the Centre for Internet and Society, India. The views expressed are personal. The author would like to thank The Clean Copy for edits on a draft of this article.
CIS Comments and Recommendations on the Data Protection Bill, 2021
After nearly two years of deliberations and a few changes in its composition, the Joint Parliamentary Committee (JPC), on 17 December 2021, submitted its report on the Personal Data Protection Bill, 2019 (2019 Bill). The report also contains a new version of the law titled the Data Protection Bill, 2021 (2021 Bill). Although there were no major revisions from the previous version other than the inclusion of all data under the ambit of the bill, some provisions were amended.
This document is a revised version of the comments we provided on the 2019 Bill on 20 February 2020, with updates based on the amendments in the 2021 Bill. Through this document we aim to shed light on the issues that we highlighted in our previous comments that have not yet been addressed, along with additional comments on sections that have become more relevant since the pandemic began. In several instances our previous comments have either not been addressed or only partially been addressed; in such instances, we reiterate them.
These general comments should be read in conjunction with our previous recommendations for the reader to get a comprehensive overview of what has changed from the previous version and what has remained the same. This document can also be read while referencing the new Data Protection Bill 2021 and the JPC’s report to understand some of the significant provisions of the bill.
Read on to access the comments | Review and editing by Arindrajit Basu. Copy editing: The Clean Copy; Shared under Creative Commons Attribution 4.0 International license
How Function Of State May Limit Informed Consent: Examining Clause 12 Of The Data Protection Bill
The blog post was published in Medianama on February 18, 2022. This is the first of a two-part series by Amber Sinha.
In 2018, hours after the Committee of Experts led by Justice Srikrishna released its report and draft bill, I wrote an opinion piece providing my quick take on what was good and bad about the bill. A section of my analysis focused on Clause 12 (then Clause 13) which provides for non-consensual processing of personal data for state functions. I called this provision a ‘carte-blanche’ which effectively allowed the state to process a citizen’s data for practically all interactions between them without having to deal with the inconvenience of seeking consent. My former colleague, Pranesh Prakash, pointed out that this was not a correct interpretation of the provision as I had missed the significance of the word ‘necessary’ which was inserted to act as a check on the powers of the state. He also pointed out, correctly, that in its construction, this provision is equivalent to the position in the European General Data Protection Regulation (Article 6(1)(e)), and is perhaps even more restrictive.
While I agree with what Pranesh says above (his claims are largely factual, and there can be no basis for disagreement), my view of Clause 12 has not changed. While Clause 35 has been a focus of considerable discourse and analysis, for good reason, I continue to believe that Clause 12 remains among the most dangerous provisions of this bill, and I will try to unpack here, why.
The Data Protection Bill 2021 has a chapter on the grounds for processing personal data, and one of those grounds is consent by the individual. The rest of the grounds deal with various situations in which personal data can be processed without seeking consent from the individual. Clause 12 lays down one of the grounds. It allows the state to process data without the consent of the individual in the following cases —
a) where it is necessary to respond to a medical emergency
b) where it is necessary for the state to provide a service or benefit to the individual
c) where it is necessary for the state to issue any certification, licence or permit
d) where it is necessary under any central or state legislation, or to comply with a judicial order
e) where it is necessary for any measures during an epidemic, outbreak or other threat to public health
f) where it is necessary for safety procedures during a disaster or a breakdown of public order
In order to carry out (b) and (c), there is also the added requirement that the state function must be authorised by law.
Twin restrictions in Clause 12
The use of the words ‘necessary’ and ‘authorised by law’ is intended to pose checks on the powers of the state. The first restriction seeks to limit actions to only those cases where the processing of personal data would be necessary for the exercise of the state function. This should mean that if the state function can be exercised without non-consensual processing of personal data, then it must be exercised that way. Therefore, while acting under this provision, the state should only process my data if it needs to do so in order to provide me with the service or benefit. The second restriction means that this would apply to only those state functions which are authorised by law, meaning only those functions which are supported by validly enacted legislation.
What we need to keep in mind regarding Clause 12 is that the requirement of ‘authorised by law’ does not mean that legislation must provide for that specific kind of data processing. It simply means that the larger state function must have legal backing. The danger is how these provisions may be used with broad mandates. If the activity in question is non-consensual collection and processing of, say, demographic data of citizens to create state resident hubs which will assist in the provision of services such as healthcare, housing, and other welfare functions, all that may be required is that the welfare functions themselves are authorised by law.
Scope of privacy under Puttaswamy
It would be worthwhile, at this point, to delve into the nature of restrictions that the landmark Puttaswamy judgement discussed that the state can impose on privacy. The judgement clearly identifies the principles of informed consent and purpose limitation as central to informational privacy. As discussed repeatedly during the course of the hearings and in the judgement, privacy, like any other fundamental right, is not absolute. However, restrictions on the right must be reasonable in nature. In the case of Clause 12, the restrictions on privacy in the form of denial of informed consent need to be tested against a constitutional standard. In Puttaswamy, the bench was not required to provide a legal test to determine the extent and scope of the right to privacy, but they do provide sufficient guidance for us to contemplate how the limits and scope of the constitutional right to privacy could be determined in future cases.
The Puttaswamy judgement clearly states that “the right to privacy is protected as an intrinsic part of the right to life and personal liberty under Article 21 and as a part of the freedoms guaranteed by Part III of the Constitution.” By locating the right not just in Article 21 but also in the entirety of Part III, the bench clearly requires that “the drill of various Articles to which the right relates must be scrupulously followed.” This means that where transgressions on privacy relate to different provisions in Part III, the different tests under those provisions will apply along with those in Article 21. For instance, where the restrictions relate to personal freedoms, the tests under both Article 19 (right to freedoms) and Article 21 (right to life and liberty) will apply.
In the case of Clause 12, the three tests laid down by Justice Chandrachud are most operative —
a) the existence of a “law”
b) a “legitimate State interest”
c) the requirement of “proportionality”.
The first test is already reflected in the use of the phrase ‘authorised by law’ in Clause 12. The test under Article 21 would imply that the function of the state should not merely be authorised by law, but that the law, in both its substance and procedure, must be ‘fair, just and reasonable.’ The next test is that of ‘legitimate state interest’. In its report, the Joint Parliamentary Committee places emphasis on Justice Chandrachud’s use of “allocation of resources for human development” in an illustrative list of legitimate state interests. The report claims that the ground, functions of the state, thus satisfies the legitimate state interest. We do not dispute this claim.
Proportionality and Clause 12
It is the final test of ‘proportionality’ articulated by the Puttaswamy judgement, which is most operative in this context. Unlike Clauses 42 and 43, which include the twin tests of necessity and proportionality, the committee has chosen to employ only one ground in Clause 12. Proportionality is a commonly employed ground in European jurisprudence and common law countries such as Canada and South Africa, and it is also an integral part of Indian jurisprudence. As commonly understood, the proportionality test consists of three parts —
a) the limiting measures must be carefully designed, or rationally connected, to the objective
b) they must impair the right as little as possible
c) the effects of the limiting measures must not be so severe on individual or group rights that the legitimate state interest, albeit important, is outweighed by the abridgement of rights.
The first test is similar to the test of proximity under Article 19. The test of ‘necessity’ in Clause 12 must be viewed in this context. It must be remembered that the test of necessity is not limited to only situations where it may not be possible to obtain consent while providing benefits. My reservations with the sufficiency of this standard stem from observations made in the report, as well as the relatively small amount of jurisprudence on this term in Indian law.
The Srikrishna Report interestingly mentions three kinds of scenarios where consent should not be required — where it is not appropriate, necessary, or relevant for processing. The report goes on to give an example of inappropriateness. In cases where data is being gathered to provide welfare services, there is an imbalance in power between the citizen and the state. Having made that observation, the committee inexplicably arrives at a conclusion that the response to this problem is to further erode the power available to citizens by removing the need for consent altogether under Clause 12. There is limited jurisprudence on the standard of ‘necessity’ under Indian law. The Supreme Court has articulated this test as ‘having reasonable relation to the object the legislation has in view.’ If we look elsewhere for guidance on how to read ‘necessity’, the ECHR in Handyside v United Kingdom held it to be neither “synonymous with indispensable” nor does it have the “flexibility of such expressions as admissible, ordinary, useful, reasonable or desirable.” In short, there must be a pressing social need to satisfy this ground.
However, the other two tests of proportionality do not find a mention in Clause 12 at all. There is no requirement of ‘narrow tailoring’, that the scope of non-consensual processing must impair the right as little as possible. It is doubly unfortunate that this test does not find a place, as unlike necessity, ‘narrow tailoring’ is a test well understood in Indian law. This means that while there is a requirement to show that processing personal data was necessary to provide a service or benefit, there is no requirement to process data in a way that there is minimal non-consensual processing. The fear is that as long as there is a reasonable relation between processing data and the object of the function of state, state authorities and other bodies authorised by it, do not need to bother with obtaining consent.
Similarly, the third test of proportionality is also not represented in this provision. That test requires a balancing between the abridgement of individual rights and the legitimate state interest in question, such that the first does not outweigh the second. The absence of this test leaves Clause 12 devoid of any such consideration. Therefore, as long as the test of necessity is met under this law, the state need not evaluate the denial of consent against the service or benefit that is being provided.
The collective implication of leaving out ‘proportionality’ from Clause 12 is to provide very wide discretionary powers to the state, by setting the threshold to circumvent informed consent extremely low. In the next post, I will demonstrate the ease with which Clause 12 can allow indiscriminate data sharing by focusing on the Indian government’s digital healthcare schemes.
Clause 12 Of The Data Protection Bill And Digital Healthcare: A Case Study
The blog post was published in Medianama on February 21, 2022. This is the second in a two-part series by Amber Sinha.
In the previous post, I looked at provisions on non-consensual data processing for state functions under the most recent version of recommendations by the Joint Parliamentary Committee on India’s Data Protection Bill (DPB). The true impact of these provisions can only be appreciated in light of ongoing policy developments and real-life implications.
To appreciate the significance of the dilutions in Clause 12, let us consider the Indian state’s range of schemes promoting digital healthcare. In July 2018, NITI Aayog, a central government policy think tank in India released a strategy and approach paper (Strategy Paper) on the formulation of the National Health Stack which envisions the creation of a federated application programming interface (API)-enabled health information ecosystem. While the Ministry of Health and Family Welfare has focused on the creation of Electronic Health Records (EHR) Standards for India during the last few years and also identified a contractor for the creation of a centralised health information platform (IHIP), this Strategy Paper advocates a completely different approach, which is described as a Personal Health Records (PHR) framework. In 2021, the National Digital Health Mission (NDHM) was launched under which a citizen shall have the option to obtain a digital health ID. A digital health ID is a unique ID and will carry all health records of a person.
A Stack Model for Big Data Ecosystem in Healthcare
A stack model, as envisaged in the Strategy Paper, consists of several layers of open APIs connected to each other, often tied together by a unique health identifier. The open nature of the APIs has the advantage of allowing public and private actors to build solutions on top of them, which are interoperable with all parts of the stack. It is, however, worth considering both this ‘openness’ and the role that the state plays in it.
Even though the APIs are themselves open, they are a part of a pre-decided technological paradigm, built by private actors and blessed by the state. Even though innovators can build on it, the options available to them are limited by the information architecture created by the stack model. When such a technological paradigm is created for healthcare reform and health data, the stack model poses additional challenges. By tying the stack model to the unique identity, without appropriate processes in place for access control, siloed information, and encrypted communication, the stack model poses tremendous privacy and security concerns. The broad language under Clause 12 of the DPB needs to be looked at in this context.
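To make this architectural concern concrete, below is a minimal sketch of a stack-style records repository keyed to a unique health identifier, with a consent check placed in front of record access. All names, identifiers, purposes, and records here are hypothetical assumptions made purely for illustration; this is not the actual National Health Stack design, only a way of showing why access control and consent have to be designed into such a stack rather than bolted on.

```python
# Hypothetical sketch of a stack-style health-records repository keyed to a
# unique health ID. All names, records, and rules are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class ConsentRegistry:
    # Maps (health_id, requester, purpose) -> True if the individual consented.
    grants: dict = field(default_factory=dict)

    def allows(self, health_id: str, requester: str, purpose: str) -> bool:
        return self.grants.get((health_id, requester, purpose), False)


@dataclass
class HealthRecordRepository:
    # One "layer" of the stack: a repository exposing records by health ID.
    records: dict = field(default_factory=dict)

    def fetch(self, health_id: str, requester: str, purpose: str,
              consent: ConsentRegistry) -> list:
        # Without this check, any registered actor holding the identifier
        # could pull records across every layer of the stack.
        if not consent.allows(health_id, requester, purpose):
            raise PermissionError("no recorded consent for this requester/purpose")
        return self.records.get(health_id, [])


if __name__ == "__main__":
    consent = ConsentRegistry({("HID-001", "insurer-x", "claims"): True})
    repo = HealthRecordRepository({"HID-001": ["2021-03-01: discharge summary"]})

    print(repo.fetch("HID-001", "insurer-x", "claims", consent))  # allowed
    try:
        repo.fetch("HID-001", "analytics-co", "profiling", consent)
    except PermissionError as err:
        print("blocked:", err)
```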
Clause 12 allows non-consensual processing of personal data where it is necessary “for the performance of any function of the state authorised by law” in order to provide a service or benefit from the State. In the previous post, I had highlighted the import of the use of only ‘necessity’ to the exclusion of ‘proportionality’. Now, we need to consider its significance in light of the emerging digital healthcare apparatus being created by the state.
The National Health Stack and National Digital Health Mission together envision an intricate system of data collection and exchange which, in a regulatory vacuum, would ensure unfettered access to sensitive healthcare data for both the state and private actors registered with the platforms. The Stack framework relies on repositories where data may be accessed from multiple nodes within the system. Importantly, the Strategy Paper also envisions health data fiduciaries to facilitate consent-driven interaction between entities that generate the health data and entities that want to consume the health records for delivering services to the individual. The cast of characters involves the National Health Authority; healthcare providers and insurers who access the National Health Electronic Registries; unified data from different programmes such as the National Health Resource Repository (NHRR), the NIN database, NIC, and the Registry of Hospitals in Network of Insurance (ROHINI); and private actors such as Swasth and iSpirt, who assist the Mission as volunteers. The currency that government and private actors are interested in is data.
The promised benefits of healthcare data in an anonymised and aggregate form range from Disease Surveillance and Pharmacovigilance to Health Schemes Management Systems and Nutrition Management, benefits which have only been more acutely emphasised during the pandemic. However, the pandemic has also normalised the sharing of sensitive healthcare data with a variety of actors, without much thought given to much-needed data minimisation practices.
The potential misuses of healthcare data include greater state surveillance and control, as well as predatory and discriminatory practices by private actors, all of which can rely on Clause 12 to do away with even the pretense of informed consent so long as the processing of data is deemed necessary by the state and its private sector partners to provide any service or benefit.
Subclause (e) in Clause 12, which was added in the last version of the Bill drafted by MeitY and has been retained by the JPC, allows processing wherever it is necessary for ‘any measures’ to provide medical treatment or health services during an epidemic, outbreak or threat to public health. Yet again, the overly-broad language used here is designed to ensure that any annoyances of informed consent can be easily brushed aside wherever the state intends to take any measures under any scheme related to public health.
Effectively, how does the framework under Clause 12 alter the consent and purpose limitation model? Data protection laws introduce an element of control by tying purpose limitation to consent. Individuals provide consent to specified purposes, and data processors are required to respect that choice. Where there is no consent, the purposes of data processing are sought to be limited by the necessity principle in Clause 12. The state (or authorised parties) must be able to demonstrate necessity to the exercise of state function, and data must only be processed for those purposes which flow out of this necessity. However, unlike the consent model, this provides an opportunity to keep reinventing purposes for different state functions.
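The difference between the two gates can be made concrete with a small, purely illustrative sketch (all names, principals, and purposes are hypothetical assumptions, not a statement of the law): under the consent model, a processing purpose has to match something the individual actually agreed to, whereas under a necessity-style ground the only check is whether the purpose can be tied to some legally backed state function and asserted to be necessary for it, so the list of permissible purposes can keep growing.

```python
# Illustrative contrast between consent-based purpose limitation and a
# Clause 12-style necessity ground. Names and purposes are hypothetical.

CONSENTED_PURPOSES = {
    ("alice", "ration-subsidy"): True,  # Alice consented to this purpose only
}


def consent_model_allows(principal: str, purpose: str) -> bool:
    # Purpose limitation tied to consent: only purposes the individual
    # actually agreed to are permitted.
    return CONSENTED_PURPOSES.get((principal, purpose), False)


def necessity_model_allows(state_function_authorised_by_law: bool,
                           claimed_necessary: bool) -> bool:
    # Under the necessity model, processing is permitted whenever the state
    # function has legal backing and the processing is asserted to be
    # necessary for it; new purposes can be invented for each new function.
    return state_function_authorised_by_law and claimed_necessary


if __name__ == "__main__":
    print(consent_model_allows("alice", "ration-subsidy"))      # True
    print(consent_model_allows("alice", "resident-data-hub"))   # False
    print(necessity_model_allows(True, True))                   # True
```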
In the absence of a data protection law, data collected by one agency is shared indiscriminately with other agencies and used for multiple purposes beyond the purpose for which it was collected. The consent and purpose limitation model would have addressed this issue. But, by having a low threshold for non-consensual processing under Clause 12, this form of data processing is effectively being legitimised.
Nothing to Kid About – Children's Data Under the New Data Protection Bill
The article was originally published in the Indian Journal of Law and Technology
For children, the internet has shifted from being a form of entertainment to a medium to connect with friends and seek knowledge and education. However, each time they access the internet, data about them and their choices are inadvertently recorded by companies and unknown third parties. The growth of EdTech apps in India has raised growing concerns regarding children's data privacy, leading to the creation of a self-regulatory body, the Indian EdTech Consortium. More recently, the Advertising Standards Council of India has also started looking at passing a draft regulation to keep a check on EdTech advertisements.
The Joint Parliamentary Committee (JPC), tasked with drafting and revising the Data Protection Bill, had to consider the number of changes that had happened after the release of the 2019 version of the Bill. The most significant change was the removal of the word “personal” from the title of the Bill, in a move to create a comprehensive Data Protection Bill that covers both personal and non-personal data; certain other provisions of the Bill also featured additions and removals. The JPC, in its revised version of the Bill, has removed an entire class of data fiduciaries – the guardian data fiduciary – which was tasked with greater responsibility for managing children's data. The JPC justified the removal of the guardian data fiduciary by stating that consent from the guardian of the child is enough to meet the end for which personal data of children are processed by the data fiduciary. While thought has been given to how consent is given by the guardian on behalf of the child, there was no change in the age of children in the Bill. Keeping the age of consent under the Bill the same as the age of majority to enter into a contract under the Indian Contract Act, 1872 – 18 years – reveals the disconnect the law has with the ground reality of how children interact with the internet.
In the current state of affairs, where Indian children are navigating the digital world on their own, there is a need to look deeply at the processing of children’s data as well as ways to ensure that children have information about consent and informational privacy. By placing the onus of granting consent on parents, the PDP Bill fails to look at how consent works in a privacy policy–based consent model and how this, in turn, harms children in the long run.
1. Age of Consent
By setting the age of consent at 18 years, the Data Protection Bill, 2021 brings all individuals under 18 years of age under one umbrella without making a distinction between the internet usage of a 5-year-old child and a 16-year-old teenager. There is a need to look at the current internet usage habits of children and assess whether requiring parental consent is reasonable or even practical. It is also pertinent to note that the law in the offline world does make distinctions based on age and maturity. For example, it has been highlighted that Section 82 of the Indian Penal Code, read with Section 83, states that any act by a child under the age of 12 years shall not be considered an offence, while the maturity of those aged between 12–18 years will be decided by the court (individuals between the age of 16–18 years can also be tried as adults for heinous crimes). Similarly, child labour laws in the country allow children above the age of 14 years to work in non-hazardous industries, which would qualify them to fall under Section 13 of the Bill, which deals with employee data.
A 2019 report suggests that two-thirds of India’s internet users are in the 12–29 years age group, accounting for about 21.5% of the total internet usage in metro cities. With the emergence of cheaper phones equipped with faster processing and low internet data costs, children are no longer passive consumers of the internet. They have social media accounts and use several applications to interact with others and make purchases. There is a need to examine how children and teenagers interact with the internet as well as the practicality of requiring parental consent for the usage of applications.
Most applications that require age data ask users to type in their date of birth; it is not difficult for a child to input a suitable date that makes it appear that they are over 18. In such cases, they are still children, but the content presented to them would be content meant for adults, including material that might be disturbing or that involves alcohol and gambling. Additionally, in their privacy policies, applications sometimes state that they are not suited for, and are restricted from, users under 18. Here, data fiduciaries avoid liability by placing the onus on the user to declare their age and to properly read and understand the privacy policy.
Reservations about the age of consent under the Bill have also been highlighted by some members of the JPC through their dissenting opinions. MP Ritesh Pandey suggested that the age of consent should be reduced to 14 years keeping the best interest of the children in mind as well as to support children in benefiting from technological advances. Similarly, MP Manish Tiwari in his dissenting opinion suggested regulating data fiduciaries based on the type of content they provide or data they collect.
2. How is the 2021 Bill Different from the 2019 Bill?
The 2019 draft of the Bill consisted of a class of data fiduciaries called guardian data fiduciaries – entities that operate commercial websites or online services directed at children or which process large volumes of children’s personal data. This class of fiduciaries was barred from profiling, tracking, behavioural monitoring of children, and running targeted advertising directed at children, as well as from undertaking any other processing of personal data that can cause significant harm to the child. As per Chapter IV, any violation could attract a penalty of up to INR 15 crore or 4% of the worldwide turnover of the data fiduciary for the preceding financial year, whichever is higher. Beyond these prohibitions, however, this separate class of data fiduciaries did not have any additional responsibilities. It is also unclear as to whether a data fiduciary that does not by definition fall within such a category would be allowed to engage in activities that could cause ‘significant harm’ to children.
The new Bill also does not provide any mechanisms for age verification and only lays down the considerations to be kept in mind when verification processes are undertaken. Furthermore, the JPC has suggested that the consent options available to a child when they attain the age of majority, i.e. 18 years, should be included within the rules framed by the Data Protection Authority instead of being an amendment to the Bill.
3. In the Absence of a Guardian Data Fiduciary
The 2018 and 2019 drafts of the PDP Bill consider a child to be any person below the age of 18 years. For a child to access online services, the data fiduciary must first verify the age of the child and obtain consent from their guardian. The Bill does not provide an explicit process for age verification apart from stating that regulations shall be drafted in this regard. The 2019 Bill states that the Data Protection Authority shall specify codes of practice in this matter. Taking best practices into account, there is a need for ‘user-friendly and privacy-protecting age verification techniques’ to encourage safe navigation across the internet. This will require looking at technological developments and different standards worldwide. There is a need to hold companies accountable for the protection of children’s online privacy and for the harm that their algorithms cause children, and to make sure that such harms do not continue.
The JPC, in the 2021 version of the Bill, removed the provisions about guardian data fiduciaries, stating that there was no advantage in creating a different class of data fiduciary. As per the JPC, even those data fiduciaries that did not fall within the said classification would need to comply with the rules pertaining to the personal data of children, i.e. with Section 16 of the Bill. Section 16 requires the data fiduciary to verify the child’s age and obtain consent from the parent/guardian; the manner of age verification, however, has not been spelt out in the Bill. Furthermore, since ‘significant data fiduciaries’ is an existing class, there is still a need to comply with rules related to data processing. The JPC also removed the phrases “in the best interests of, the child” and “is in the best interests of, the child” under sub-clause 16(1), stating that the entire Bill concerns the rights of the data principal and that the use of such terms dilutes the purpose of the legislation and could give way to manipulation by the data fiduciary.
Conclusion
Over the past two years, there has been a significant increase in applications that are targeted at children. There has been a proliferation of EdTech apps, which ideally should have more responsibility as they are processing children's data. We recommend that, instead of creating a separate category, fiduciaries collecting children's data or providing services to children be seen as ‘significant data fiduciaries’ that need to take up additional compliance measures.
Furthermore, any blanket prohibition on tracking children may obstruct safety measures that could be implemented by data fiduciaries. These fears are also growing in other jurisdictions, as such prohibitions are likely to restrict data fiduciaries from using software that looks out for content such as Child Sexual Abuse Material as well as online predatory behaviour. Additionally, concerning the age of consent under the Bill, the JPC could look at international best practices and come up with ways to make sure that children can use the internet and have rights over their data, which would enable them to grow up with more awareness about data protection and privacy. One such example could be the Children's Online Privacy Protection Rule (COPPA) in the US, whose rules apply to operators of websites and online services directed at children under 13 that collect personal information from them, as well as to operators of general-audience services that have actual knowledge that they collect personal information from such children. A combination of this system and the significant data fiduciary classification could be one possible way to ensure that children’s data and privacy are preserved online.
The authors are researchers at the Centre for Internet and Society and thank their colleague Arindrajit Basu for his inputs.
Response to MeitY's India Digital Ecosystem Architecture 2.0 Comment Period
This submission presents a response by the Centre for Internet & Society (CIS) to MeitY's India Digital Ecosystem Architecture 2.0 Comment Period (hereinafter, the “Consultation”) released in February 2022. CIS appreciates MeitY's consultations, and is grateful for the opportunity to put forth its views and comments.
Read the response here
Cybernorms: Do they matter IRL (In Real Life): Event Report
During the first half of the year, multilateral forums including the United Nations made some progress in identifying norms, rules, and principles to guide responsible state behaviour in cyberspace, even though the need for political compromise between opposing geopolitical blocs stymied progress to a certain extent.
There is certainly a need to formulate more concrete rules and norms. However, at the same time, the international community must assess the extent to which existing norms are being implemented by states and non-state actors alike. Applying agreed norms to "real life" throws up challenges of interpretation and enforcement, to which the only long-term solution remains regular dialogue and exchange both between states and other stakeholders.
This was the thinking behind the session titled "Cybernorms: Do They Hold Up IRL (in Real Life)?", organised at RightsCon 2021 by four non-governmental organisations: the Association for Progressive Communications (APC), the Centre for Internet & Society (CIS), Global Partners Digital (GPD), and Research ICT Africa (RIA). Cyber norms do not work unless states and other actors call out violations of norms, actively observe and implement them, and hold each other accountable. As the organisers of the event, we devised hypothetical scenarios based on three real-life examples of large-scale incidents and engaged with discussants who sought to apply agreed cyber norms to them. We chose to create scenarios without referring to real states as we wanted the discussion to focus on the implementation and interpretation of norms rather than the specific political situation of each actor.
Through this interactive exercise involving an array of expert stakeholders (including academics, civil society, the technical community, and governments) and communities from different regions, we sought to answer whether and how the application of cyber norms can mitigate harms, especially to vulnerable communities, and identify possible gaps in current normative frameworks. For each scenario, we aimed to diagnose whether cyber norms have been violated, and if so, what could and should be done, by identifying the next steps that can be taken by all the stakeholders present. For each scenario, we highlight why we chose it, outline the main points of discussion, and articulate key takeaways for norm implementation and interpretation. We hope this exercise will feed into future conversations around both norm creation and enforcement by serving as a framework for guiding optimal norm enforcement.
Read the full report here
CIS Seminar Series
The first seminar in the series was held on 7th and 8th October on the theme of ‘Information Disorder: Mis-, Dis- and Malinformation’.
Theme for the Second Seminar (to be held online)
Moderating Data, Moderating Lives: Debating visions of (automated) content moderation in the contemporary
Artificial Intelligence (AI) and Machine Learning (ML) based approaches have become increasingly popular as “solutions” to curb the extent of mis-, dis-, and mal-information, hate speech, online violence and harassment on social media. The pandemic and the ensuing work-from-home policies forced many platforms to shift to automated moderation, which further highlighted the inefficacy of existing models (Gillespie, 2020) to deal with the surge in misinformation and harassment. These efforts, however, raise a range of interrelated concerns such as freedom and regulation of speech on the privately public sphere of social media platforms; algorithmic governance, censorship and surveillance; the relation between virality, hate, algorithmic design and profits; and social, political and cultural implications of ordering social relations through computational logics of AI/ML.
On the one hand, large-scale content moderation approaches (including automated AI/ML-based approaches) have been deemed “necessary” given the enormity of data generated (Gillespie, 2020); on the other hand, they have been regarded as “technological fixes” offered by Silicon Valley (Morozov, 2013), or as “tyrannical” because they erode existing democratic measures (Harari, 2018). Alternatively, decolonial, feminist and postcolonial approaches insist on designing AI/ML models that centre the voices of those excluded, in order to sustain and further civic spaces on social media (Siapera, 2022).
From the global south perspective, issues around content moderation foreground the hierarchies inbuilt in the existing knowledge infrastructures. First, platforms remain unwilling to moderate content in under-resourced languages of the global south citing technological difficulties. Second, given the scale and reach of social media platforms and inefficient moderation models, the work is outsourced to workers in the global south who are meant to do the dirty work of scavenging content off these platforms for the global north. Such concerns allow us to interrogate the techno-solutionist approaches as well as their critiques situated in the global north. These realities demand that we articulate a different relationship with AI/ML while also being critical of AI/ML as an instrument of social empowerment for those at the “bottom of the pyramid” (Arora, 2016).
The seminar invites scholars interested in articulating nuanced responses to content moderation that take into account the harms perpetrated by algorithmic governance of social relations and irresponsible intermediaries while being cognizant of the harmful effects of mis-, dis-, and mal-information, hate speech, online violence and harassment on social media.
We invite abstract submissions that respond to these complexities vis-a-vis content moderation models or propose provocations regarding automated moderation models and their in/efficacy in furthering egalitarian relationships on social media, especially in the global south.
Submissions can reflect on the following themes using legal, policy, social, cultural and political approaches. Also, the list is not exhaustive and abstracts addressing other ancillary concerns are most welcome:
- Metaphors of (content) moderation: mediating utopia, dystopia, scepticism surrounding AI/ML approaches to moderation.
- From toxic to healthy, from purity to impurity: Interrogating gendered, racist, colonial tropes used to legitimize content moderation
- Negotiating the link between content moderation, censorship and surveillance in the global south
- Whose values decide what is and is not harmful?
- Challenges of building moderation models for under-resourced languages.
- Content moderation, algorithmic governance and social relations.
- Communicating algorithmic governance on social media to the not so “tech-savvy” among us.
- Speculative horizons of content moderation and the future of social relations on the internet.
- Scavenging abuse on social media: Immaterial/invisible labour for making for-profit platforms safer to use.
- Do different platforms moderate differently? Interrogating content moderation on diverse social media platforms, and multimedia content.
- What should and should not be automated? Understanding prevalence of irony, sarcasm, humour, explicit language as counterspeech.
- Maybe we should not automate: Alternative, bottom-up approaches to content moderation
Seminar Format
We are happy to welcome abstracts for one of two tracks:
Working paper presentation
A working paper presentation would ideally involve a working draft that is presented for about 15 minutes followed by feedback from workshop participants. Abstracts for this track should be 600-800 words in length with clear research questions, methodology, and questions for discussion at the seminar. Ideally, for this track, authors should be able to submit a draft paper two weeks before the conference for circulation to participants.
Coffee-shop conversations
In contrast to the formal paper presentation format, the point of the coffee-shop conversations is to enable an informal space for presentation and discussion of ideas. Simply put, it is an opportunity for researchers to “think out loud” and get feedback on future research agendas. Provocations for this should be 100-150 words containing a short description of the idea you want to discuss.
We will try to accommodate as many abstracts as possible given time constraints. We welcome submissions from students and early career researchers, especially those from under-represented communities.
All discussions will be private and conducted under the Chatham House Rule. Drafts will only be circulated among registered participants.
Please send your abstracts to [email protected].
Timeline
- Abstract Submission Deadline: 18th April
- Results of Abstract review: 25th April
- Full submissions (of draft papers): 25th May
- Seminar date: Tentative 31st May
References
Arora, P. (2016). Bottom of the Data Pyramid: Big Data and the Global South. International Journal of Communication, 10(0), 19.
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 2053951720943234. https://doi.org/10.1177/2053951720943234
Harari, Y. N. (2018, August 30). Why Technology Favors Tyranny. The Atlantic. https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
Morozov, E. (2013). To save everything, click here: The folly of technological solutionism (First edition). PublicAffairs.
Siapera, E. (2022). AI Content Moderation, Racism and (de)Coloniality. International Journal of Bullying Prevention, 4(1), 55–65. https://doi.org/10.1007/s42380-021-00105-7
Personal Data Protection Bill must examine data collection practices that emerged during pandemic
The article by Shweta Mohandas and Anamika Kundu was originally published by news nine on November 29, 2021.
The Personal Data Protection (PDP) Bill is expected to be introduced during the winter session of Parliament, and the report of the Joint Parliamentary Committee (JPC) was adopted by the committee on Monday. The report of the JPC comes after almost two years of deliberation and secrecy over what the final version of the Personal Data Protection Bill would look like. Since the publication of the 2019 version of the PDP Bill, the Covid-19 pandemic and the accompanying public safety measures have opened the way for a number of new organisations and reasons to collect personal data that were non-existent in 2019. Hence, along with the changes suggested by multiple civil society organisations and the dissent notes submitted by members of the JPC, the new version of the PDP Bill must also look at how data processing has changed over the span of two years.
Concerns with the bill
At the outset, there are certain parts of the PDP Bill which need to be revised in order to uphold the spirit of privacy and individual autonomy laid out in the Puttaswamy judgement. The two sections that need to be brought in line with the privacy judgement are the ones that allow for non-consensual processing of data by the government and by employers. The PDP Bill in its current form provides wide-ranging exemptions which allow government agencies to process citizens' data in order to fulfil their responsibilities.
In the 2018 version of the bill, drafted by the Justice Srikrishna Committee, the exemptions granted to the State with regard to the processing of data were subject to a four-pronged test which required the processing to be (i) authorised by law; (ii) in accordance with the procedure laid down by the law; (iii) necessary; and (iv) proportionate to the interests being achieved. This four-pronged test was in line with the principles laid down by the Supreme Court in the Puttaswamy judgement. The 2019 version of the PDP Bill diluted this principle by merely retaining the 'necessity' requirement and removing the others, which is not in consonance with the test laid down by the Supreme Court in Puttaswamy.
Section 35 was also widely discussed in the panel meetings, where members argued for the removal of 'public order' as a ground for exemption. The panel also insisted on 'judicial or parliamentary oversight' for granting such exemptions. The final report did not accept these suggestions, citing a need to balance national security with the liberty and privacy of the individual. There ought to be prior judicial review of the written order exempting a governmental agency from any provisions of the bill; allowing the government to claim an exemption whenever it is satisfied that it is "necessary or expedient" to do so can be misused.
Another clause that sidelines the data principal relates to employee data. Section 13 of the current version of the bill provides the employer with leeway to process employee data (other than sensitive personal data) without consent on two grounds: when consent is not appropriate, or when obtaining consent would involve disproportionate effort on the part of the employer.
Such personal data can only be collected for recruitment, termination, attendance, the provision of any service or benefit, and assessing performance. This covers almost all of the activities that require employee data. Although the 2019 version of the bill excludes non-consensual collection of sensitive personal data (a safeguard that was missing in the 2018 version of the bill), there is still a lot of scope to improve this provision and give employees further rights over their data. At the outset, the bill does not define 'employee' and 'employer', which could result in confusion as there is no single definition of these terms across Indian labour laws.
Additionally, the bill distinguishes between the employee and the consumer, where a consumer of the same company or service has greater rights over their data than an employee. While the consumer as a data principal has the option to use any other product or service and also has the right to withdraw consent at any time, in the case of an employee the consequence of refusing or withdrawing consent could be termination of employment. It is understood that there is a requirement for employee data to be collected, and that consent does not work the same way as it does in the case of a consumer.
The bill could ensure that employers have some responsibility towards the data they collect from employees, such as ensuring that the data are used only for the purposes for which they were collected, that employees know how long their data will be retained, and that employees know whether their data are being processed by third parties. It is also worth mentioning that the Indian government is India's largest employer, spanning a variety of agencies and public enterprises.
Concerns highlighted by JPC Members
Coming back to the few members of the JPC who moved dissent notes, specifically with regard to governmental exemptions: Jairam Ramesh filed a dissent note, and many other opposition members followed suit. While Jairam Ramesh praised the JPC's functioning, he disagreed with certain aspects of the Report. According to him, the 2019 bill is designed in a manner where the right to privacy is given importance only in the case of private activities. He raised concerns regarding the unbridled powers given to the government to exempt itself from any of the provisions.
The amendment suggested by him would require parliamentary approval before an exemption could take place. He also added that Section 12 of the bill, which provides certain scenarios where consent is not needed for the processing of personal data, should have been made 'less sweeping'. Gaurav Gogoi's note stated that the exemptions would create a surveillance state and likewise criticised Sections 12 and 35 of the bill. He also mentioned that there ought to be parliamentary oversight for the exemptions provided in the bill.
On the same issue, Congress leader Manish Tiwari noted that the bill creates 'parallel universes' - one for the private sector, which needs to be compliant, and the other for the State, which can exempt itself. He has opposed the entire bill, stating that there exists an "inherent design flaw". He has raised specific objections to 37 clauses and stated that any blanket exemption to the state goes against the Puttaswamy judgement.
In their joint dissent note, Derek O'Brien and Mahua Moitra have pointed to the lack of adequate safeguards to protect data principals' privacy and the lack of time and opportunity for stakeholder consultations. They have also pointed out that the independence of the DPA will cease to exist under the present provision allowing the government the power to choose its members and chairperson. Amar Patnaik objected to the lack of inclusion of state-level authorities in the bill; without such bodies, he says, there would be a federal override.
Conclusion
While a number of issues have been highlighted by civil society, the members of the JPC, and the media, the new version of the bill also needs to take into account the shifts that have taken place in view of the pandemic. It should take into consideration the new data collection practices that have emerged during the pandemic, be comprehensive, and leave very few provisions to be decided later by the Rules.
Comments to the draft Motor Vehicle Aggregators Scheme, 2021
CIS, established in Bengaluru in 2008 as a non-profit organisation, undertakes interdisciplinary research on internet and digital technologies from public policy and academic perspectives. Through its diverse initiatives, CIS explores, intervenes in, and advances contemporary discourse and regulatory practices around internet, technology, and society in India, and elsewhere.
CIS is grateful for the opportunity to submit its comments to the draft Scheme. Please find below our thematically organised comments.
Click here to read more.
Decoding India’s Central Bank Digital Currency (CBDC)
In her budget speech presented in the Parliament on 1 February 2022, the Finance Minister of India – Nirmala Sitharaman – announced that India will launch its own Central Bank Digital Currency (CBDC) from the financial year 2022–23. The lack of information regarding the Indian CBDC project has resulted in limited discussions in the public sphere. This article is an attempt to briefly discuss the basics of CBDCs such as the definition, necessity, risks, models, and associated technologies so as to shed more light on India’s CBDC project.
1. What is a CBDC?
Before delving into the various aspects of a CBDC, we must first define it. A CBDC in its simplest form has been described by the RBI as “the same as currency issued by a central bank but [which] takes a different form than paper (or polymer). It is sovereign currency in an electronic form and it would appear as liability (currency in circulation) on a central bank’s balance sheet. The underlying technology, form and use of a CBDC can be moulded for specific requirements. CBDCs should be exchangeable at par with cash.”
2. Policy Goals
Launching any CBDC involves the setting up of infrastructure, which comes with notable costs. It is therefore imperative that the CBDC provides significant advantages that can justify the investment it entails. Some of the major arguments in favour of CBDCs and their relevance in the Indian context are as follows.
Financial Inclusion: In countries with underdeveloped banking and payment systems, proponents believe that CBDCs can boost financial inclusion through the provision of basic accounts and an electronic payment system operated by the central bank. However, financial inclusion may not be a powerful motive in India, where at least one member in 99% of rural and urban households has a bank account, according to some surveys. Even the US Federal Reserve recognises that further research is needed to assess the potential of CBDCs to expand financial inclusion, especially among underserved and lower-income households.
Access to Payments: It is claimed that CBDCs provide scope for improving the existing payments landscape by offering fast and efficient payment services to users. Further, supporters claim that a well-designed, robust, open CBDC platform could enable a wide variety of firms to compete to offer payment services. It could also enable them to innovate and generate new capabilities to meet the evolving needs of an increasingly digitalised economy. However, it is not yet clear exactly how CBDCs would achieve this objective and whether there would be any noticeable improvements in the payment systems space in India, which already boasts of a fairly advanced and well-developed payment systems market.
Increased System Resilience: Countries with a highly developed digital payments landscape are aware of their reliance on electronic payment systems. The operational resilience of these systems is of critical importance to the entire payments landscape. The CBDC would not only act as a backup to existing payment systems in case of an emergency but also reduce the credit risk and liquidity risk, i.e., the risk that payment system providers will turn insolvent and run out of liquidity. Such risks can also be mitigated through robust regulatory supervision of the entities in the payment systems space.
Increasing Competition: A CBDC has the potential to increase competition in the country’s payments sector in two main ways: (i) directly, by providing an alternative payment system that competes with existing private players, and (ii) indirectly, by providing an open platform for private players, thereby reducing entry barriers for newer players offering more innovative services at lower costs.
Addressing Illicit Transactions: Cash offers a level of anonymity that is not always available with existing payment systems. If a CBDC offers the same level of anonymity as cash, then it would pose a greater CFT/AML (Combating the Financing of Terrorism/Anti-Money Laundering) risk. However, if appropriate CFT/AML requirements are built into the design of the CBDC, it could address some of the concerns regarding its usage in illegal transactions. Such CFT/AML requirements are already being followed by existing banks and payment systems providers.
Reduced Costs: If a CBDC is adopted to the extent that it begins to act as a substitute for cash, it could allow the central bank to print less currency, thereby saving costs on printing, transporting, storing, and distributing currency. Such a cost reduction is not exclusive to CBDCs but can also be achieved through the widespread adoption of existing payment systems.
Reduction in Private Virtual Currencies (VCs): Central banks are of the view that a widely used CBDC will provide users with an alternative to existing private cryptocurrencies and thereby limit various risks including credit risks, volatility risks, risk of fraud, etc. However, if a CBDC does not offer the same level of anonymity or potential for high return on investment that is available with existing VCs, it may not be considered an attractive alternative.
Serving Future Needs: Several central banks see the potential for “programmable money” that can be used to conduct transactions automatically on the fulfilment of certain conditions, rules, or events. Such a feature may be used for automatic routing of tax payments to authorities at the point of sale, shares programmed to pay dividends directly to shareholders, etc. Specific programmable CBDCs can also be issued for certain types of payments such as toward subway fees, shared bike fees, or bus fares. This characteristic of CBDCs has huge potential in India in terms of delivery of various subsidies.
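To illustrate what "programmable money" could mean in practice, here is a minimal, purely hypothetical sketch of a CBDC transfer that routes a tax component to the authority at the point of sale. The rule, the rate, and the account names are illustrative assumptions, not a description of any planned Indian CBDC design.

```python
# Hypothetical sketch of programmable money: a transfer rule that splits out
# a tax component automatically at the point of sale. All values illustrative.

from decimal import Decimal

balances = {
    "buyer": Decimal("1000.00"),
    "merchant": Decimal("0.00"),
    "tax-authority": Decimal("0.00"),
}

TAX_RATE = Decimal("0.18")  # purely illustrative rate


def pay_with_tax_routing(buyer: str, merchant: str, amount: Decimal) -> None:
    """Transfer `amount`, automatically routing the tax share to the authority."""
    if balances[buyer] < amount:
        raise ValueError("insufficient CBDC balance")
    tax = (amount * TAX_RATE).quantize(Decimal("0.01"))
    balances[buyer] -= amount
    balances[merchant] += amount - tax
    balances["tax-authority"] += tax


if __name__ == "__main__":
    pay_with_tax_routing("buyer", "merchant", Decimal("118.00"))
    print(balances)
```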
3. Potential Risks
As with most things, CBDCs have certain drawbacks and risks that need to be considered and mitigated in the designing phase itself. A successful and widely adopted CBDC could change the structure and functions of various stakeholders and institutions in the economy.
Both private and public sector banks rely on bank deposits to fund their loan activities. Since bank deposits offer a safe and risk-free way to park one’s savings, a large number of people utilise this facility, thereby providing banks with a large pool of funds that is utilised for lending activities. A CBDC could offer the public a safer alternative to bank deposits, since it eliminates even the minute risk of the bank becoming insolvent, making it more secure than regular bank deposits. A widely accepted CBDC could adversely affect bank deposits, thereby reducing the availability of funds for lending by banks and adversely affecting credit facilities in the economy. Further, since a CBDC is a safer form of money, in times of stress, people may opt to convert funds stored in banks into safer CBDCs, which might cause a bank run. However, these issues can be mitigated by making the CBDC deposits non-interest-bearing, thus reducing their attractiveness as an alternative to bank deposits. Further, in times of monetary stress, the central bank could impose restrictions on the amount of bank money that can be converted into the CBDC, just as it has done in the case of cash withdrawals from specific banks when it finds that such banks are undergoing extreme financial stress.
If a significantly large portion of a country's population adopts a private digital currency, it could seriously hamper the ability of the central bank to carry out several crucial functions, such as implementing monetary policy, controlling inflation, etc.
It may be safe to say that the question of how CBDCs may affect the economy in general and more specifically, the central bank’s ability to implement monetary policy, seigniorage, financial stability, etc. requires further research and widespread consultation to mitigate any potential risk factors.
4. The Role of the Central Bank in a CBDC
The next issue that requires attention when dealing with CBDCs is the role and level of involvement of the central bank. This would depend not only on the number of additional functions that the central bank is comfortable adopting but also on the maturity of the fintech ecosystem in the country. Broadly speaking, there are three basic models concerning the role of the central bank in CBDCs:
(i) Unilateral CBDCs: The central bank performs all functions, from issuing the CBDC to carrying out and verifying transactions, and also deals directly with users by maintaining their accounts.
(ii) Hybrid or Intermediate Model: In this model, the CBDCs are issued by the central bank, but private firms carry out some of the other functions such as providing wallets to end users, verifying transactions, updating ledgers, etc. These private entities will be regulated by the central bank to ensure that there is sufficient supervision.
(iii) Synthetic CBDCs: In this model, the CBDC itself is not issued by the central bank but by private players. However, these CBDCs are backed by central bank liabilities, thus providing the sovereign stability that is the hallmark of a CBDC.
These models could also be modified to suit the needs of the economy; e.g., the second model could be modified so that the user-facing functions are offered not only by private players but also by the central bank or another public sector enterprise. Such a scenario has the potential to offer services at a reduced price (perhaps with reduced functionalities), thereby furthering the financial inclusion and cost reduction policy goals mentioned above.
5. Role of Blockchain Technology
While it is true that the entire concept of a CBDC evolved from cryptocurrencies and that popular cryptocurrencies like Bitcoin and Ether are based on blockchain technology, recent research seems to suggest that blockchain may not necessarily be the default technology for a CBDC. Additionally, different jurisdictions have their own views on the merits and demerits of this technology, for example, the Bahamas and the Eastern Caribbean Central Bank have DLT-based systems; however, China has decided that DLT-based systems do not have adequate capacity to process transactions and store data to meet its system requirements.
Similarly, a project by the Massachusetts Institute of Technology (MIT) Currency Initiative and the Federal Reserve Bank of Boston titled “Project Hamilton” to explore the CBDC design space and its technical challenges and opportunities has surmised that a distributed ledger operating under the jurisdiction of different actors is not necessarily crucial. It was found that even if controlled by a single actor, the DLT architecture has downsides such as performance bottlenecks and significantly reduced transaction throughput scalability compared to other options.
6. Conclusion
Although a CBDC potentially offers some advantages, launching one is an expensive and complicated proposition, requiring in-depth research and detailed analyses of a large number of issues, only some of which have been highlighted here. Therefore, before launching a CBDC, central banks issue white papers and consult with the public in addition to major stakeholders, conduct pilot projects, etc. to ensure that the issue is analysed from all possible angles. Although the Reserve Bank of India is examining various issues such as whether the CBDC would be retail or wholesale, the validation mechanism, the underlying technology to be used, distribution architecture, degree of anonymity, etc., it has not yet released any consultation papers or confirmed the completion of any pilot programmes for the CBDC project.
It is, therefore, unclear whether there has been any detailed cost–benefit analysis by the government or the RBI regarding its feasibility and benefits over existing payment systems and whether such benefits justify the costs of investing in a CBDC. For example, several of the potential advantages discussed here, such as financial inclusion and improved payment systems may not be relevant in the Indian context, while others such as reduced costs and a reduction in illegal transactions may be achieved by improving the existing systems. It must be noted that the current system of distribution of central bank money has worked well over the years, and any systemic changes should be made only if the potential upside justifies such fundamental changes.
The Government of India has already announced the launch of the Indian CBDC in early 2023, but the lack of public consultation on such an important project is a matter of concern. The last time the RBI took a major decision in the crypto space without consulting stakeholders was when it banned financial institutions from having any dealings with crypto entities. On that occasion, the circular imposing the ban was struck down by the Supreme Court as violating the fundamental right to trade and profession. It is, therefore, imperative that the government and the Reserve Bank conduct wide-ranging consultations with experts and the public to conduct a detailed and thorough cost–benefit analysis to determine the feasibility of such a project before deciding on the launch of an Indian CBDC.
Response to the Pegasus Questionnaire issued by the SC Technical Committee
The questionnaire had 11 questions, and responses had to be submitted through an online form, which was available here. The last date for submitting responses was March 31, 2022. CIS submitted the following responses to the questions in the questionnaire. Access the Response to the Questionnaire.
Rethinking Acquisition of Digital Devices by Law Enforcement Agencies
Read the article originally published in RGNUL Student Research Review (RSRR) Journal
Abstract
The Criminal Procedure Code was created in the 1970s, when the right to privacy was largely unrecognised. Following the Puttaswamy I (2017) judgement of the Supreme Court affirming the right to privacy, these antiquated codes must be re-evaluated. Today, the police can acquire digital devices through summons and gain direct access to a person's life, despite the summons mechanism having been intended for targeted, narrow enquiries. Once in possession of a device, the police attempt to circumvent the right against self-incrimination by demanding biometric passwords, arguing that the right does not cover biometric information. However, due to the extent of information available on digital devices, courts ought to be cautious and strive to limit the power of the police to compel such disclosures, taking into consideration the right to privacy judgement.
Keywords: Privacy, Criminal Procedural Law, CrPC, Constitutional Law
Introduction
New challenges confront the Indian criminal investigation framework, particularly in the context of law enforcement agencies (LEAs) acquiring digital devices and their passwords. The criminal procedure codes delimiting police authority and procedures were created before the widespread use of digital devices and are ill-suited to the modern age, given the magnitude of information available on a single device. A single device could provide more information to LEAs than a complete search of a person's home; yet, the acquisition of a digital device is not treated with the seriousness and caution it deserves. Following the affirmation of the right to privacy in Puttaswamy I (2017), criminal procedure codes must be revamped, taking into consideration that the acquisition of a person's digital device constitutes a major infringement of their right to privacy.
Acquisition of digital devices by LEAs through summons
Section 91 of the Criminal Procedure Code (CrPC) grants powers to a court or an officer in charge of a police station to compel a person to produce any document or 'thing' necessary and desirable for a criminal investigation. In Rama Krishna v State, 'necessary' and 'desirable' have been interpreted as covering any piece of evidence relevant to the investigation or a link in the chain of evidence. Abhinav Sekhri, a criminal law litigator and writer, has argued that the wide wording of this section allows summons to be directed towards the retrieval of specific digital devices.
As summons are target-specific, the section has minimal safeguards. However, several issues arise in the context of summons regarding digital devices. In the current day, access to a user’s personal device can provide comprehensive insight into their life and personality due to the vast amounts of private and personal information stored on it. In Riley v California, the Supreme Court of the United States (SCOTUS) observed that due to the nature of the content present on digital devices, summons for them are equivalent to a roving search, i.e., demanding the simultaneous production of all contents of the home, bank records, call records, and lockers. The Riley decision correctly highlights the need for courts to recognise that digital devices ought to be treated distinctly compared to other forms of physical evidence due to the repository of information stored on digital devices.
The burden the state must meet in order to issue summons is low, as the relevancy requirement is easily satisfied. As noted in Riley, the police must identify which evidence on a device is relevant; yet, due to the sheer amount of data on phones, it is very easy for police to claim that there will surely be some connection between the content on the device and the case. Given the wide range of offences available for Indian LEAs to cite, it is easy for them to argue that the content on the device is relevant to any number of possible offences. LEAs rarely face consequences for slamming the accused with a huge roster of charges – even if many of them are baseless – leaving the system prone to abuse. The Indian Supreme Court, in its judgement in Canara Bank, noted that the burden of proof must be higher for LEAs when investigations violate the right to privacy. Tarun Krishnakumar notes that the trickle-down effect of Puttaswamy I will lead to new privacy challenges with regard to summons to appear in court: Puttaswamy I will provide the bedrock and constitutional framework within which future challenges to the criminal process will be undertaken. It is important for courts to recognise the transformative potential of the Puttaswamy judgement to help ensure that the right to privacy of citizens is safeguarded. The colonial logic of policing – wherein criminal procedure law was merely a tool to maximise the interest of the state at the cost of the people – must be abandoned. Courts ought to devise a framework under Section 91 to ensure that summons are narrowly framed to target specific information or content within digital devices. Additionally, a digital device should be collected only after a judicial authority, and not a police authority, has issued the summons. Prior judicial warrants would require LEAs to demonstrate their need for the digital device; after assessing the impact on privacy, the authority can issue a suitably framed summons. Currently, the only consideration is whether the item will furnish evidence relevant to the investigation; instead, judges ought to balance the need for the digital device in the LEA's investigation against the user's right to privacy, dignity, and autonomy.
Puttaswamy I provides a triple test encompassing legality, necessity, and proportionality to evaluate privacy claims. Legality requires that the measure be prescribed by law, necessity asks whether it is the least restrictive means available to the state, and proportionality checks whether the objective pursued by the measure is proportionate to the degree of infringement of the right. The relevance standard, as mentioned before, is inadequate as it does not provide enough safeguards against abuse. The police can issue summons based on the slightest of suspicions and thus gain access to a digital device, following which they can conduct a roving enquiry of the device to find evidence of any other offence, unrelated to the original cause of suspicion.
Unilateral police summons of digital devices cannot pass the triple test, as they are grossly disproportionate and lack any form of safeguard against police overreach. The current system has no mechanism for overseeing LEAs; as long as LEAs themselves are of the view that they require the device, they can acquire it. In Riley, SCOTUS has already held that the warrantless search of digital devices constitutes a violation of the right to privacy. India ought to also adopt a requirement of a prior judicial warrant for the procurement of devices by LEAs. A re-imagined criminal process would have to abide by the triple test, in particular proportionality, wherein the benefit claimed by the state ought not to be disproportionate to the impact on the fundamental right to privacy; further, a framework must be devised to provide safeguards against abuse.
Compelling the production of passwords of devices
In police investigations, gaining possession of a physical device is merely the first step in acquiring the data on the device, as the LEAs still require the passcodes needed to unlock the device. LEAs compelling the production of passcodes to gain access to potentially incriminating data raises obvious questions regarding the right against self-incrimination; however, in the context of digital devices, several privacy issues may crop up as well.
In Kathi Kalu Oghad, the SC held that compelling the production of fingerprints of an accused person to compare them with fingerprints discovered by the LEA in the course of their investigation does not violate the right to protection against self-incrimination of the accused. It has been argued that the ratio in the judgement prohibits the compelling of disclosure of passwords and biometrics for unlocking devices because Kathi Kalu Oghad only dealt with the production of fingerprints in order to compare the fingerprints with pre-existing evidence, as opposed to unlocking new evidence by utilising the fingerprint. However, the judgement deals with self-incrimination and does not address any privacy issues.
The right against self-incrimination approach alone may not be enough to resolve all concerns. Firstly, different forms of password protection on digital devices may receive varying levels of protection; text- and pattern-based passcodes are inarguably protected under Art. 20(3) of the Constitution, but the protection of biometrics-based passcodes relies upon the correct interpretation of the Kathi Kalu Oghad precedent. Secondly, Art. 20(3) only protects the accused in investigations, and does not apply when LEAs acquire the digital devices of persons who are not accused and demand their passcodes.
Therefore, considering the aforementioned points, it is pertinent to remember that the right against self-incrimination does not exist in a vacuum separate from privacy. It originates from the concept of decisional autonomy – the right of individuals to make decisions about matters intimate to their life without interference from the state and society. Puttaswamy I observed that decisional autonomy is the bedrock of the right to privacy, as privacy allows an individual to make these intimate decisions away from the glare of society and/or the state. This takes on heightened importance in this context, as interference with such autonomy could lead to the person in question facing criminal prosecution. The SC in Selvi v Karnataka and Puttaswamy I has repeatedly affirmed that the right against self-incrimination and the right to privacy are linked concepts, with the court observing that the right to remain silent is an integral aspect of decisional autonomy.
In Virendra Khanna, the Karnataka High Court (HC) dealt with the privacy and self-incrimination concerns arising from LEAs compelling the disclosure of passwords. The HC brushed aside concerns related to privacy by noting that the right to privacy is not absolute, that state interest and the protection of law and order constitute exceptions to the right (para 5.11), and that unlawful disclosure of material to third parties could be an actionable wrong (para 15). The court's interpretation of privacy effectively provides a free pass for the police to interfere with the right to privacy under the pretext of a criminal investigation. This conception of privacy is inadequate as it sidesteps the issue of proportionality, and the court makes no attempt to ensure that the interference is proportionate to the objective sought to be achieved.
US courts also see the compelling of production of passcodes as an issue of self-incrimination as well as privacy. In its judgement in Application for a Search Warrant, a US court observed that compelling the disclosure of passcodes existed at an intersection of the right to privacy and self-incrimination; the right against self-incrimination serves to protect the privacy interests of suspects.
Disclosure of passwords to digital devices amounts to an intrusion into the privacy of the suspect, as the collective contents of the digital device effectively provide LEAs with a window into a person's mind and identity. Police investigative techniques cannot override fundamental rights and must respect the personal autonomy of suspects – particularly, the choice between silence and speech. Through the production of passwords, LEAs can effectively obtain a snapshot of a suspect's mind. This is analogous to the polygraph and narco-analysis tests struck down as unconstitutional by the SC in Selvi, as they violate decisional autonomy.
As Sekhri has noted, a criminal process that reflects the aspirations of the Puttaswamy judgement would require LEAs to first explain in reasonable detail the material they expect to find in the digital devices. Secondly, they must provide a timeline for the investigation to ensure that individuals are not subjected to inexhaustible investigations with police roving through their devices indefinitely. Thirdly, such a criminal process must demand a higher burden to be discharged by the state where the privacy of the individual is infringed upon. These aspirations should form the bedrock of a system of judicial warrants with which LEAs ought to be required to comply if they wish to compel the disclosure of passwords from individuals. The framework proposed above is similar to the Virendra Khanna guidelines in that it provides a system of checks and balances ensuring that any intrusion on privacy is carried out proportionately; additionally, it would require LEAs to show a real need to demand access to the device. The independent eyes of a judicial magistrate provide a mechanism of oversight and a check against abuse of power by LEAs.
Conclusion
The criminal law apparatus is the most coercive power available to the state, and, therefore, privacy rights will become meaningless unless they can withstand it. Several criminal procedures in the country are rooted in colonial statutes, under which the rights of the populace being policed were never a consideration; hence, a radical shift is required. Post-1947, and especially post-Puttaswamy, the refusal to respect the rights of the population can no longer be justified, and significant reformulation is necessary to guarantee meaningful protections to device owners. There is a need to ensure that the rights of individuals are protected, especially when the motivation for their infringement is the supposedly noble intentions of the criminal justice system. Failing to defend the right to privacy in these moments would invite the power of the state to expand unchecked and inevitably become absolute.
CCTVs in Public Spaces and the Data Protection Bill, 2021
The article by Anamika Kundu and Digvijay S. Chaudhary was originally published by RGNUL Student Research Review on April 20, 2022
Introduction
In recent times, Indian cities have seen an expansion of state-deployed CCTV cameras. According to a recent report, Delhi was considered the most surveilled city in the world in terms of CCTVs deployed, surpassing even the most surveilled cities in China. Delhi was not the only Indian city on that list; Chennai and Mumbai also featured on it. In Hyderabad as well, the development of a Command and Control Centre aims to link the city's surveillance infrastructure in real time. Even though studies have shown that there is little correlation between CCTVs and crime control, the deployment of CCTV cameras has been justified on the basis of national security and crime deterrence. Such an activity involves the collection and retention of audio-visual/visual information of all individuals frequenting spaces where CCTV cameras are deployed. This information could be used to identify them (directly or indirectly) based on their looks or other attributes. Potential risks associated with the misuse and processing of such personal data also arise. These risks include large-scale profiling, criminal abuse (law enforcement misusing CCTV information for personal gain), and discriminatory targeting (law enforcement disproportionately focusing on a particular group of people). As these devices capture the personal data of individuals, this article examines the data protection safeguards available to data principals against CCTV surveillance employed by the State in public spaces under the proposed Data Protection Bill, 2021 (the "DPB").
Safeguards Available Under the Data Protection Bill, 2021
To use CCTV surveillance, the measures and compliance listed under the DPB have to be followed. Obligations of data fiduciaries available under Chapter II, such as consent (clause 11), notice requirement (clause 7), and fair and reasonable processing (clause 5) are common to all data processing entities for a variety of activities. Similarly, as the DPB follows the principles of data minimisation (clause 6), storage limitation (clause 9), purpose limitation (clause 5), lawful and fair processing (clause 4), transparency (clause 23), and privacy by design (clause 22), these safeguards too are common to all data processing entities/activities. If a data fiduciary processes personal data of children, it has to comply with the standards stated under clause 16.
Under the DPB, compliance differs on the basis of the grounds and purpose of data processing. As such, if compliance standards differ, so does the availability of safeguards under the DPB. Of relevance to this article, there are three standards of compliance under the DPB in which the safeguards available to a data principal differ. First, cases which fall under Chapter III and hence do not require consent; Chapter III lists grounds for processing personal data without consent. Second, cases which fall under the exemption clauses in Chapter VIII; in such cases, the DPB or some of its provisions would be inapplicable. Clause 35 under Chapter VIII gives the Central Government the power to exempt any agency from the application of the DPB, and clause 36 under Chapter VIII exempts certain processing of personal data from certain provisions. Third, cases which do not fall under either of the above chapters; in such cases, all safeguards available under the DPB would be available to data principals. Consequently, the safeguards available to data principals under each of these standards differ. We will go through each of them separately.
First, if the grounds of processing of CCTV information are such that they fall under the scope of Chapter III of the DPB, wherein the consent requirement is done away with, the notice requirement still has to reflect such purpose, meaning that even if consent is not necessary in certain cases, other requirements under the DPB would still apply. Here, we must note that CCTV deployment by the state on such a large scale may be justified on the basis of the conditions stated under clauses 12 and 14 of the DPB – specifically, the performance of a state function authorised by law, and public interest. The requirement under clause 12 of being "authorised by law" simply means that the state function should have legal backing. Deployment of CCTVs is most likely to fall under clause 12, as various states have enacted legislation providing for CCTV deployment in the name of public safety. As a result, even if clause 12 takes away the requirement of consent in certain cases, data principals should be able to exercise all rights accorded to them under the DPB (Chapter V) except the right to data portability under clause 19.
Second, processing of personal data via CCTVs by government agencies could be exempted from the DPB under clause 35 in certain cases. Another exemption that is particularly concerning with regard to the use of CCTVs is the one provided under clause 36(a). Clause 36(a) says that the provisions of Chapters II-VII would not apply where data is processed in the interest of prevention, detection, investigation, and prosecution of any offence under the law. Chapters II-VII govern the obligations of data fiduciaries, grounds where consent would not be required, personal data of children, rights of data principals, transparency and accountability measures, and restrictions on the transfer of personal data outside India, respectively. In these cases, the requirement of fair and reasonable processing under clause 5 would also not apply. As a broad justification offered for CCTV deployment by the government is crime control, it is possible that the clause 36(a) exemption could be used to exclude the processing of CCTV footage from the above-mentioned safeguards.
From the above discussion, the following can be concluded. First, if the grounds of processing fall under Chapter III, then the standards of fair and reasonable processing, the notice requirement, and all rights except the right to data portability under clause 19 would be available to data principals. Second, if the grounds of processing fall under clause 36, then the consent requirement, the notice requirement, and the rights under the DPB would be unavailable, as that clause mandates the non-application of those chapters; in such a case, even the requirement of processing in a fair and reasonable manner stands suspended. Third, if the grounds of processing of CCTV information do not fall under Chapter III, then all obligations listed under Chapter II would have to be followed, and the data principal would be able to exercise all the rights available under Chapter V of the DPB.
Constitutional Standards
When the Supreme Court recognised privacy as a fundamental right in Puttaswamy v. Union of India ("Puttaswamy"), it located the principles of informed consent and purpose limitation as central to informational privacy. It recognised that privacy inheres not in spaces but in the individual. It also recognised that privacy is not an absolute right and that certain restrictions may be imposed on its exercise. Before listing the constitutional standards that activities infringing privacy must adhere to, it is important to answer whether there exists a reasonable expectation of privacy in footage from CCTVs deployed by the State in a public space.
In Puttaswamy, the court recognised that privacy is not denuded in public spaces. Writing for the plurality judgement, Chandrachud J. recognised that the notion of a reasonable expectation of privacy has elements both of a subjective and objective nature. Defining these concepts, he writes, “Privacy at a subjective level is a reflection of those areas where an individual desire to be left alone. On an objective plane, privacy is defined by those constitutional values which shape the content of the protected zone where the individual ought to be left alone…hence while the individual is entitled to a zone of privacy, its extent is based not only on the subjective expectation of the individual but on an objective principle which defines a reasonable expectation.” Note how in the above sentences, the plurality judgement recognises “a reasonable expectation” to be inherent in “constitutional values”. This is important as the meaning of what’s reasonable is to be constituted according to constitutional values and not societal norms. A second consideration that the phrase “reasonable expectation of privacy” requires is that an individual’s reasonable expectation is allied to the purpose for which the information is provided, as held in the case of Hyderabad v. Canara Bank (“Canara Bank”). Finally, the third consideration in defining the phrase is that it is context dependent. For example, in the case of In the matter of an application by JR38 for Judicial Review (Northern Ireland) 242 (2015) (link here), the UK Supreme Court was faced with a scenario where the police published the CCTV footage of the appellant involved in riotous behaviour. The question before the court was: “Whether the publication of photographs by the police to identify a young person suspected of being involved in riotous behaviour and attempted criminal damage can ever be a necessary and proportionate interference with that person’s article 8 [privacy] rights?” The majority held that there was no reasonable expectation of privacy in the case because of the nature of the criminal activity the appellant was involved in. However, the majority’s formulation of this conclusion was based on the reasoning that “expectation of privacy” was dependent on the “identification” purpose of the police. The court stated, “Thus, if the photographs had been published for some reason other than identification, the position would have been different and might well have engaged his rights to respect for his private life within article 8.1”. Therefore, as the purpose of publishing the footage was “identification” of the wrongdoer, the reasonable expectation of privacy stood excluded. The Canara Bank case was relied on by the SC in Puttaswamy. The plurality judgement in Puttaswamy also quoted the above paragraphs from the UK Supreme Court judgement.
Finally, the SC in the Aadhaar case laid down the factors of a "reasonable expectation of privacy." Relying on those factors, the Supreme Court observed that demographic information and photographs do not raise a reasonable expectation of privacy. It further held that face photographs used for the purpose of identification are not covered by a reasonable expectation of privacy. As this author has recognised, the majority in the Aadhaar case misconstrued the "reasonable expectation of privacy" to lie not in constitutional values, as held in Puttaswamy, but in societal norms. Even with this misapplication of the Puttaswamy principles by the majority in Aadhaar, it is clear that the exclusion of a "reasonable expectation of privacy" in face photographs is valid only for the purpose of "identification". For purposes other than "identification", there should exist a reasonable expectation of privacy in CCTV footage. Having recognised the existence of a "reasonable expectation of privacy" in CCTV footage, let us see how the safeguards mentioned under the DPB measure up to the constitutional standards of privacy laid down in Puttaswamy.
The bench in Puttaswamy located privacy not only in Article 21 but in the entirety of Part III of the Indian Constitution. Where a transgression of privacy relates to different provisions under Part III, the tests evolved under those Articles would apply. Puttaswamy recognised that national security and crime control are legitimate state objectives. However, it also recognised that any limitation on the right must satisfy the proportionality test, which requires a legitimate state aim, rational nexus, necessity, and a balancing of interests. Infringement of the right to privacy occurs under the first and second standards. The first requirement of proportionality stands justified, as national security and crime control have been recognised to be legitimate state objectives. However, it must be noted that the EU Guidelines on Processing of Personal Data through Video Devices state that the mere purpose of "safety" or "for your safety" is not sufficiently specific and is contrary to the principle that personal data shall be processed lawfully, fairly, and in a transparent manner in relation to the data subject. The second requirement is a rational nexus; as stated above, there is little correlation between crime control and surveillance measures. Even if the state justifies a rational nexus between the state aim and the action employed, it is at the necessity stage of the proportionality test that CCTV surveillance measures fail (as explained by this author). Necessity requires us to draw up a list of alternatives and their impact on the individual, and then conduct a balancing analysis with regard to those alternatives. Here, judicial scrutiny of the exemption order under clause 35 is a viable alternative that respects individual rights while not interfering with the state's aim.
Conclusion
Informed consent and purpose limitation were stated to be central principles of informational privacy in Puttaswamy. Among the three standards we identified, the principles of informed consent and purpose limitation remain available only in the third standard. In the first standard, even though the requirement of consent becomes unavailable, the principle of purpose limitation would still be applicable to the processing of such data. The second standard is of particular concern, wherein neither of those principles is available to data principals. It is worth mentioning here that in large-scale monitoring activities such as CCTV surveillance, the safeguards which the DPB lists out would inevitably face an implementation flaw. The reason is that in scenarios where individuals refuse consent for large-scale CCTV monitoring, what alternatives would the government offer to those individuals? Practically, CCTV surveillance would fall under the clause 12 standard, where consent would not be required. Even in those cases, would the notice requirement safeguard be diminished to "you are under surveillance" notices? When we talk about the exercise of rights available under the DPB, how would an individual effectively exercise their rights when the data processing is not limited to a particular individual? These questions arise because the safeguards under the DPB (and data protection laws in general) are based on individualistic notions of privacy. Interestingly, individual use cases of CCTVs have also increased alongside the increase in state use of CCTVs. Deployment of CCTVs for personal or domestic purposes would be exempt from the above-mentioned compliances, as that would fall under the exemption provision of clause 36(d). Two additional concerns arise in relation to the processing of data from CCTVs – the JPC report's inclusion of Non-Personal Data ("NPD") within the ambit of the DPB, and the government's plan to develop a National Automated Facial Recognition System ("AFRS"). A significant part of the data collected by CCTVs would fall within the ambit of NPD. With the JPC's recommendation, it will be interesting to follow the processing standards for NPD under the DPB. The AFRS has been imagined as a national database of photographs gathered from various agencies, to be used in conjunction with facial recognition technology. The use of facial recognition technology with CCTV cameras raises concerns surrounding biometric data and the risk of large-scale profiling. Indeed, clause 27 of the DPB reflects this risk and mandates a data protection impact assessment to be undertaken by the data fiduciary for processing involving new technologies, large-scale profiling, or the use of biometric data; however, the DPB does not define what "new technology" means. Concerns around biometric data are outside the scope of the present article; however, it would be interesting to look at how the use of facial recognition technology with CCTVs could impact the safeguards under the DPB.
Comments to the Draft National Health Data Management Policy 2.0
This is a joint submission on behalf of (i) Access Now, (ii) Article 21, (iii) Centre for New Economic Studies, (iv) Center for Internet and Society, (v) Internet Freedom Foundation, (vi) Centre for Justice, Law and Society at Jindal Global Law School, (vii) Priyam Lizmary Cherian, Advocate, High Court of Delhi, (ix) Swasti-Health Catalyst, and (x) Population Fund of India.
At the outset, we would like to thank the National Health Authority (NHA) for inviting public comments on the draft version of the National Health Data Management Policy 2.0 (NDHM Policy 2.0) (Policy). We have not provided comments on each section/clause but have instead highlighted specific broad concerns which we believe are essential to address prior to the launch of the NDHM Policy 2.0.
Read on to view the full submission here
CIS Issue Brief on regulating Crypto-asset advertising in India
Over the past decade, crypto-assets have established themselves within the digital global zeitgeist. Crypto-asset (alternatively referred to as cryptocurrency) trading and investments continue to skyrocket, with centralised crypto exchanges seeing upwards of USD 14 trillion (or around INR 1086 trillion) in trading volume.
One of the key elements behind this exponential growth and the embedding of crypto-assets into the global cultural consciousness has been the marketing and advertising efforts of crypto-asset providers and crypto-asset-related service providers. In India alone, crypto-exchange advertisements have permeated all forms of media and seem to be increasing as the market continues to mature. At the same time, however, financial regulators such as the RBI have consistently pointed out concerns associated with crypto-assets, even going so far as to warn consumers and investors, through a multitude of circulars, of the dangers that may arise from investing in crypto-assets.
In light of this, we analyse the regulations governing crypto-assets in India by examining the potential and actual limitations posed by them. We then compare them with the regulations governing the advertising of another financial instrument, mutual funds. Finally, we perform a comparative analysis of crypto-asset advertising regulations in four jurisdictions - the EU, Singapore, Spain, and the United Kingdom - and identify clear and actionable recommendations that policymakers can implement to ensure the safety and fairness of crypto-asset advertising in India.
The full issue brief can be accessed here.
Making Voices Heard
We believe that voice interfaces have the potential to democratise the use of the internet by addressing limitations related to reading and writing on digital text-only platforms and devices. This report examines the current landscape of voice interfaces in India, with a focus on concerns related to privacy and data protection, linguistic barriers, and accessibility for persons with disabilities (PwDs).
The report features a visual mapping of 23 voice interfaces and technologies publicly available in India, along with a literature survey, a policy brief towards development and use of voice interfaces and a design brief documenting best practices and users’ needs, both with a focus on privacy, languages, and accessibility considerations, and a set of case studies on three voice technology platforms. Read and download the full report here
Credits
Research: Shweta Mohandas, Saumyaa Naidu, Deepika Nandagudi Srinivasa, Divya Pinheiro, and Sweta Bisht.
Conceptualisation, Planning, and Research Inputs: Sumandro Chattapadhyay, and Puthiya Purayil Sneha.
Illustration: Kruthika NS (Instagram @theworkplacedoodler). Website Design: Saumyaa Naidu. Website Development: Sumandro Chattapadhyay and Pranav M Bidare.
Review and Editing: Puthiya Purayil Sneha, Divyank Katira, Pranav M Bidare, Torsha Sarkar, Pallavi Bedi, and Divya Pinheiro.
Copy Editing: The Clean Copy
Working paper on Non-Financial Use Cases of Blockchain Technology
Ever since its initial conceptualisation in 2009, blockchain technology has been synonymous with financial products and services - most notably crypto-assets like Bitcoin. However, while often associated with the financial sector, blockchain technology represents an opportunity for multiple industries to reinvent and improve their legacy processes. In India, the 2020 Discussion Paper on Blockchain Technology by the NITI Aayog as well as the National Blockchain Strategy of 2021 by the Ministry of Electronics and Information Technology have attempted to articulate this opportunity. These documents examine the potential benefits that would arise from blockchain's introduction across multiple non-financial sectors.
This working paper examines three specific use cases mentioned in the above-mentioned government documents: land record management, certification verification, and pharmaceutical supply chain management. We provide an overview of what blockchain technology is and document the ongoing attempts to integrate it into the aforementioned fields. We also assess the possible costs and benefits associated with blockchain's introduction and draw insights from instances of such integration in other jurisdictions.
The full working paper can be found here.
The Government’s Increased Focus on Regulating Non-Personal Data: A Look at the Draft National Data Governance Framework Policy
Introduction
Non-Personal Data ('NPD') can be understood as any information not relating to an identified or identifiable natural person. The origin of such data can be both human and non-human. Human NPD is data that has been anonymised in such a way that the person to whom it relates cannot be re-identified. Non-human NPD is data that did not relate to a human being in the first place, for example, weather data. The government has shown a gradual but demonstrable interest in NPD in recent times. This new focus on regulating non-personal data can be attributed to the economic incentive it offers. In its report released in 2018, the Srikrishna Committee agreed that NPD holds considerable strategic or economic interest for the nation; however, it left the questions surrounding NPD to a future committee.
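As a rough illustration of the distinction drawn above, the sketch below strips direct identifiers from a record and generalises a quasi-identifier so that the remainder can be treated as human NPD. The field names are hypothetical, and merely dropping identifiers is closer to pseudonymisation; robust anonymisation would additionally rely on techniques such as aggregation or k-anonymity to keep re-identification risk low.

```python
# Purely illustrative sketch of turning a record containing personal data into
# "human NPD" by stripping direct identifiers and coarsening a quasi-identifier.
# Field names are hypothetical. Dropping identifiers alone is closer to
# pseudonymisation; robust anonymisation would also use aggregation,
# k-anonymity, or similar techniques to keep re-identification risk low.

DIRECT_IDENTIFIERS = {"name", "phone", "aadhaar", "email"}


def to_human_npd(record: dict) -> dict:
    """Drop direct identifiers and generalise age into a decade band."""
    stripped = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "age" in stripped:
        stripped["age_band"] = f"{(stripped.pop('age') // 10) * 10}s"
    return stripped


if __name__ == "__main__":
    raw = {"name": "A. Kumar", "phone": "98xxxxxx01", "age": 34,
           "city": "Delhi", "commute_mode": "metro"}
    print(to_human_npd(raw))
    # -> {'city': 'Delhi', 'commute_mode': 'metro', 'age_band': '30s'}
```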
History of NPD Regulation
In 2020, the Ministry of Electronics and Information Technology ('MEITY') constituted an expert committee ('NPD Committee') to study various issues relating to NPD and to make suggestions on its regulation. The NPD Committee differentiated NPD into human and non-human NPD based on the data's origin: human NPD includes all information that has been stripped of any personally identifiable information, while non-human NPD means information that did not contain any personally identifiable information in the first place (e.g., weather data). The final report of the NPD Committee is awaited, but the Committee came out with a revised draft of its recommendations in December 2020. In that report, the NPD Committee proposed the creation of a Non-Personal Data Authority ('NPDA'), as it felt this is a new and emerging area of regulation. Thereafter, the Joint Parliamentary Committee on the Personal Data Protection Bill, 2019 ('JPC') came out with its version of the Data Protection Bill, amending the short title of the PDP Bill, 2019 to the Data Protection Bill, 2021 and widening the ambit of the Bill to include all types of data. The JPC report focuses only on human NPD, noting that non-personal data is essentially derived from one of three sets of data - personal data, sensitive personal data, and critical personal data - which is either anonymised or in some way converted into non-re-identifiable data.
On February 21, 2022, MEITY came out with the Draft India Data Accessibility and Use Policy, 2022 ('Draft Policy'). The Draft Policy was strongly criticised, mainly for its aim to monetise data through its sale and licensing to body corporates. The Draft Policy had stated that anonymised and non-personal data collected by the State that has "undergone value addition" could be sold for an "appropriate price". During the Draft Policy's consultation process, it was withdrawn several times and then finally removed from the website. The National Data Governance Framework Policy ('NDGF Policy') is a successor to this Draft Policy. There is a change in the language put forth in the NDGF Policy from the Draft Policy, which mainly focused on monetary growth. The new NDGF Policy aims to regulate anonymised non-personal data ('NPD') kept with governmental authorities and make it accessible for research and improving governance. It proposes the creation of an 'India Datasets programme' which will consist of the aforementioned datasets. While MEITY has opened the draft for public comments, there is a need to spell out the procedure in greater detail so that stakeholders can draft recommendations on the NDGF Policy in an informed manner. Through this piece, we discuss the NDGF Policy in terms of issues related to the absence of a comprehensive data protection framework in India and the jurisdictional overlap between authorities under the NDGF Policy and the DPB.
What the National Data Governance Framework Policy Says
Presently in India, NPD is stored across a variety of governmental departments and bodies. It is difficult to access and use this stored data for governmental functions without modernising the collection and management of governmental data. Through the NDGF Policy, the government aims to build an Indian data storehouse of anonymised non-personal datasets and make it accessible both for improving governance and for encouraging research. It imagines the establishment of an Indian Data Office ('IDO') set up by MEITY, which shall be responsible for consolidating data access and the sharing of non-personal data across the government. In addition, it also mandates a Data Management Unit for every Ministry/department that would work closely with the IDO. The IDO will also be responsible for issuing protocols for sharing NPD. The policy further imagines an Indian Data Council ('IDC') whose function would be to define frameworks for important datasets, finalise data and metadata standards, and review the implementation of the policy. The NDGF Policy provides a broad structure concerning the setting up of anonymisation standards, data retention policies, data quality standards, and a data sharing toolkit. The NDGF Policy states that these standards shall be developed and notified by the IDO, MEITY, or the Ministry in question, and must be adhered to by all entities.
The Data Protection Framework in India
The report adopted by the JPC felt that it is simpler to enact a single law with a single regulator to oversee all the data that originates from any data principal and is in the custody of any data fiduciary. According to the JPC, the draft Bill deals with various kinds of data at various levels of security. The JPC also recommended that since the Data Protection Bill ('DPB') will handle both personal and non-personal data, any further policy or legal framework on non-personal data may be made a part of the same enactment instead of a separate legislation. The draft DPB states that what is to be done with NPD shall be decided by the government from time to time according to its policy. As such, neither the DPB, 2021 nor the NDGF Policy goes into the details of regulating NPD; both only provide a broad structure for facilitating the free flow of NPD, without taking into account the specific concerns that have been raised since the NPD Committee came out with its draft report on regulating NPD in December 2020.
Jurisdictional overlaps among authorities and other concerns
Under the NDGF Policy, all guidelines and rules shall be published by a body known as the Indian Data Management Office ('IDMO'). The IDMO is set to function under MEITY and work with the Central government, state governments, and other stakeholders to set standards. Currently, there is no indication of when the DPB will be passed into law. According to the JPC, the reason for including NPD within the DPB was the impossibility of differentiating between personal data and NPD. There are also certain overlaps between the DPB and the NDGF Policy which the latter does not discuss; in particular, it does not address the overlap between the IDMO and the Data Protection Authority ('DPA') established under the DPB, 2021.
Under the DPB, the DPA is tasked with specifying codes of practice under clause 49. On the other hand, the NDGF Policy imagines the setting up of the IDO, the IDMO, and the IDC, which shall be responsible for issuing codes of practice on matters such as data retention, data anonymisation, and data quality standards. As such, there appears to be some overlap between the functions of the to-be-constituted DPA and those envisaged under the NDGF Policy.
Furthermore, while the NDGF Policy aims to promote openness with respect to government data, there is a conflict with open government data (‘OGD’) principles when there is a price attached to such data. OGD is data which is collected and processed by the government for free use, reuse and distribution. Any database created by the government must be publicly accessible to ensure compliance with the OGD principles.
Conclusion
Streamlining datasets across different authorities is a huge challenge for the government, and hence the NDGF Policy in its current draft requires considerable clarification. The government can take inspiration from the European Union, which in 2018 came out with a principles-based approach coupled with self-regulation in its framework for the free flow of non-personal data. The EU guidance on the free flow of non-personal data defines non-personal data based on the origin of the data: data which originally did not relate to any personal data (non-human NPD) and data which originated from personal data but was subsequently anonymised (human NPD). The regulation further recognises the reality of mixed datasets and regulates only the non-personal part of such datasets; where the two parts are inextricably linked, the GDPR applies to the whole dataset. Moreover, any policy that seeks to govern the free flow of NPD ought to make it clear that, in case of re-identification of anonymised data, such re-identified data would be considered personal data. The DPB, 2021 and the NDGF Policy both fail to take this into account.
Central Bank Digital Currencies: A solution to India’s financial woes or just a piece of the puzzle?
Central Bank Digital Currencies (CBDCs) have, over the last couple of years, stepped firmly into the global financial spotlight. India is no exception to this trend, with both the Reserve Bank of India (RBI) and the Finance Minister referring to an Indian CBDC that is currently under development.
With the introduction of this CBDC now a matter of when and not if, India and many other countries stand on the precipice of re-imagining their financial systems. It is therefore imperative that any attempt at introducing a CBDC is preceded by a detailed analysis of its scope, benefits, limitations, and how it has been implemented in other jurisdictions. This policy brief looks to achieve that by examining the form a CBDC could take, what its policy goals would be in India, the considerations the RBI would have to account for, and whether a CBDC would work in present-day India. Finally, it also looks at the case of Nigeria to draw insights that could be applied to the introduction and operationalisation of a CBDC in the Indian context.
The full issue brief can be accessed here.
Comments to the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
These comments examine whether the proposed amendments are in adherence to established principles of constitutional law, intermediary liability and other relevant legal doctrines. We thank the Ministry of Electronics and Information Technology (MEITY) for allowing us this opportunity. Our comments are divided into two parts. In the first part, we reiterate some of our comments to the existing version of the rules, which we believe holds relevance for the proposed amendments as well. And in the second part, we provide issue-wise comments that we believe need to be addressed prior to finalising the amendments to the rules.
To access the full text of the Comments to the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, click here
What Are The Consumer Protection Concerns With Crypto-Assets?
The article was published in Medianama on July 8, 2022
Crypto-asset regulation is at the forefront of Indian financial regulators' minds. On the 6th of June, the Securities and Exchange Board of India (SEBI), in a response to the Parliamentary Standing Committee on Finance, expressed clear consumer protection concerns associated with crypto-assets.
This statement follows multiple notices issued by the Reserve Bank of India (RBI) warning consumers of the risks related to crypto-assets, and even a failed attempt to prevent banks from transacting with any individual trading crypto-assets. Yet, in spite of these multiple warnings, and a significant drop in trading volume due to the introduction of a new taxation structure, crypto-assets still have managed to establish themselves as a legitimate financial instrument in the minds of many.
Recent global developments, however, seem to validate the concerns held by both the RBI and SEBI.
The bear market that crypto finds itself in has sent shockwaves throughout the ecosystem, crippling some of the most established tokens in the space. Take, for example, the death spiral of the algorithmic stablecoin Terra USD and its sister token Luna, with Terra USD going from a top-10-traded crypto-token to being practically worthless. The volatility of token prices has had a significant knock-on effect on crypto-related services. Following Terra's crash, the centralised finance (CeFi) platform Celsius, which provided quasi-banking facilities for crypto holders, halted all withdrawals. More recently, the crypto-asset hedge fund Three Arrows also filed for bankruptcy following its inability to meet its debt obligations and protect its assets from creditors looking to get their money back.
Underpinning these stories of failing corporations are the very real experiences of investors and consumers—many of whom have lost a significant amount of wealth. This has been a direct result of the messaging around crypto-assets. Crypto-assets have been promoted through popular culture as a means of achieving financial freedom and accruing wealth quickly. It is this narrative that lured numerous regular citizens to invest substantial portions of their income into crypto-asset trading. At the same time, the crypto-asset space is littered with a number of scams and schemes designed to trick unaware consumers. These schemes, primarily taking the form of ‘pump and dump’ schemes, represent a significant issue for investors in the space.
It seems, therefore, that any attempt to ensure consumer protection in the crypto-space must adopt two key strategies:
- First, it must re-orient the narrative away from crypto as a simple means of getting wealthy and ensure that consumers who invest in crypto do so with full knowledge of the risks associated with crypto-assets; and
- Second, it must provide consumers with sufficient recourse in cases where they have been subject to fraud.
In this article, we examine the existing regulatory framework around grievance redressal for consumers in India—and whether these safeguards are sufficient to protect consumers trading crypto-assets. We further suggest practical measures that the government can adopt going forward.
What is the Current Consumer Protection Framework Around Crypto-assets?
Safeguards Under the Consumer Protection Act and E-commerce Rules
The increased adoption of e-commerce by consumers in India forced legislators to address the lack of regulation for the protection of consumer interests. This legislative expansion may extend to protecting the interests of investors and consumers trading in crypto-assets.
The groundwork for consumer welfare was laid in the new Consumer Protection Act, 2019 which defined e-commerce as the “buying or selling of goods or services including digital products over digital or electronic network.” It also empowered the Union Government to take measures and issue rules for the protection of consumer rights and interests, and the prevention of unfair trade practices in e-commerce.
Within a year, the Union Government exercised its power to issue operative rules known as the Consumer Protection (E-Commerce) Rules, 2020 (the “Rules”), which amongst other things, sought to prohibit unfair trade practices across all models of e-commerce. The Rules define an e-commerce entity as one which owns, operates or manages a digital or electronic facility or platform (which includes a website as well as mobile applications) for electronic commerce.
The definition of e-commerce is not limited only to physical goods but also includes services as well as digital products. So, one can plausibly assume that it would be applicable to a number of crypto-exchanges, as well as certain entities offering decentralized finance (DeFi) services. This is because crypto tokens—be they cryptocurrencies like Bitcoin, Ethereum, or Dogecoin—are not considered currency or securities within Indian law, but can plausibly be characterised as digital products.
The fact that the digital products being traded on the e-commerce entity originated outside Indian territory would make no difference as far as the applicability of the Rules is concerned. The Rules apply even to e-commerce entities not established in India, but which systematically offer goods or services to consumers in India. The concept of systematically offering goods or services across territorial boundaries appears to have been taken from the E-evidence Directive of the European Union and seeks to target only those entities which intend to do substantial business within India while excluding those who do not focus on the Indian market and have only a minuscule presence here.
Additionally, the Rules impose certain duties and obligations on e-commerce entities, such as:
- The appointment of a nodal officer or a senior designated functionary who is resident in India, to ensure compliance with the provisions of the Consumer Protection Act;
- The prohibition on the adoption of any unfair trading practices, thereby making the most important requirements of consumer protection applicable to e-commerce;
- The establishment of a grievance redressal mechanism and specifying an outer limit of one month for redressal of complaints;
- The prohibition on imposing cancellation charges on the consumer, unless a similar charge is also borne by the e-commerce entity if it cancels the purchase order unilaterally for any reason;
- The prohibition on price manipulation to gain unreasonable profit by imposing an unjustified price on the consumers;
- The prohibition on discrimination between consumers of the same class or an arbitrary classification of consumers that affects their rights; etc.
The Rules also impose certain liabilities on e-commerce entities relating to the tracking of shipments, the accuracy of the information on the goods or services being offered, information and ranking of sellers, tracking complaints, and information regarding payment mechanisms. Most importantly, the Rules explicitly make the grievance redressal mechanism under the Consumer Protection Act, 2019 applicable to e-commerce entities in case they violate any of the requirements under the Rules.
At present, therefore, crypto-exchanges and crypto-service providers clearly fall within the ambit of consumer protection legislation in India. In practical terms, this means that consumers can rest assured that in any crypto transaction their rights must be accounted for by the service provider.
With crypto-related scams exploding globally since 2021, it is likely that Indian investors will come into contact with, or be subject to, various scams and schemes in the crypto marketplace. It is therefore imperative that consumers and investors are aware of the steps they can take in case they fall victim to a scam. Currently, any consumer who is the victim of a fraud or scam in the crypto space would, under the current legal regime, have two primary redressal remedies:
- Lodging a criminal complaint with the police, usually the cyber cell, regarding the fraud. It then becomes the police’s responsibility to investigate the case, trace the perpetrators, and ensure that they are held accountable under relevant legal provisions.
- Lodging a civil complaint before the consumer forum or even the civil courts claiming compensation and damages for the loss caused. In this process, the onus is on the consumer to follow up and prove that they have been defrauded.
Filing a consumer complaint may impose an extra burden on the consumer to prove the fraud—especially if the consumer is unable to get complete and accurate information regarding the transaction. Additionally, in most cases, a consumer complaint is filed when the perpetrator is still accessible and can be located by the consumer. However, in case the perpetrator has absconded, the consumer would have no choice but to lodge a criminal complaint. That said, if the perpetrators have already absconded, it may be difficult even for the police to be of much help considering the anonymity that is built into technology.
Therefore, perhaps the best protection that can be afforded to the consumer is where the regulatory regime is geared towards the prevention of frauds and scams by establishing a licensing and supervisory regime for crypto businesses.
A Practical Guide to Consumer Protection and Crypto-assets
What is apparent is that existing regulations are not sufficient to cover the extent of protection that a crypto-investor would require. Ideally, this gap would be covered by dedicated legislation that looks to cover the range of issues within the crypto-ecosystem. However, in the absence of the (still pending) government crypto bill, we are forced to consider how consumers can currently be protected and made aware of the risks associated with crypto-assets.
On the question of informing customers of the risks associated, we must address one of the primary means through which consumers become aware of crypto-assets: advertising. Currently, crypto-asset advertising follows a code set down by the Advertising Standards Council of India, a self-regulating, non-government body. As such, there is currently no government body that enforces binding advertising standards on crypto and crypto-service providers.
While self-regulation has generally been an acceptable practice in the case of advertising, the advertising of financial products has differed slightly. For example, Schedule VI of the Securities and Exchange Board of India (Mutual Funds) Regulations, 1996, lays down detailed guidelines associated with the advertising of mutual funds. Crypto-assets can, depending on their form, perform similar functions to currencies, securities, and assets. Moreover, they carry a clear financial risk—as such their advertising should come under the purview of a recognised financial regulator. In the absence of a dedicated crypto bill, an existing regulator—such as SEBI or the RBI—should use their ad-hoc power to bring crypto-assets and their advertising under their purview.
This would allow the government not only to ensure that advertising guidelines are followed, but also to dictate the exact nature of these guidelines. It could then issue standards pertaining to disclaimers and prevent crypto-service providers from advertising crypto as being easy to understand, as having a guaranteed return on investment, or with other misleading messages.
Moreover, financial institutions such as the RBI and SEBI may consider increasing efforts to inform consumers of the financial and economic risks associated with crypto-assets by undertaking dedicated public awareness campaigns. Strongly enforced advertising guidelines, coupled with widespread and comprehensive awareness efforts, would allow the average consumer to understand the risks associated with crypto-assets, thereby re-orienting the prevailing narrative around them.
On the question of providing consumers with clear recourse, current financial regulators might consider setting up a joint working group to examine the extent of financial fraud associated with crypto-assets. Such a body can be tasked with providing consumers with clear information related to crypto-asset scams and schemes, how to spot them, and the next steps they must take in case they fall victim to one.
Aman Nair is a policy officer at the Centre for Internet & Society (CIS), India, focusing on fintech, data governance, and digital cooperative research. Vipul Kharbanda is a non-resident fellow at CIS, focusing on the fintech research agenda of the organisation.
Deployment of Digital Health Policies and Technologies: During Covid-19
Digitisation of public services in India began with taxation, land record keeping, and passport details recording, but it was soon extended to cover most governmental services - with the latest being public health. The digitisation of the healthcare system in India had begun prior to the pandemic. However, given the push digital health has received in recent years, especially with an increase in the intensity of activity during the pandemic, we thought it was important to undertake a comprehensive study of India's digital health policies and their implementation. The project report comprises a desk-based review of the existing literature on digital health technologies in India and interviews with on-field healthcare professionals who are responsible for implementing these technologies on the ground.
The report by Privacy International and the Centre for Internet & Society can be accessed here.
Surveillance Enabling Identity Systems in Africa: Tracing the Fingerprints of Aadhaar
In this report, we identify the different external actors influencing this “developmental” agenda. These range from philanthropic organisations, private companies, and technology vendors, to state and international institutions. Most notable among these is the World Bank, whose influence we investigated in the form of case studies of Nigeria and Kenya. We also explored the influence of the “success” of the Aadhaar programme in India on these new ID systems. A key characteristic of the growing “digital identity for development” trend is the consolidation of different databases that record beneficiary data for government programmes into one unified platform, accessed by a unique biometric ID. This “Aadhaar model” has emerged as a default model to be adopted in developing countries, with little concern for the risks it introduces. Read and download the full report here.
NHA Data Sharing Guidelines – Yet Another Policy in the Absence of a Data Protection Act
Reviewed and edited by Anubha Sinha
Launched in 2018, PM-JAY is a public health insurance scheme set to cover 10 crore poor and vulnerable families across the country for secondary and tertiary care hospitalisation. Eligible candidates can use the scheme to avail of cashless benefits at any public/private hospital falling under this scheme. Considering the scale and sensitivity of the data, the creation of a well-thought-out data-sharing document is a much-needed step. However, the document – though only a draft – has certain portions that need to be reconsidered, including parts that are not aligned with other healthcare policy documents. In addition, the guidelines should be able to work in tandem with the Personal Data Protection Act whenever it comes into force. With no prior intimation of the publication of the guidelines, and the provision of a mere 10 days for consultation, there was very little scope for stakeholders to submit their comments and participate in the consultation. While the guidelines pertain to the PM-JAY scheme, it is an important document to understand the government’s concerns and stance on the sharing of health data, especially by insurance companies.
Definitions: Ambiguous and incompatible with similar policy documents
The draft guidelines add to the list of health data–related policies that have been published since the beginning of the pandemic. These include three draft health data management policies published within two years, which have already covered the sharing and management of health data. The draft guidelines repeat the pattern of earlier policies on health data, wherein there is no reference to the policies that predated them; in this case, the guidelines fail to refer to the draft National Digital Health Data Management Policy (published in April 2022). To add to this, the document – by placing the definitions at the end – is difficult to read and understand, especially when terms such as ‘beneficiary’, ‘data principal’, and ‘individual’ are used interchangeably. In the same vein, the document uses the terms ‘data principal’ and ‘data fiduciary’, and the definitions of health data and personal data, from the 2019 PDP Bill, while also referring to the SPDI Rules under the IT Act and their definition of ‘sensitive personal data’. While the guidelines state that the IT Act and Rules will be the legislation to refer to for these guidelines, it is to be noted that the SPDI Rules under the IT Act cover ‘body corporates’, which, under Section 43A(1), is defined as “any company and includes a firm, sole proprietorship or other association of individuals engaged in commercial or professional activities”. It is difficult to assign responsibility and accountability to the organisations under the guidelines when they might not even be covered under this definition.
With each new policy, civil society organisations have pointed out the need to have a data protection act in place before introducing policies and guidelines that deal with the processing and sharing of individuals’ data. Ideally, these policies – even in draft form – should have been published after the Personal Data Protection Bill was enacted, to ensure consistency with the provisions of the law. For example, the guidelines introduce a new category of governance mechanism in the form of a data-sharing committee headed by a data-sharing officer (DSO). The responsibilities and powers of the DSO are similar to those of the data protection officer (DPO) under the draft PDP Bill as well as the draft National Digital Health Data Management Policy. This, in turn, raises the question of whether the DSO and the DPOs under the PDP Bill and the draft health data management policy will have the same responsibilities. Clarity in terms of which of the policies are in force and how they intersect is needed to ensure smooth implementation. Ideally, the reliance on multiple sources of definitions should be addressed at the drafting stage itself.
Guiding Principles: Need to look beyond privacy
The guidelines enumerate certain principles to govern the use, collection, processing, and transmission of the personal or sensitive personal data of beneficiaries. These principles are accountability, privacy by design, choice and consent, openness/transparency, etc. While these provisions are much needed, their explanation at times misses the mark on why these principles were added. For example, in the case of accountability, the guidelines state that the ‘data fiduciary’ shall be accountable for complying with measures based on the guiding principles. However, they do not specify who the fiduciaries would be accountable to and what the steps are to ensure accountability. Similarly, in the case of openness and transparency, the guidelines state that the policies and practices relating to the management of personal data will be available to all stakeholders. However, openness and transparency need to go beyond policies and practices and should consider other aspects of openness, including open data and the use of open-source software and open standards. This would also add to transparency by specifying the rights of the data principal, as the current draft looks at these rights merely from a privacy perspective. In the case of purpose limitation as well, the guidelines are tied to the privacy notice, which again puts the burden on the individual (in this case, the beneficiary) when the onus should actually be on the data fiduciary. Lastly, under the empowerment of beneficiaries, the guidelines state that the “data principal shall be able to seek correction, amendments, or deletion of such data where it is inaccurate”. The right to deletion should not be conditional on inaccuracy, especially when entering the scheme is optional and consent-based.
Data sharing with third parties without adequate safeguards
The guidelines outline certain cases where personal data can be collected, used, or disclosed without the consent of the individual. One of these cases is when the data is anonymised. However, the guidelines do not detail how this anonymisation would be achieved and ensured through the life cycle of the data, especially when the clause states that the data will also be collected without consent. The guidelines also state that the anonymised data could be used for public health management, clinical research, or academic research. The guidelines should have limited the scope of academic research or added certain criteria for gaining access to the data; the use of vague terminology could lead to this data (sometimes collected without consent) being de-anonymised or used for studies that could cause harm to the data principal or even a particular community. The guidelines state that the data can be shared as ‘protected health information’ with a government agency for oversight activities authorised by law, epidemic control, or in response to court orders. With the sharing of data, care should be taken to ensure data minimisation and purpose limitation that go beyond the explanations added in the body of the guidelines. In addition, the guidelines also introduce the concept of a ‘clean room’, which is defined as “a secure sandboxed area with access controls, where aggregated and anonymised or de-identified data may be shared for the purposes of developing inference or training models”. The definition does not state who will be developing these training models; it would be a cause for worry if AI companies or even insurance companies could use this data to train models that eventually make decisions based on the results. The term ‘sandbox’ is explained under the now revoked DP Bill 2021 as “such live testing of new products or services in a controlled or test regulatory environment for which the Authority may or may not permit certain regulatory relaxations for a specified period for the limited purpose of the testing”. Neither the 2019 Bill nor the IT Act/Rules defines ‘sandbox’; the guidelines should ideally have spent more time explaining how the sandbox system in the ‘clean room’ would work.
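To make the gap concrete, here is a minimal sketch of the kind of safeguard the guidelines leave unspecified: aggregating records and suppressing small groups before anything leaves a clean room. The beneficiary schema, column names, and threshold below are hypothetical illustrations, not a description of the NHA's actual systems.

```python
# Illustrative only: k-anonymity-style aggregation before data leaves a clean room.
# The schema ("district", "age_band") and the threshold K are assumed for the example.
from collections import Counter

K = 10  # groups smaller than K are withheld to reduce re-identification risk

def aggregate_for_release(records, quasi_identifiers=("district", "age_band")):
    """Count beneficiaries per quasi-identifier group and drop groups smaller than K."""
    counts = Counter(tuple(rec[q] for q in quasi_identifiers) for rec in records)
    return {group: n for group, n in counts.items() if n >= K}

# A toy run: the 3-person group is suppressed, the 12-person group is released.
sample = [{"district": "Pune", "age_band": "30-39"}] * 12 + \
         [{"district": "Nagpur", "age_band": "80+"}] * 3
print(aggregate_for_release(sample))
```

Even this toy example involves choices - which fields count as quasi-identifiers, what threshold is safe, how releases are audited - that the guidelines would need to spell out for the ‘clean room’ to be a meaningful safeguard.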
Conclusion
The draft Data Sharing Guidelines are a welcome step in ensuring that the entities sharing and processing data have guidelines to adhere to, especially since the Data Protection Bill has not been passed yet. The mention of best practices for data sharing in the annexures, including practices for people who have access to the data, is a step in the right direction, which could be strengthened with regular training and sensitisation. While the guidelines are a good starting point, they still suffer from the issues that have been highlighted in similar health data policies, including the failure to refer to older policies, the addition of new entities, and the reliance on digital and mobile technology. The guidelines could have added more nuance to the consent and privacy by design sections to ensure other forms of notice, e.g., notice in audio form in different Indian languages. While PM-JAY aims to reach 10 crore poor and vulnerable families, there is a need to look at how to ensure that consent obtained under the guidelines is indeed “free, informed, clear, and specific”.
Getting the (Digital) Indo-Pacific Economic Framework Right
The article was originally published in Directions on 16 September 2022.
It is still early days. Given the broad and noncommittal scope of the economic arrangement, it is unlikely that the IPEF will lead to a trade deal among members in the short run. Instead, experts believe that this new arrangement is designed to serve as a ‘framework or starting point’ for members to cooperate on geo-economic issues relevant to the Indo-Pacific, buoyed in no small part by the United States’ desire to make up lost ground and counter Chinese economic influence in the region.
United States Trade Representative (USTR) Katherine Tai has underscored the relevance of the Indo-Pacific digital economy to the US agenda with the IPEF. She has emphasized the importance of collaboratively addressing key connectivity and technology challenges, including standards on cross-border data flows, data localisation and online privacy, as well as the discriminatory and unethical use of artificial intelligence. This is an ambitious agenda given the divergence among members in terms of technological advancement, domestic policy preferences and international negotiating stances at digital trade forums. There is a significant risk that imposing external standards or values on this evolving and politically-contested digital economy landscape will not work, and may even undermine the core potential of the IPEF in the Indo-Pacific. This post evaluates the domestic policy preferences and strategic interests of the Framework’s member states, and how the IPEF can navigate key points of divergence in order to achieve meaningful outcomes.
State of domestic digital policy among IPEF members
Data localisation is a core point of divergence in global digital policymaking. It continues to dominate discourse and trigger dissent at all international trade forums, including the World Trade Organization. IPEF members have a range of domestic mandates restricting cross-border flows, which vary in scope, format and rigidity (see table below). Most countries only have a conditional data localisation requirement, meaning data can only be transferred to countries where it is accorded an equivalent level of protection – unless the individual whose data is being transferred consents to said transfer. Australia and the United States have sectoral localisation requirements for health and defence data respectively. India presently has multiple sectoral data localisation requirements. In particular, a 2018 Reserve Bank of India (RBI) directive imposed strict local storage requirements along with a 24-hour window for foreign processing of payments data generated in India. The RBI imposed a moratorium on the issuance of new cards by several US-based card companies until compliance issues with the data localisation directive were resolved. Furthermore, several iterations of India’s recently withdrawn Personal Data Protection Bill contained localisation requirements for some categories of personal data.
Indonesia and Vietnam have diluted the scopes of their data localisation mandates to apply, respectively, only to companies providing public services and to companies not complying with other local laws. These dilutions may have occurred in response to concerted pushback from foreign technology companies operating in these countries. In addition to sectoral restrictions on the transfer of geospatial data, South Korea retains several procedural checks on cross-border flows, including formalities regarding providing notice to individual users.
Moving on to another issue flagged by USTR Tai: while all IPEF members recognise the right to information privacy at an overarching or constitutional level, the legal and policy contours of data protection are at different stages of evolution in different countries. Japan, South Korea, Malaysia, New Zealand, the Philippines, Singapore and Thailand have data protection frameworks in place. Data protection frameworks in India and Brunei are under consultation. Notably, the US does not have a comprehensive federal framework on data privacy, although there is a patchwork of data privacy regulations at both the federal and state levels.
Regulation and strategic thinking on artificial intelligence (AI) are also at varying levels of development among IPEF members. India has produced a slew of policy papers on Responsible Artificial Intelligence. The most recent policy paper published by NITI Aayog (the Indian government’s think tank) refers to constitutional values and endorses a risk-based approach to AI regulation, much like that adopted by the EU. The US National Security Commission on Artificial Intelligence (NSCAI), chaired by former Google CEO Eric Schmidt, expressed concerns about the US ceding AI leadership ground to China. The NSCAI’s final report emphasised the need for US leadership of a ‘coalition of democracies’ as an alternative to China’s autocratic and control-oriented model. Singapore has also made key strides on trusted AI, launching AI Verify – the world’s first AI governance testing framework, released as a minimum viable product for companies that wish to demonstrate their use of responsible AI.
IPEF and pipe dreams of digital trade
Some members of the IPEF are signatories to other regional trade agreements. With the exception of Fiji, India and the US, all the IPEF countries are members of the Regional Comprehensive Economic Partnership (RCEP), which also includes China. Seven IPEF member countries are also members of the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), the successor to the Trans-Pacific Partnership that President Trump pulled the US out of in 2017. Several IPEF members also have bilateral or trilateral trading agreements among themselves, an example being the Digital Economy Partnership Agreement (DEPA) between Singapore, New Zealand and Chile.
All these ‘mega-regional’ trading agreements contain provisions on data flows, including prohibitions on domestic legal provisions that mandate local computing facilities or restrict cross-border data transfers. Notably, these agreements also incorporate exceptions to these rules. The CPTPP includes within its ambit an exception on the grounds of ‘legitimate public policy objectives’ of the member, while the RCEP incorporates an additional exception for ‘essential security interests’.
IPEF members are also spearheading multilateral efforts related to the digital economy: Australia, Japan and Singapore are working as convenors of the plurilateral Joint Statement Initiative (JSI) on e-commerce at the World Trade Organization (WTO), which counts 86 WTO members as parties. India (along with South Africa) vehemently opposes this plurilateral push on the grounds that the WTO is a multilateral forum functioning on consensus and that a plurilateral trade agreement should not be negotiated under the aegis of the WTO. They fear, rightly, that such gambits close off the domestic policy space, especially for evolving digital economy regimes where keen debate and contestation exist among domestic stakeholders. While wary of the implications of the JSI, other IPEF members, such as Indonesia, have cautiously joined the initiative to ensure that they have a voice at the table.
It is unlikely that the IPEF will lead to a digital trade arrangement in the short run. Policymaking on issues as complex as the digital economy, which must respond to specific social, economic and (geo)political realities, cannot be steamrolled through external trade agreements. For instance, after the Los Angeles Ministerial, India opted out of the IPEF trade pillar, citing both its evolving domestic legislative framework on data and privacy and a broader lack of consensus among IPEF members on several issues, including digital trade. Commerce Minister Piyush Goyal explained that India would wait for the “final contours” of the digital trade track to emerge before making any commitments.
Besides, brokering a trade agreement through the IPEF runs a risk of redundancy. Already, there exists a ‘spaghetti bowl’ of regional trading agreements that IPEF members can choose from, in addition to forming bilateral trade ties with each other.
This is why Washington has been clear about calling the IPEF an ‘economic arrangement’ and not a trade agreement. Membership does not imply any legal obligations. Rather than duplicating ongoing efforts or setting unrealistic targets, the IPEF is an opportunity for all players to shape conversations, share best practices and reach compromises, which could feed back into ongoing efforts to negotiate trade deals. For example, several members of RCEP have domestic data localisation mandates that do not violate trade deals because the agreement carves out exceptions that legitimise domestic policy decisions. Exchanges on how these exceptions work in future trade agreements could be a part of the IPEF arrangement and nudge states towards framing digital trade negotiations through other channels, including at the WTO. Furthermore, states like Singapore that have launched AI self-governance mechanisms could share best practices on how these mechanisms were developed as well as evaluations of how they have helped policy goals be met. And these exchanges shouldn’t be limited to existing IPEF members. If the forum works well, countries that share strategic interests in the region with IPEF members, including, most notably, the European Union, may also want to get involved and further develop partnerships in the region.
Countering China
Talking shop on digital trade should certainly not be the only objective of the IPEF. The US has made it clear that they want the message emanating from the IPEF ‘to be heard in Beijing’. Indeed, the IPEF offers an opportunity for the reassertion of US economic interests in a region where President Trump’s withdrawal from the CPTPP has left a vacuum for China to fill. Accordingly, it is no surprise that the IPEF has representation from several regions of the Indo-Pacific: South Asia, Southeast Asia and the Pacific.
This should be an urgent policy priority for all IPEF members. Since its initial announcement in 2015, the Digital Silk Road (DSR), the digital arm of China’s Belt and Road Initiative, has spearheaded massive investments by the Chinese private sector (allegedly under close control of the Chinese state) in e-commerce, fintech, smart cities, data centres, fibre optic cables and telecom networks. This expansion has also happened in the Indo-Pacific, unhampered by China’s aggressive geopolitical posturing in the region through maritime land grabs in the South China Sea. With the exception of Vietnam, which remains wary of China’s economic expansionism, countries in Southeast Asia welcome Chinese investments, extolling their developmental benefits. Several IPEF members – including Indonesia, Malaysia and Singapore – have associations with Chinese private sector companies, predominantly Huawei and ZTE. A study evaluating Indonesia’s response to such investments indicates that while they are aware of the risks posed by Chinese infrastructure, their calculus remains unaltered: development and capacity building remain their primary focuses. Furthermore, on the specific question of surveillance, given evidence of other countries such as the US and Australia also using digital infrastructure for surveillance, the threat from China is not perceived as a unique risk.
Setting expectations and approaches
Still, the risks of excessive dependence on one country for the development of digital infrastructure are well known. While the IPEF cannot realistically expect to displace the DSR, it can be utilised to provide countries with alternatives. This can only be done by issuing carrots rather than sticks. A US narrative extolling ‘digital democracy’ is unlikely to gain traction in a region characterised by a diversity of political systems that is focused on economic and development needs. At the same time, an excessive focus on thorny domestic policy issues – such as data localisation and the pipe dream of yet another mega-regional trade deal – could risk derailing the geo-economic benefits of the IPEF.
Instead, the IPEF must focus on capacity building, training and private sector investment in infrastructure across the Indo-Pacific. The US must position itself as a geopolitically reliable ally, interested in the overall stability of the digital Indo-Pacific, beyond its own economic or policy preferences. This applies equally to other external actors, like the EU, who may be interested in engaging with or shaping the digital economic landscape in the Indo-Pacific.
Countering Chinese economic influence and complementing security agendas set through other fora – such as the Quadrilateral Security Dialogue – should be the primary objective of the IPEF. It is crucial that unrealistic ambitions seeking convergence on values or domestic policy do not undermine strategic interests and dilute the immense potential of the IPEF in catalysing a more competitive and secure digital Indo-Pacific.
Table: Domestic policy positions on data localisation and data protection
Demystifying Data Breaches in India
Edited by Arindrajit Basu and Saumyaa Naidu
India saw a 62% drop in data breaches in the first quarter of 2022. Yet, it ranked fifth on the list of countries most hit by cyberattacks, according to a 2022 report by Surfshark, a Netherlands-based VPN company. Another report on the cost of data breaches, researched by the Ponemon Institute and published by IBM, reveals that breaches of about 29,500 records between March 2021 and March 2022 pushed the average cost of a breach from INR 165 million in 2021 to INR 176 million in 2022, a 25% increase since 2020.
These statistics are certainly a cause for concern, especially in the context of India’s rapidly burgeoning digital economy shaped by the pervasive platformization of private and public services such as welfare, banking, finance, health, and shopping among others. Despite the rate at which data breaches occur and are reported in the media, there seems to be little information about how and when they are resolved. This post examines the discourse on data breaches in India with respect to their historical forms, with a focus on how the specific terminology to describe data security incidents has evolved in mainstream news media reportage.
While expert articulations of cybersecurity in general and data breaches in particular tend to dominate the public discourse on data privacy, this post aims to situate broader understandings of data breaches within the historical context of India’s IT revolution and delve into specific concepts and terminology that have shaped the broader discourse on data protection. The late 1990s and early 2000s offer a useful point of entry into the genesis of the data security landscape in India.
Data Breaches and their Predecessor Forms
The articulation of data security concerns around the late 1990s and early 2000s isn’t always consistent in deploying the phrase ‘data breach’ to signal cybersecurity concerns in India. Terms such as ‘data/identity theft’ and ‘data leak’ figure prominently in the public articulation of concerns with the handling of personal information by IT systems, particularly in the context of business process outsourcing (BPO) and e-commerce activities. Other pertinent terms such as ‘security breach’, ‘data security’, and ‘cyberfraud’ also capture the specificity of growing concerns around data outsourced to India. At the time, i.e. around the mid-2000s, regulatory frameworks were still evolving to accommodate and address the complexities arising from a dynamic reconfiguration of the telecommunications and IT landscape in India.
Some of the formative cases that instantiate the usage of the aforementioned terms are instructive for understanding shifts in the reporting of such incidents over time. The earliest, from 2002, concerned the theft and sale of source code by an IIT Kharagpur student, who attempted to sell the code to two undercover FBI agents working with the CBI to catch the thief. This straightforward case of data theft was framed by media stories around the time as a cybercrime involving the illegal sale of the source code of a software package, as software theft of intellectual property in the context of outsourcing, and as an instance of industrial espionage in poor nations without laws protecting foreign companies. This case became the basis of the earliest calls for the protection of data privacy and security in the context of the Indian BPO sector. The Indian IT Act, 2000 at the time only covered unauthorized access and data theft from computers and networks, without any provisions for data protection, interception or computer forgery. The BPO boom in India brought with it employment opportunities for India’s English-speaking, educated youth, but in the absence of concrete data privacy legislation, the country was regarded as an unsafe destination for outsourcing, aside from the political ramifications concerning the loss of American jobs.
In a major 2005 incident, employees of the Mphasis BFL call centre in Pune extracted sensitive bank account information of Citibank’s American customers to divert INR 1.90 crore into new accounts set up in India. The media coverage of this incident variously calls it India’s first outsourcing cyberfraud and a well-planned scam, a cybercrime in a globalized world, a case of financial fraud and a scam that required no hacking skills, and a case of data theft and misuse. Within the ambit of cybercrime, media reports of these incidents refer to them as cases of “fraud”, “scam” and “theft”.
Two other incidents in 2005 set the trend for a critical spotlight on data security practices in India. In a June 2005 incident, an employee of a Delhi-based BPO firm, Infinity e-systems, sold the account numbers and passwords of 1000 bank customers to the British tabloid The Sun. The Indian newspaper Telegraph India carried an online story headlined “BPO Blot in British Backlash: Indian Sells Secret Data,” which reported that the employee, Kkaran Bahree, 24, was set up by a British journalist, Oliver Harvey. Harvey filmed Bahree accepting wads of cash for the stolen data. Bahree’s theft of sensitive information is described both as a data fraud and a leak in a 2005 BBC story by Soutik Biswas. Another story on the incident calls it a “scam” involving the leakage of credit card information. The use of the term ‘leak’ appears consistently across other media accounts, such as a 2005 story on Karan Bahree in the Times of India and another story in the Economic Times about the Australian Broadcasting Corporation’s (ABC) sting operation, similar to the one in Delhi, which describes the scam by the fraudsters as a leak of the online information of Australians. Another media account of the coverage describes the incident in more generic terms, such as an “outsourcing crime”.
The other case concerned four former employees of Parsec Technologies who stole classified information and diverted calls from potential customers, causing a sudden drop in the productivity of call centres managed by the company in November 2005. Another call centre fraud came to light in 2009 through a BBC sting operation, in which British reporters went to Delhi and secretly filmed a deal with a man selling credit card and debit card details obtained from call centres handling Symantec’s Norton software. This BBC story uses the term “breach” to refer to the incident.
In the broader framing of these cases generally understood as cybercrime, which received transnational media coverage, the terms “fraud”, “leak”, “scam”, and “theft” appear interchangeably. The term “data breach” does not seem to be a popular or common usage in these media accounts of the BPO-related incidents. A broader sense of breach (of confidentiality, privacy) figures in the media reportage in implicitly racial terms of cultural trust, as a matter of ethics and professionalism and in the language of scandal in some cases.
These early cases typify a specific kind of cybercrime concerning the theft or misappropriation of outsourced personal data belonging to British or American residents. What’s remarkable about these cases is the utmost sensitivity of the stolen personal information including financial details, bank account and credit/debit card numbers, passwords, and in one case, source code. While these cases rang the alarm bells on the Indian BPO sector’s data security protocols, they also directed attention to concerns around the training of Indian employees on the ethics of data confidentiality and vetting through psychometric tests for character assessment. In the wake of these incidents, the National Association of Software and Service Companies (NASSCOM), an Indian non-governmental trade and advocacy group, launched a National Skills Registry for IT professionals to enable employers to conduct background checks in 2006.
These data theft incidents earned India a global reputation as an unsafe destination for business process outsourcing, seen to be lacking both a culture of maintaining data confidentiality and concrete legislation for data protection at the time. Importantly, the incidents of data theft or misappropriation were also traceable back to a known source, a BPO employee or a group of malefactors, who often sold sensitive data belonging to foreign nationals to others in India.
The phrase “data leak” also caught on in another register in the context of the widespread use of camera-equipped mobile phones in India. The 2004 Delhi MMS case offers an instance of a data leak, recapitulating the language of scandal in moralistic terms.
The Delhi MMS Case
The infamous 2004 incident involved two underage Delhi Public School (DPS) students who recorded themselves in a sexually explicit act on a cellular phone. After a falling out, the male student passed on the low-resolution clip, in which his female friend’s face is visible, to a friend. The clip, distributed far and wide in India, ended up on the well-known e-shopping and auction website baazee.com, leading to the arrest of the website’s CEO Avnish Bajaj for hosting the listing for sale. Another similar case in 2004 mimicked the mechanics of visual capture through hand-held MMS-enabled mobile phones: a two-minute MMS of a top South Indian actress taking a shower went viral on the Internet in 2004, the year when another MMS of two prominent Bollywood actors kissing had already done the rounds. The MMS case also marked the onset of a national moral panic around the amateur uses of mobile phone technologies, capable of corrupting young Indian minds under a sneaky regime of new media modernity. The MMS case, though not strictly a classic data breach (which typically involves non-visual information stored in databases), became an iconic case of a data leak, framed in the media as a scandal that shocked the country, with calls for the regulation of mobile phone use in schools. The case continued its scandalous afterlife in the 2009 Bollywood film Dev D and the 2010 film Love Sex aur Dhokha.
Taken together, the BPO data thefts and frauds and the data leak scandals prefigure the contemporary discourse on data breaches in the second decade of the 21st century, or what may also be called the Decade of Datafication. The launch of the Indian biometric identity project, Aadhaar, in 2009, which linked access to public services and welfare delivery with biometric identification, resulted in large-scale data collection of the scheme’s subscribers. Such linking raised the spectre of state surveillance as alleged by the critics of Aadhaar, marking a watershed moment in the discourse on data privacy and protection.
Aadhaar Data Security and Other Data Breaches
Aadhaar was challenged in the Indian Supreme Court in 2012 when it was made mandatory for welfare and other services such as banking, taxation and mobile telephony. The national debate on the status of privacy as a cultural practice in Indian society and a fundamental right in the Indian Constitution led to two landmark judgments - the 2017 Puttaswamy ruling holding privacy to be a constitutional right subject to limitations, and the 2018 Supreme Court judgment holding mandatory Aadhaar to be constitutional only for welfare and taxation but not for any other service.
While these judgments sought to rein in Aadhaar’s proliferating mandatory uses, biometric verification remained the most common mode of identity authentication with most organizations claiming it to be mandatory for various purposes. During the same period from 2010 onwards, a range of data security events concerning Aadhaar came to light. These included app-based flaws, government websites publishing Aadhaar details of subscribers, third party leaks of demographic data, duplicate and forged Aadhaar cards and other misuses.
In 2015, the Indian government launched its ambitious Digital India Campaign to provide government services to Indian citizens through online platforms. Yet, data security breach incidents continued to increase, particularly the trade in the sale and purchase of sensitive financial information related to bank accounts and credit card numbers. The online availability of a rich trove of data, accessible via a simple Google search without the use of any extractive software or hacking skills within a thriving shadow economy of data buyers and sellers makes India a particularly vulnerable digital economy, especially in the absence of robust legislation. The lack of awareness around digital crimes and low digital literacy further exacerbates the situation given that datafication via government portals, e-commerce, and online apps has outpaced the enforcement of legislative frameworks for data protection and cybersecurity.
In the context of Aadhaar data security issues, the term “data leak” seems to have more traction in media stories followed by the term “security breach”. Given the complexity of the myriad ways in which Aadhaar data has been breached, terms such as data leak and exposure (of 11 crore Indian farmers’ sensitive information) add to the specificity of the data security compromise. The term “fraud” also makes a comeback in the context of Aadhaar-related data security incidents. These cases represent a mix of data frauds involving fake identities, theft of thumb prints for instance from land registries and inadvertent data leaks in numerous incidents involving government employees in Jharkhand, voter ID information of Indian citizens in Andhra Pradesh and Telangana and activist reports of Indian government websites leaking Aadhaar data.
Aadhaar-related data security events parallel the increase in corporate data breaches during the decade of datafication. The term “data leak” again alternates with the term “data breach” in most media accounts while other terms such as “theft” and “scam” all but disappear in the media coverage of corporate data breaches.
From 2016 onwards, incidents of corporate data breaches in India continued to rise. A massive debit card data breach involving YES Bank ATMs and point-of-sale (PoS) machines, compromised through malware between May and July of 2016, resulted in the exposure of ATM PINs and non-personal identifiable information of customers. It went undetected for nearly three months. Another data leak in 2018 concerned a system run by Indane, a state-owned utility company, which allowed anyone to download private information on all Aadhaar holders, including their names, the services they were connected to, and their unique 12-digit Aadhaar numbers. Data breaches continued to be reported in India concurrent with the incidents of data mismanagement related to Aadhaar. Some prominent data breaches between 2019 and 2021 included a cyberattack on the systems of the airline data service provider SITA resulting in the leak of Air India passenger data, the leakage of the personal details of Common Admission Test (CAT) applicants, details of the credit cards and order preferences of Domino’s pizza customers surfacing on the dark web, COVID-19 patients’ test results leaked by government websites, user data of Juspay and BigBasket put up for sale on the dark web, and an SBI data breach, among others.
The media reportage of these data breaches uses the term “cyberattack” to describe the activities of hackers and cybercriminals operating within a shadow economy or on the dark web. Recent examples of cyberattacks by hackers who leak user data for sale on the dark web include 8.2 terabytes of sensitive financial data (KYC details, Aadhaar, credit/debit cards and phone numbers) of 110 million users of the payments app MobiKwik, 180 million Domino’s pizza orders (names, locations, emails, mobile numbers), and the data of Flipkart-owned Cleartrip’s users. In these incidents again, three terms appear prominently in the media reportage - cyberattack, data breach, and leak. The term “data breach” remains the most frequently used label in the media coverage of lapses of data security. While it alternates with the term “leak” in the stories, the term “data breach” appears consistently across most headlines in the news stories.
The exposure of sensitive, personal, and non-personal data by public and private entities in India is certainly a cause for concern, given the ongoing data protection legislative vacuum.
The media coverage of data breaches tends to emphasize the quantum of compromised user data alongside the types of data exposed. The media framing of these breaches in quantitative terms of financial loss, as well as the magnitude and number of breaches, certainly highlights the gravity of these incidents, but the harm to individual users is often not addressed.
Evolving Terminology and the Source of Data Harms
The main difference in the media reportage of the BPO cybersecurity incidents during the early aughts and the contemporary context of datafication is the usage of the term, “data breach”, which figures prominently in contemporary reportage of data security incidents but not so much in the BPO-related cybercrimes.
The BPO incidents of data theft and the attendant fraud must be understood in the context of the anxieties brought on by a globalizing world of Internet-enabled systems and transnational communications. In most of these incidents, regarded as cybercrimes, the language of fraud and scam ventures further to attribute the illegal actions of identifiable malefactors to cultural factors such as a lack of ethics and professionalism. The usage of the term “data leak” in these media reports functions more specifically to underscore a broader lapse in data security as well as a lack of robust cybersecurity laws. The broader term “breach” is occasionally used to refer to these incidents, but the term “data breach” doesn’t appear as such.
The term “data breach” gains more prominence in media accounts from 2009 onwards in the context of Aadhaar and the online delivery of goods and services by public and private players. The term “data breach” is often used interchangeably with the term “leak” within the broader ambit of cyberattacks in the corporate sector. The media reportage frames Aadhaar-related security lapses as instances of security/data breaches, data leaks, fraud, and occasionally scam.
In contrast to the handful of data security cases in the BPO sector, data breaches have abounded in the second decade of the twenty-first century. What further differentiates the BPO-related incidents from the contemporary data breaches is the source of the data security lapse. Most corporate data breaches remain attributable to the actions of hackers and cybercriminals, while the BPO security lapses were traceable back to ex-employees or insiders with access to sensitive data. We also see, in the coverage of the BPO-related incidents, the attribution of such data security lapses to cultural factors, including a lack of ethics and professionalism, often in racial overtones. The media reportage of the BBC and ABC sting operations suggests that the Indian BPOs’ lack of preparedness to handle and maintain the confidentiality of foreigners’ personal data points to the absence of a privacy culture in India. Interestingly, this transnational attribution recurs in a different form in the national debate on Aadhaar and the claim that Indians don’t care about their privacy.
The question of the harms of data breaches to individuals is also an important one. In the discourse on contemporary data breaches, the actual material harm to an individual user is rarely ever established in the media reportage and generally framed as potential harm that could be devastating given the sensitivity of the compromised data. The harm is reported to be predominantly a function of organizational cybersecurity weakness or attributed to hackers and cybercriminals.
The reporting of harm in collective terms of the number of accounts breached, financial costs of a data breach, the sheer number of breaches and the global rankings of countries with the highest reported cases certainly suggests a problem with cybersecurity and the lack of organizational preparedness. However, this collective framing of a data breach’s impact usually elides an individual user’s experience of harm. Even in the case of Aadhaar-related breaches - a mix of leaking data on government websites and other online portals and breaches - the notion of harm owing to exposed data isn’t clearly established. This is, however, different from the extensively documented cases of Aadhaar-related issues in which welfare benefits have been denied, identities stolen and legitimate beneficiaries erased from the system due to technological errors.
Future Directions of Research
This brief, qualitative foray into the media coverage of data breaches over two decades has aimed to trace the usage of various terms in two different contexts - the Indian BPO-related incidents and the contemporary context of datafication. It would be worth exploring at length the relationship between frequent reports of data breaches and the language used to convey harm in the contemporary context of a data protection legislation vacuum. It would be instructive to examine the specific uses of terms such as “fraud”, “leak”, “scam”, “theft” and “breach” in media reporting of such data security incidents more exhaustively. Such analysis would elucidate how media reportage shapes public perception of the safety of user data and the anticipation of attendant harm as data protection legislation continues to evolve.
Especially with Aadhaar, which represents a paradigm shift in identity verification through digital means, it would be useful to conduct a sentiment analysis of how biometric identity related frauds, scams, and leaks are reported by the mainstream news media. A study of user attitudes and behaviours in response to the specific terminology of data security lapses, such as the terms “breach”, “leak”, “fraud”, “scam”, “cybercrime”, and “cyberattack”, would further illuminate how lay users understand the gravity of a data security lapse. Such research would go beyond the expert understandings of data security incidents that tend to dominate media reportage to elucidate the concerns of lay users and further clarify the cultural meanings of data privacy.
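As one possible starting point for the terminological analysis proposed above, a simple frequency count of breach-related terms in dated news headlines could trace how usage shifts over time. The sketch below is illustrative only: the toy headlines and the exact term list are assumptions, and a real study would work with a dated archive of stories.

```python
# Illustrative sketch: count breach-related terms across a corpus of headlines.
import re
from collections import Counter

TERMS = ["breach", "leak", "fraud", "scam", "theft", "cybercrime", "cyberattack"]

def term_frequencies(headlines):
    """Return how often each term of interest appears across the headlines."""
    counts = Counter({term: 0 for term in TERMS})
    for headline in headlines:
        words = re.findall(r"[a-z]+", headline.lower())
        for term in TERMS:
            counts[term] += words.count(term)
    return counts

# Toy corpus; a real study would group headlines by year to observe the shift
# from "theft"/"fraud" in the BPO era to "breach"/"leak" in the datafication era.
corpus = [
    "Call centre fraud: BPO employee held for data theft",
    "MobiKwik denies breach after reports of user data leak on dark web",
]
print(term_frequencies(corpus))
```

Paired with publication dates and simple sentiment scoring, such counts could substantiate the qualitative observations made in this post.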
‘Techplomacy’ and the negotiation of AI standards for the Indo-Pacific
This is a modified version of the post that appeared in The Strategist
By Arindrajit Basu with inputs from and review by Amrita Sengupta and Isha Suri
UN member states recently elected the American candidate Doreen Bogdan-Martin as the next secretary-general of the International Telecommunication Union (ITU), in what had been dubbed “the most important election you have never heard of”. While this technical body’s work may be esoteric, the election was fiercely contested by a Russian candidate and former Huawei executive, aptly reflecting the geopolitical competition that is underway in determining the “future of the internet” through the technical standards that underpin it. Even the “Internet Protocol” (IP), the set of rules governing the communication and exchange of data over the internet, is being subjected to political contestation between a Sino-Russian vision that would give governments greater control over the standard and a US vision ostensibly rooted in more inclusive multi-stakeholder participation.
As critical and emerging technologies take geopolitical centre-stage, the global tug of war over their development, utilisation and deployment is playing out most ferociously at standard-setting organisations, at arm’s length from the media limelight. Powerful state and non-state actors alike are already seeking to shape standards in ways that suit their economic, political and normative priorities. It is time for emerging economies, middle powers and a wider array of private actors and members of civil society to play a more meaningful and tangible role in the process.
What are standards and why do they matter
Simply put, standards are blueprints or protocols with requirements that ‘standardise’ products and related processes around the world, ensuring that they are interoperable, safe and sustainable. For example, USB, WiFi or a QWERTY keyboard can be used anywhere because they are built on technical standards that make equipment produced to those standards compatible across the world. Standards are negotiated both domestically, at national standard-setting bodies such as the Bureau of Indian Standards (BIS) or Standards Australia (SA), and at global standards-development organisations such as the International Telecommunication Union (ITU) or the International Organization for Standardization (ISO). While standards are not legally binding unless they are explicitly imposed as requirements in legislation, they have immense coercive value. Not adhering to recognised standards means that certain products may not reach markets because they are not compatible with consumer requirements or cannot claim to meet health or safety expectations. The harmonisation of internationally recognised standards serves as the bedrock for global trade and commerce. Complying with a global standard is particularly critical because of its applicability across several markets. Further, international trade law provides that World Trade Organisation (WTO) members may impose trade-restrictive domestic measures only on the basis of published or soon-to-be-published international standards (Article 2.4 of the Technical Barriers to Trade Agreement).
Shaping global standards is of immense geopolitical and economic value to states and the private sector alike. States that are able to ‘export’ their domestic technological standards internationally give their companies a significant economic advantage, because complying with global standards is cheaper when those standards already mirror domestic ones. Further, companies draw huge revenue by holding patents on technologies that are essential to comply with a given standard, popularly known as Standard Essential Patents (SEPs), and licensing them to other players who want to enter the market. For context, IPlytics estimated cumulative global royalty income from licensing SEPs at USD 20 billion in 2020, anticipated to increase significantly in the coming years owing to the massive technological upgradation currently underway.
China’s push to influence the 5G standard at the Third Generation Partnership Project (3GPP) illustrates how prioritising standard-setting, through both domestic industrial policy and foreign policy, can yield rich economic and geopolitical dividends. After failing to meaningfully influence the setting of the 3G and 4G standards, the Chinese government commenced a national effort that sought to harmonise domestic standards, improve government coordination of standard-setting efforts, and obtain a first-mover advantage over other nations developing their own domestic 5G standards. This was combined with a diplomatic push that saw vigorous private sector participation (Huawei put in 20 5G-related proposals, whereas Ericsson and Nokia put in just 16 and 10 respectively); packing key leadership positions in working groups with representatives from Chinese companies and institutions; and ensuring that all Chinese participants voted in unison on any proposal. It is no surprise, therefore, that Chinese companies now lead the way on 5G, with Huawei owning the largest number of 5G patents and having finalised more 5G contracts than any other company, despite restrictions placed on Huawei’s gear by some countries. As detailed in its “Make in China” strategy, China will now actively apply this winning strategy to other standard-setting avenues as well.
Standards for Artificial Intelligence
A number of institutions, including private actors such as Huawei and CloudWalk, have contributed to China’s 2018 AI standardisation white paper, which was revised and updated in 2021. The white paper maps the work of SDOs in the field of AI standards and outlines a number of recommendations on how Chinese actors can use global SDOs to boost industrial competitiveness and globally promote “Chinese wisdom.” While there are cursory references to the role of standards in furthering “ethics” and “privacy,” the document does not outline how China will look to promote these values at SDOs.
Artificial Intelligence (AI) is a general-purpose technology with varied outcomes and use-cases. Top-down regulation of AI by governments is emerging across jurisdictions, but it may not keep pace with the rapidly evolving technology being developed by the private sector or adequately address the diversity of use-cases. On the other hand, private sector-driven self-regulatory initiatives focussing on ‘ethical AI’ are very broad and give technology companies too much leeway to evade the law. Technical standards offer a middle ground where multiple stakeholders can come together to devise uniform requirements for the various stages of the AI development lifecycle. Of course, technical standards must co-exist with government-driven regulation as well as self-regulatory codes to holistically govern the deployment of AI globally. However, while the first two modes of regulation have received plenty of attention from policy-makers and scholars alike, AI standard-setting is an emerging field that has yet to be concretely evaluated from a strategic and diplomatic perspective.
Introducing a new CIS-ASPI project
This is why researchers at the Australian Strategic Policy Institute have partnered with the Centre for Internet and Society (Bengaluru) to produce a ‘techplomacy guide’ on negotiating AI standards for stakeholders in the Indo-Pacific. Given the immense economic value of shaping global technical standards, it is imperative that SDOs not be dominated only by the likes of the US, Europe or China. Standards devised from the vantage point of only a few countries, yet likely to impact a majority of nations, may be blind to the needs of emerging economies. Further, there are values at stake. An excessive focus on the security, accuracy or quality of AI-driven products may make some technology palatable across the world even if it undermines core democratic values such as privacy and anti-discrimination. China’s efforts at shaping Facial Recognition Technology (FRT) standards at the ITU have been criticised for moving beyond mere technical specifications into the domain of policy recommendations, despite the lack of representation of experts on human rights, consumer protection or data protection at the ITU. Accordingly, diversity of representation at SDOs in terms of expertise, gender and nationality, including in leadership positions, is an aspect our project will explore with an eye towards creating more inclusive participation.
Through this project, we hope to identify how key stakeholders drive these initiatives and how technological standards can be devised in line with both core democratic values and strategic priorities. Through extensive consultations with several stakeholder groups, we plan to offer learning products to policy-makers and technical delegates alike, enabling Australian and Indian delegates to serve as ambassadors for our respective nations.
For more information on this new and exciting project, funded by the Australian Department of Foreign Affairs and Trade as part of the Australia India Cyber and Critical Technology Partnership grants, visit www.aspi.org.au/techdiplomacy and https://www.internationalcybertech.gov.au/AICCTP-grant-round-two
Big Tech’s privacy promise to consumers could be good news — and also bad news
It remains to be seen whether Google’s Privacy Sandbox project will be truly privacy-preserving. (Reuters Illustration: Francois Lenoir)
In February, Facebook, rebranded as Meta, stated that its revenue in 2022 was anticipated to fall by $10 billion due to steps undertaken by Apple to enhance user privacy on its mobile operating system. More specifically, Meta attributed this loss to the new App Tracking Transparency feature, which requires apps to request permission from users before tracking them across other apps and websites or sharing their information with and from third parties. Through this change, Apple effectively shut the door on “permissionless” internet tracking and has given consumers more control over how their data is used. Meta alleged that this would hurt small businesses benefiting from access to targeted advertising services, and charged Apple with abusing its market power by using its app store to disadvantage competitors under the garb of enhancing user privacy.
Access the full article published in the Indian Express on April 13, 2022
The Centre for Internet and Society’s comments and recommendations on the Digital Personal Data Protection Bill, 2022
High Level Comments
1. Rationale for removing the distinction between personal data and sensitive personal data is unclear.
All the earlier iterations of the Bill, as well as the rules made under Section 43A of the Information Technology Act, 2000[1], classified data into two categories: (i) personal data; and (ii) sensitive personal data. The 2022 version of the Bill has removed this distinction and clubbed all data under the single umbrella of personal data. The rationale for this is unclear, as sensitive personal data refers to data that could reveal or relate to eminently private matters such as financial data, health data, sexual orientation and biometric data. Considering its sensitive nature, such data was accorded higher protection and safeguards in processing. By clubbing all data together as personal data, these higher protections, such as the requirement of explicit consent for the processing of sensitive personal data and the bar on processing sensitive personal data for employment purposes, have also been removed.
2. No clear roadmap for the implementation of the Bill
The 2018 Bill had specified a roadmap for the different provisions of the Bill to come into effect from the date of the Act being notified.[2] It specifically stated the time period within which the Authority had to be established and the subsequent rules and regulations notified.
The present Bill does not specify any such blueprint; it provides no details on when the Bill will be notified or the time period within which the Board shall be established and the specific Rules and regulations notified. Considering that certain provisions have been deferred to Rules to be framed by the Central Government, the absence or delayed notification of such rules and regulations will impact the effective functioning of the legislation. Provisions such as Section 10(1), which deals with verifiable parental consent for the data of children, Section 13(1), which states the manner in which a Data Principal can exercise the right to correction, and the process of selection and functioning of consent managers under Clause 3(7) are a few such examples: once the Act becomes applicable, the Data Principal will have to wait for the Rules to give effect to these provisions, or to get clarity on the entities created by the Act.
The absence of any sunrise or sunset provision may disincentivise political or industrial will to support or enforce the provisions of the Bill. An example of such a lack of political will was the establishment of the Cyber Appellate Tribunal. The tribunal was established in 2006 to redress cyber fraud. However, it was virtually a defunct body from 2011 onwards when the last chairperson retired. It was eventually merged with the Telecom Dispute Settlement and Appellate Tribunal in 2017.
We recommend that the Bill clearly lay out a time period for the implementation of its different provisions, especially a time frame for the establishment of the Board. This is important to give full and effective effect to the individual’s right to privacy. It is also important to ensure that individuals have an effective mechanism to enforce the right and seek recourse in case of any breach of obligations by data fiduciaries.
The Board must ensure that Data Principals and Data Fiduciaries have sufficient awareness of the provisions of this Bill before the penalty provisions are brought into force. This will allow Data Fiduciaries to align their practices with the new legislation, and give the Board time to define and determine the matters that the Bill has left to it. Additionally, penalties should initially be enforced in a staggered manner, combined with measures such as warnings, so that first-time and mistaken offenders, which could now include Data Principals as well, do not pay a high price. This will reassure smaller companies, startups and individuals who might otherwise hesitate to process data for fear of incurring penalties.
3. Independence of Data Protection Board of India.
The Bill proposes the creation of the Data Protection Board of India (Board) in place of the Data Protection Authority. Compared with the powers envisaged for the regulator under the 2018 and 2019 versions of the Personal Data Protection Bill, this Bill significantly curtails the powers of the body being created. Under Clause 19(2), the strength and composition of the Board, the process of selection, the terms and conditions of appointment and service, and the removal of its Chairperson and other Members shall be such as may be prescribed by the Union Government at a later stage. Further, as per Clause 19(3), the Chief Executive of the Board will be appointed by the Union Government, and the terms and conditions of her service will also be determined by the Union Government. The functions of the Board have also not been specified in the Bill; the Central Government may assign the functions to be performed by the Board.
In order to govern data protection effectively, there is a need for a responsive market regulator with a strong mandate, ability to act swiftly, and resources. The political nature of personal data also requires that the governance of data, particularly the rule-making and adjudicatory functions performed by the Board are independent of the Executive.
Chapter Wise Comments and Recommendations
CHAPTER I- PRELIMINARY
● Definitions: While the Bill adds a few new definitions, including terms such as gain, loss and consent manager, a few key definitions have been removed from the earlier versions of the Bill. The removal of certain definitions, e.g. sensitive personal data, health data, biometric data and transgender status, creates legal uncertainty about the application of the Bill.
With respect to the definitions that remain, the definition of the term ‘harm’ has been significantly narrowed, removing harms such as surveillance from its ambit. Moreover, the 2019 version of the Bill, under Clause 2(20), provided a non-exhaustive list of harms by using the phrase “harms include”; the new definition instead reads ““harm”, in relation to a Data Principal, means”, thereby excluding harms that are not currently apparent from the purview of the Act. We recommend that the definition of harm be made a non-exhaustive list.
CHAPTER II - OBLIGATIONS OF DATA FIDUCIARY
Notice: The revised clause on notice does away with the comprehensive requirements laid out under Clause 7 of the PDP Bill, 2019. The current clause does not specify in detail what the notice should contain, stating only that the notice should be itemised. While it can be reasoned that the Data Fiduciary can piece together the contents of the notice from the rest of the Bill, such as the provisions on the rights of the Data Principal, the removal of a detailed list could create uncertainty for Data Fiduciaries. Leaving out the finer details of what a notice should contain could lead Data Fiduciaries to miss key information, which in turn would provide incomplete information to the Data Principal. Data Fiduciaries themselves might not know whether they are complying with the provisions of the Bill, which could result in them invariably being penalised. In addition, by requiring less work of the Data Fiduciary and processor, the burden falls on the Data Principal to make sure they know how their data is processed and collected. The purpose of this legislation is to create further rights for individuals and consumers; hence the Bill should strive to put the individual at the forefront.
In addition, Clause 6(3) of the Bill states: “The Data Fiduciary shall give the Data Principal the option to access the information referred to in sub-sections (1) and (2) in English or any language specified in the Eighth Schedule to the Constitution of India.” While the inclusion of regional language notices is a welcome step, we suggest that the text be revised as follows: “The Data Fiduciary shall give the Data Principal the option to access the information referred to in sub-sections (1) and (2) in English and in any language specified in the Eighth Schedule to the Constitution of India.” Since the main purpose of notice is to inform the person before they give consent, a notice in a language that a person cannot read would not lead to meaningful consent.
Consent
Clause 3 of the Bill states that the “request for consent would have the contact details of a Data Protection Officer, where applicable, or of any other person authorised by the Data Fiduciary to respond to any communication from the Data Principal for the purpose of exercise of her rights under the provisions of this Act.” Ideally, this provision should be part of the notice and should be mentioned in the section above. This is similar to Clause 7(1)(c) of the draft Personal Data Protection Bill, 2019, which requires the notice to state “the identity and contact details of the data fiduciary and the contact details of the data protection officer, if applicable”.
Deemed Consent
The Bill introduces a new type of consent that was absent in the earlier versions of the Bill. We understand deemed consent to be a redefinition of non-consensual processing of personal data. The use of the term deemed consent and the provisions under this section, while more concise than the earlier versions, could create more confusion for Data Principals and Data Fiduciaries alike. The definition and the examples do not address a key issue - the absence of notice. In addition, the Bill is silent on whether deemed consent can be withdrawn, and on whether the Data Principal has the same rights as those arising from the processing of data to which they have expressly consented.
Personal Data Protection of Children
The age at which a person can legally consent in the online world has been tied to the age of consent under the Indian Contract Act, i.e. 18 years. The Bill makes no distinction between a 5-year-old and a 17-year-old: both are treated in the same manner, assuming the same level of maturity for all persons under the age of 18. It is pertinent to note that the law in the offline world does recognise that distinction and acknowledges changes in the level of maturity. As per Section 82 of the Indian Penal Code read with Section 83, any act by a child under the age of 12 shall not be considered an offence, while the maturity of those aged between 12 and 18 years is decided by a court (individuals between the ages of 16 and 18 years can also be tried as adults for heinous crimes). Similarly, child labour laws in the country allow children above the age of 14 years to work in non-hazardous industries.
There is a need to evaluate and rethink the idea that children are passive consumers of the internet and that the consent of the parent is therefore enough. Additionally, bracketing all individuals under the age of 18 as children fails to account for how teenagers and young people use the internet. This is all the more important in light of 2019 data suggesting that two-thirds of India’s internet users are in the 12-29 years age group, with those in the 12-19 age group accounting for about 21.5% of total internet usage in metro cities. Given that the pandemic compelled students and schools to adopt and adapt to virtual schooling, reliance on the internet has become ubiquitous in education. Out of an estimated 504 million internet users, nearly one-third are aged under 19. As per the Annual Status of Education Report (ASER) 2020, more than one-third of all schoolchildren are pursuing digital education, either through online classes or recorded videos.
Instead of setting a blanket age for determining valid consent, we could look at alternative means to determine the appropriate age for children at different levels of maturity, similar to what had been developed by the U.K. Information Commissioner’s Office. The Age Appropriate Code prescribes 15 standards that online services need to follow. It broadly applies to online services "provided for remuneration"—including those supported by online advertising—that process the personal data of and are "likely to be accessed" by children under 18 years of age, even if those services are not targeted at children. This includes apps, search engines, social media platforms, online games and marketplaces, news or educational websites, content streaming services, online messaging services.
Reservations about the definition of a child under the Bill have also been expressed by some members of the JPC through their dissenting opinions. MP Ritesh Pandey stated that, keeping in mind the best interests of the child, the Bill should consider a child to be a person who is less than 14 years of age. This would ensure that young people could benefit from advances in technology without parental consent and reduce the social barriers that young women face in accessing the internet. Similarly, Manish Tewari in his dissenting note observed that regulation of the processing of children’s data should be based on the type of content or data. The JPC Report observed that the Bill does not require the data fiduciary to take fresh consent from the child once the child has attained the age of majority, nor does it give the child the option to withdraw consent upon reaching the age of majority. It therefore made the following recommendations:
- Registration of data fiduciaries exclusively dealing with children’s data.
- Application of the Majority Act to a contract with a child.
- Obligation of the data fiduciary to inform a child to provide their consent, three months before such child attains majority.
- Continuation of the services until the child opts out or gives fresh consent upon achieving majority.

However, these recommendations have not been incorporated into the provisions of the Bill. In addition, the Bill is silent on the status of non-consensual processing and deemed consent with respect to the data of children.
We recommend that fiduciaries whose services are targeted at children be considered significant Data Fiduciaries. In addition, the Bill should state that guardians may approach the Data Protection Board on behalf of the child. With these obligations in place, the age of mandatory consent could be reduced, and the data fiduciary could have the added responsibility of informing children, in the simplest manner, how their data will be used. Such an approach places a responsibility on Data Fiduciaries when implementing services that will be used by children and allows children to be aware of data processing when they interact with technology.
CHAPTER III - RIGHTS AND DUTIES OF DATA PRINCIPAL
Rights of Data Principal
Clause 12(3) of the Bill, while providing the Data Principal the right to be informed of the identities of all the Data Fiduciaries with whom the personal data has been shared, also states that the Data Principal has the right to be informed of the categories of personal data shared. However, the current version of the Bill provides for only one category of data, namely personal data.
Clause 14 of the Bill deals with the right to grievance redressal and states that the Data Principal has the right to readily available means of registering a grievance; however, the Bill’s notice provisions do not require mention of the details of a grievance officer or a grievance redressal mechanism. It is only in the additional obligations on significant data fiduciaries that the Bill mentions the need for a Data Protection Officer to be the point of contact for the grievance redressal mechanism. The Bill could ideally re-use the provisions of the IT Act SPDI Rules, 2011, in which Rule 5(7) states: “Body corporate shall address any discrepancies and grievances of their provider of the information with respect to processing of information in a time bound manner. For this purpose, the body corporate shall designate a Grievance Officer and publish his name and contact details on its website. The Grievance Officer shall redress the grievances of provider of information expeditiously but within one month from the date of receipt of grievance.”
The above framing would not only bring clarity to data fiduciaries on what process to follow for grievance redressal, it would also reduce the significant burden on the Board.
Duties of Data Principals
The Bill, while listing the duties of the Data Principal, states that the “Data Principal shall not register a false or frivolous grievance or complaint with a Data Fiduciary or the Board”; however, it is very difficult for a Data Principal, and even for the Board, to determine what constitutes a “frivolous grievance”. Moreover, the absence of a defined notice provision and the inclusion of deemed consent mean that the Data Fiduciary could have more information about the matter than the Data Principal, making it easier for the fiduciary to argue that a claim was false or frivolous. Clause 21(12) states that “At any stage after receipt of a complaint, if the Board determines that the complaint is devoid of merit, it may issue a warning or impose costs on the complainant.” Further, Clause 25(1) states that “If the Board determines on conclusion of an inquiry that non-compliance by a person is significant, it may, after giving the person a reasonable opportunity of being heard, impose such financial penalty as specified in Schedule 1, not exceeding rupees five hundred crore in each instance.” The term “person” in this case includes Data Principals, which means that they too could be penalised under the provisions of the Bill, including for not complying with their duties.
CHAPTER IV - SPECIAL PROVISIONS
Transfer of Personal Data outside India
Clause 17 of the Bill removes the data localisation requirements that the 2018 and 2019 Bills imposed. Personal data can be transferred to countries that will be notified by the Central Government. There is no need for a copy of the data to be stored locally and no prohibition on transferring sensitive personal data and critical data. Though it is a welcome change that personal data can be transferred outside India, we would highlight the concerns in permitting unrestricted access to and transfer of all types of data. Certain data, such as defence and health data, do require sectoral regulation and ring-fencing of transfers.
Exemptions
Clause 18 of the Bill widens the scope of government exemptions. A blanket exemption has been given to the State under Clause 18(4) from deleting personal data even when the purpose for which the data was collected is no longer served or when retention is no longer necessary. The requirements of proportionality, reasonableness and fairness have been removed for the Central Government to exempt any department or instrumentality from the ambit of the Bill. By doing away with the four-pronged test, this provision is not in consonance with the test laid down by the Supreme Court and is also incompatible with effective privacy regulation. There is also no provision for either prior judicial review of the order by a district judge, as envisaged by the Justice Srikrishna Committee Report, or post facto review of the order by an oversight committee, as laid down under the Indian Telegraph Rules, 1951[3] and the rules framed under the Information Technology Act[4]. The provision merely states that such processing of personal data shall be subject to such procedures, safeguards and oversight mechanisms as may be prescribed.
[1] Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011.
[2] Clause 97 of the 2018 Bill states“(1) For the purposes of this Chapter, the term ‘notified date’ refers to the date notified by the Central Government under sub-section (3) of section 1. (2)The notified date shall be any date within twelve months from the date of enactment of this Act. (3)The following provisions shall come into force on the notified date-(a) Chapter X; (b) Section 107; and (c) Section 108. (4)The Central Government shall, no later than three months from the notified date establish the Authority. (5)The Authority shall, no later than twelve months from the notified date notify the grounds of processing of personal data in respect of the activities listed in sub-section (2) of section 17. (6) The Authority shall no, later than twelve months from the date notified date issue codes of practice on the following matters-(a) notice under section 8; (b) data quality under section 9; (c) storage limitation under section 10; (d) processing of personal data under Chapter III; (e) processing of sensitive personal data under Chapter IV; (f) security safeguards under section 31; (g) research purposes under section 45;(h) exercise of data principal rights under Chapter VI; (i) methods of de-identification and anonymisation; (j) transparency and accountability measures under Chapter VII. (7)Section 40 shall come into force on such date as is notified by the Central Government for the purpose of that section.(8)The remaining provision of the Act shall come into force eighteen months from the notified date.”
[3] Rule 419A (16): The Central Government or the State Government shall constitute a Review Committee.
Rule 419 A(17): The Review Committee shall meet at least once in two months and record its findings whether the directions issued under sub-rule (1) are in accordance with the provisions of sub-section (2) of Section 5 of the said Act. When the Review Committee is of the opinion that the directions are not in accordance with the provisions referred to above it may set aside the directions and orders for destruction of the copies of the intercepted message or class of messages.
[4] Rule 22 of Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009: The Review Committee shall meet at least once in two months and record its findings whether the directions issued under rule 3 are in accordance with the provisions of sub-section (2) of section 69 of the Act and where the Review Committee is of the opinion that the directions are not in accordance with the provisions referred to above, it may set aside the directions and issue an order for destruction of the copies, including corresponding electronic record of the intercepted or monitored or decrypted information.
Comments to the proposed amendments to The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
Preliminary
In these comments, we examine the constitutional validity of the proposed amendments, as well as whether the language of the amendments provide sufficient clarity for its intended recipients. This commentary is in-line with CIS’ previous engagement with other iterations of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
General Comments
Ultra vires the parent act
Section 79(1) of the Information Technology (IT) Act states that the intermediary will not be held liable for any third-party information if the intermediary complies with the conditions laid out in Section 79(2). One of these conditions is that the intermediary observe “due diligence while discharging his duties under this Act and also observe such other guidelines as the Central Government may prescribe in this behalf.” Further, Section 87(2)(zg) empowers the central government to prescribe “guidelines to be observed by the intermediaries under sub-section (2) of section 79.”
A combined reading of Section 79(2) and Section 87(2)(zg) makes it clear that the power of the Central Government is limited to prescribing guidelines related to the due diligence to be observed by intermediaries while discharging their duties under the IT Act. However, the proposed amendments extend beyond the original scope of these provisions of the IT Act.
In particular, the IT Act does not prescribe any classification of intermediaries. Section 2(1)(w) of the Act defines intermediaries as follows: “with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes”. Intermediaries are thus treated as a single monolithic category with the same responsibilities and obligations.
The proposed amendments have now established a new category of intermediaries, namely online gaming intermediary. This classification comes with additional obligations, codified within Rule 4A of the proposed amendments, including enabling the verification of user-identity and setting up grievance redressal mechanisms. The additional obligations placed on online gaming intermediaries find no basis in the IT Act, which does not specify or demarcate between different categories of intermediaries.
The 2021 Rules were prescribed under Section 87(1) and Sections 87(2)(z) and (zg) of the IT Act. These provisions do not empower the Central Government to amend Section 2(1)(w) or to create any classification of intermediaries. As the Supreme Court held in State of Karnataka and Another v. Ganesh Kamath & Ors: “It is a well settled principle of interpretation of statutes that conferment of rule making power by an Act does not enable the rule making authority to make a rule which travels beyond the scope of the enabling Act or which is inconsistent therewith or repugnant thereto.” In this light, we argue that the proposed amendments cannot go beyond the parent act or prescribe policies in the absence of any law/regulation authorising them to do so.
Recommendation
We recommend that a regulatory intervention seeking to classify intermediaries and prescribe regulations specific to the unique nature of specific intermediaries should happen through an amendment to the parent act. The amendment should prescribe additional responsibilities and obligations of online gaming intermediaries.
A note on the following sections
Since the legality of classifying intermediaries into further categories is in question, we recommend that our subsequent discussion of the language of the provisions relating to online gaming intermediaries be taken into account when formulating any new legislation relating to these entities.
Specific comments
Fact checking amendment
Amendment to Rule 3(1)(b)(v) states that intermediaries are obligated to ask their users to not host any content that is, inter alia, “identified as fake or false by the fact check unit at the Press Information Bureau of the Ministry of Information and Broadcasting or other agency authorised by the Central Government for fact checking”.
Read together with Rule 3(1)(c), which gives intermediaries the prerogative to terminate user access to their resources on non-compliance with their rules and regulations, Rule 3(1)(b)(v) essentially affirms the intermediary’s right to remove content that the Central government deems to be ‘fake’. However, in the larger context of the intermediary liability framework of India, where intermediaries found to be not complying with the legal framework of section 79 lose their immunity, provisions such as Rule 3(1)(b)(v) compel intermediaries to actively censor content, on the apprehension of legal sanctions.
In this light, we argue that Rule 3(1)(b)(v) is constitutionally invalid, inasmuch as Article 19(2), which prescribes the grounds on which the government may restrict the right to free speech, does not permit restricting speech on the ground that it is ostensibly “fake or false”. In addition, the net effect of this rule would be that the government becomes the ultimate arbiter of what is considered ‘truth’, and any contradiction of this narrative would be deemed false. In a democratic system like India’s, this cannot be a tenable position, and it would go against a rich jurisprudence of constitutional history on the need for plurality.
For instance, in Indian Express Newspapers v Union of India, the Supreme Court held that ‘the freedom of the press rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.’ Applying this interpretation to the present case, the government’s monopoly on deciding what constitutes “fake or false” in the online space would prevent citizens from accessing dissenting voices and counterpoints to government policies.
This is problematic when one considers that in the Indian context, freedom of speech and expression has always been valued for its instrumental role in ensuring a healthy democracy, and its power to influence public opinion. In the present case, the government, far from facilitating any such condition, is instead actively indulging in guardianship of the public mind (Sarkar et al, 2019).
Other provisions in the IT Act that permit censorship of content, including Section 69A, allow the government to do so only when the content is relatable to the grounds enumerated in Article 19(2) of the Constitution. In addition, in Shreya Singhal vs Union of India, where the constitutionality of Section 69A was challenged, the Supreme Court upheld the provision because of the legal safeguards inherent in it, including offering a hearing to the originator of the impugned content and requiring reasons for censoring content to be recorded in writing.
In contrast, a fact check by the Press Information Bureau or by another authorised agency provides no such safeguards, and does not relate to any constitutionally recognized ground for restricting speech.
Recommendation
The proposed amendment to Rule 3(1)(b)(v) is unconstitutional, and should be removed from the final draft of the law.
Clarifications are needed for online games rules definitions
The definitions of an "online game" and "online gaming intermediary" are currently extremely unclear and require further clarification.
As the proposed amendments stand, online games are characterised by the user’s “deposit with the expectation of earning winnings”. Both the deposit and the winnings can be “cash” or “in kind”, which does not adequately draw a boundary around the type of games this amendment seeks to cover. Can the time invested by a player in playing a game be considered a deposit “in kind”? If a game provides a virtual in-game currency that can be exchanged for internal power-ups, even if no cash or gift cards are used as payout, is that considered winnings “in kind”? The rules, as currently drafted, are vague in their reference to “in kind” deposits and payouts.
This definition of online games also does not differentiate between single-player games, multiplayer games, and traditional games that have found an audience online - such as Candy Crush (single player), Minecraft (multiplayer, collaborative) or chess (traditional). It is unclear whether these games were intended to fall within the purview of these amendments to the rules, and whether they are all subjected to the same due diligence requirements as pay-to-play games. This, in conjunction with the proposed Rule 6A, which allows the Ministry to term any other game an online game for the purposes of the rules, also provides the Ministry with broad, unpredictable powers. This ambiguity hinders clear comprehension of the expectations among the target stakeholders, affecting the consistency and predictability of the implementation of the rules.
Similarly, "online gaming intermediaries" are also defined very broadly as "intermediary that offers one or more than one online game". As defined, any intermediary that even hosts a link to a game is classified as an online gaming intermediary since the game is now "offered" through the intermediary. As drafted, there does not seem to be a material distinction between an "intermediary" as defined by the act and "online gaming intermediary" as specified by these rules.
Recommendation
We recommend further clarification on the definitions of these terms, especially for “in kind” and “offers” which are currently extremely vague terms that provide overbroad powers to the Ministry.
Intermediaries and Games
"Online gaming intermediaries" are defined very broadly as "intermediary that offers one or more than one online game". Intermediaries are defined in the Act as "any person who on behalf of another person receives, stores or transmits that message or provides any service with respect to that message".
According to the media coverage around these amendments (Barik, 2023), there appears to be an effort to classify gaming companies as "online gaming intermediaries", but the language of the drafted amendments does not support this. An “intermediary” status is given to a company due to its functional role in primarily offering third-party content. It is not a classification for the different types of internet companies that exist, and thus must not be used to make rules for entities that do not perform this function.
Not all gaming companies present a collection of games for their users to play. Under the drafted definition, multiple kinds of platforms where games might be present - an app store where multiple game developers publish their games for access by users, a website that lists links to online games, a social media platform that acts as an intermediary between two users exchanging links to games, as well as a website that hosts games for users to access directly - may all be classified as "online gaming intermediaries", since they "offer" games to users. This is a rather broad range of companies and functions to be singularly classified as "online gaming intermediaries".
Recommendation
We recommend a thoroughly researched legislative solution to regulating gaming companies that operate online rather than through amendments to intermediary rules. If some companies are indeed to be classified as “online gaming intermediaries”, there is a need for further reasoning on which type of gaming companies and their functions are intermediary functions for the purposes of these Rules.
Comments can be downloaded here
Civil Society’s second opinion on a UHI prescription
The article originally published by Internet Freedom Foundation can be accessed here.
The National Health Authority (NHA) released the Consultation Paper on Operationalising Unified Health Interface (UHI) in India on December 14, 2022. The deadline for submission of comments was January 13, 2023. We collaborated with the Centre for Health Equity, Law & Policy, the Centre for Internet & Society, & the Forum for Medical Ethics Society to submit comments on the paper.
Background
The UHI is proposed to be a “foundational layer of the Ayushman Bharat Digital Health Mission (ABDM)” and is “envisioned to enable interoperability of health services in India through open protocols”. The ABDM, previously known as the National Digital Health Mission, was announced by the Prime Minister on the 74th Independence Day, and it envisages the creation of a National Digital Health Ecosystem with six key features: Health ID, Digi Doctor, Health Facility Registry, Personal Health Records, Telemedicine, and e-Pharmacy. After launching the programme in six Union Territories, the National Health Authority issued a press release on August 26, 2020 announcing the public consultation for the Draft Health Data Management Policy for NDHM. While the government has repeatedly claimed that creation of a health ID is purely voluntary, contrary reports have emerged. In our comments as part of the public consultation, our primary recommendation was that deployment of any digital health ID programme must be preceded by the enactment of general and sectoral data protection laws by the Parliament of India; and meaningful public consultation which reaches out to vulnerable groups which face the greatest privacy risks.
As per the synopsis document which accompanies the consultation paper, it aims to “seek feedback on how different elements of UHI should function. Inviting public feedback will allow for early course correction, which will in-turn engender trust in the network and enhance market adoption. The feedback received through this consultation will be used to refine the functionalities of UHI so as to limit any operational issues going forward.” The consultation paper contains a set of close-ended questions at the end of each section through which specific feedback has been invited from interested stakeholders. We have collaborated with the Centre for Health Equity, Law & Policy, the Centre for Internet & Society, & the Forum for Medical Ethics Society to draft the comments on this consultation paper.
Our main concern relates to the approach adopted by the Government of India and the concerned Ministries of drafting a consultation paper without explicitly outlining how the proposed UHI fits into the broader healthcare ecosystem, or quantifying how it improves that ecosystem, which renders the consultation paper and the public engagement effort inadequate. It also does not allow the public at large and other stakeholders to understand how the UHI may contribute to people’s access to quality care and the realisation of their constitutional right to health and health care. The close-ended nature of the consultation process, wherein specific questions have been posed, restricts stakeholders from questioning the structure of the ABDM itself and forces us to engage only with its parts, thereby incorrectly assuming that there is support for the direction in which the ABDM is being developed.
Our submissions
A. General comments
a. Absence of underlying legal framework
Ensuring health data privacy requires legislation at three levels - comprehensive laws, sectoral laws and informal rules. Here, the existing proposal for data protection legislation, i.e. the draft Digital Personal Data Protection Bill, 2022 (DPDPB, 2022), which could act as the comprehensive legal framework, is inadequate to sufficiently protect health data. This inadequacy arises from the failure of the DPDPB, 2022 to give a higher degree of protection to sensitive personal data and from its allowing non-consensual processing of health data in certain situations under Clause 8, which relates to “deemed consent”. It may also be noted that the DPDPB, 2022 fails to specifically define either health or health data. Further, the proposed Digital Information Security in Healthcare Act, 2017, which may have acted as a sectoral law, has not been enacted. In the absence of such safeguards, health data may be captured by health insurance firms, leading to exclusion or higher costs for vulnerable groups of people. Similarly, data capture by other third parties potentially allows commercial interests to creep in at the cost of users of health care services and in breach of their privacy and dignity.
b. Issues pertaining to scope
Clarity is needed on whether UHI will be only providing healthcare services through private entities, or will also include the public health care system and various health care schemes and programs of the government, such as eSanjeevani.
c. Pre-existing concerns
- Exclusion: Access to health services through the Unified Health Interface should not be made contingent upon possessing an ABHA ID, as alluded to in the section on ‘UHI protocols in action: An example’ under Chapter 2(b). Such an approach is contrary to the Health Data Management Policy that is based on individual autonomy and voluntary participation. Clause 16.4 of the Policy clearly states that nobody will “be denied access to any health facility or service or any other right in any manner by any government or private entity, merely by reason of not creating a Health ID or disclosing their Health ID…or for not being in possession of a Health ID.” Moreover, the National Medical Commission Guidelines for Telemedicine in India also does not create any obligation for the patient to possess an ABHA ID in order to access any telehealth service. The UHI should explicitly state that a patient can log in on the network using any identification and not just ABHA.
- Consent: As per media reports, registration for a UHID under the NDHM, which is an earlier version of the ABHA number under the ABDM, may have been voluntary on paper but it was being made mandatory in practice by hospital administrators and heads of departments. Similarly, reports suggest that people who received vaccination against COVID-19 were assigned a UHID number without their consent or knowledge.
- Function creep: In the absence of an underlying legal framework, concerns also arise that the health data under the NDHM scheme may suffer from function creep, i.e., the collected data being used for purposes other than for which consent has been obtained. These concerns arise due to similar function creep taking place in the context of data collected by the Aarogya Setu application, which has now pivoted from being a contact-tracing application to “health app of the nation”. Here, it must be noted that as per a RTI response dated June 8, 2022 from NIC, the Aarogya Setu Data Access And Knowledge Sharing Protocol “has been discontinued".
- Issues with the Unified Payments Interface may be replicated by the UHI: The consultation paper cites the Unified Payments Interface (UPI) as “strong public digital infrastructure” which the UHI aims to leverage. However, a trend towards market concentration can be witnessed in UPI: the two largest entities, GooglePay and PhonePe, have seen their market shares hover around 35% and 47% (by volume) for some time now (their shares by value transacted are even higher). Meanwhile, the share of the NPCI’s own app (BHIM) has fallen from 40% in August 2017 to 0.74% in September 2021. Thus, if such a model is to be adopted, it is important to study the UPI experience to understand such threats and ensure that a similar trend towards oligopoly or monopoly formation in UHI is addressed; a rough illustration of this concentration follows below. This is all the more important in a country in which the decreasing share of the public health sector has led to skyrocketing healthcare costs for citizens.
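As a back-of-the-envelope illustration only, using the approximate volume shares cited above (and not figures from the consultation paper), the combined share of the two largest UPI apps can be computed as follows:

```python
# Illustrative concentration figures using the approximate UPI volume shares cited above.
# These numbers come from this submission's text, not from the NHA consultation paper.
shares_by_volume = {"PhonePe": 0.47, "GooglePay": 0.35, "BHIM": 0.0074}

# Two-firm concentration ratio (CR2): combined share of the two largest players.
cr2 = sum(sorted(shares_by_volume.values(), reverse=True)[:2])
print(f"Top-two share of UPI transaction volume: {cr2:.0%}")  # roughly 82%
```

A top-two share of roughly 82% of transaction volume is the kind of concentration the UHI design would need to guard against.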
B. Our response also addressed specific questions about search and discovery, service booking, grievance redressal, and fake reviews and scores. Our responses on these questions can be found in our comments here.
Our previous submissions on health data
We have consistently engaged with the government since the announcement of the NDHM in 2020. Some of our submissions and other outputs are linked below:
- IFF’s comment on the Draft Health Data Management Policy dated May 21, 2022 (link)
- IFF’s comments on the consultation Paper on Healthcare Professionals Registry dated July 20, 2021 (link)
- IFF and C-HELP Working Paper: ‘Analysing the NDHM Health Data Management Policy’ dated June 11, 2021 (link)
- IFF’s Consultation Response to Draft Health Data Retention Policy dated January 6, 2021 (link)
- IFF’s comments on the National Digital Health Mission’s Health Data Management Policy dated September 21, 2020 (link)
Important documents
- Response on the Consultation Paper on Operationalising Unified Health Interface (UHI) in India by Centre for Health Equity, Law & Policy, the Centre for Internet & Society, the Forum for Medical Ethics Society, & IFF dated January 13, 2023 (link)
- NHA’s Consultation Paper on Operationalising Unified Health Interface (UHI) in India dated December 14, 2022 (link)
- Synopsis of NHA’s Consultation Paper on Operationalising Unified Health Interface (UHI) in India dated December 14, 2022 (link)
CensorWatch: On the Implementation of Online Censorship in India
Abstract: State authorities in India order domestic internet service providers (ISPs) to block access to websites and services. We developed a mobile application, CensorWatch, that runs network tests to study inconsistencies in how ISPs conduct censorship. We analyse the censorship of 10,372 sites, with measurements collected across 71 networks from 25 states in the country. We find that ISPs in India rely on different methods of censorship with larger ISPs utilizing methods that are harder to circumvent. By comparing blocklists and contextualising them with specific legal orders, we find concrete evidence that ISPs in India are blocking different websites and engaging in arbitrary blocking, in violation of Indian law.
The paper authored by Divyank Katira, Gurshabad Grover, Kushagra Singh and Varun Bansal appeared as part of the conference on Free and Open Communications on the Internet (FOCI '23) and can be accessed here.
The authors would like to thank Pooja Saxena and Akash Sheshadri for contributing to the visual design of Censorwatch; Aayush Rathi, Amber Sinha and Vipul Kharbanda for their valuable legal inputs; Internet Freedom Foundation for their support; ipinfo.io for providing free access to their data and services. The work was made possible because of research grants to the Centre for Internet and Society from the MacArthur Foundation, Article 19, the East-West Management Institute and the New Venture Fund. Gurshabad Grover’s contributions were supported by a research fellowship from the Open Tech Fund.
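For readers unfamiliar with how such measurements work, the sketch below illustrates one simple heuristic a client-side measurement tool might use: comparing DNS answers from the ISP-assigned resolver with those from a public resolver. This is an illustrative simplification, not the CensorWatch implementation, and it assumes the third-party dnspython package is available.

```python
# Minimal, illustrative sketch of one censorship heuristic: comparing answers
# from the system's default (ISP-assigned) resolver with a public resolver.
# NOT the CensorWatch implementation; assumes `dnspython` is installed
# (pip install dnspython).
import socket
from typing import Optional

import dns.resolver  # third-party: dnspython


def resolve_with(domain: str, nameserver: Optional[str] = None) -> set:
    """Return the set of A-record IPs for `domain`, optionally via a specific resolver."""
    if nameserver is None:
        # Uses the operating system's default resolver (typically the ISP's).
        return {socket.gethostbyname(domain)}
    resolver = dns.resolver.Resolver()
    resolver.nameservers = [nameserver]
    return {rr.address for rr in resolver.resolve(domain, "A")}


def dns_answers_differ(domain: str, public_resolver: str = "1.1.1.1") -> bool:
    """Flag a domain if the ISP resolver and a public resolver disagree entirely.

    Disagreement alone is not proof of blocking (CDNs return different IPs by
    location), so a real measurement tool corroborates this with other tests.
    """
    try:
        isp_ips = resolve_with(domain)
        public_ips = resolve_with(domain, public_resolver)
    except Exception:
        return False  # treat lookup failures as inconclusive in this sketch
    return isp_ips.isdisjoint(public_ips)


if __name__ == "__main__":
    for site in ["example.com", "example.org"]:  # placeholder test list
        print(site, "answers differ:", dns_answers_differ(site))
```

A full measurement client would add further tests (for instance, for HTTP and SNI-based interference) and compare results across networks before concluding that a site is blocked.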
CoWIN Breach: What Makes India's Health Data an Easy Target for Bad Actors?
The article was originally published in the Quint on 19 June 2023.
Last week, it was reported that due to an alleged breach of the CoWIN platform, details such as Aadhaar and passport numbers of Indians were made public via a Telegram bot.
While Minister of State for Information Technology Rajeev Chandrashekar put out information acknowledging that there was some form of a data breach, there is no information on how the breach took place or when a past breach may have taken place.
This data leak is yet another example of our health records being exposed in the recent past – during the pandemic, there were reports of COVID-19 test results being leaked online. The leaked information included patients’ full names, dates of birth, testing dates, and names of centres in which the tests were held.
In December last year, five servers of the All India Institute of Medical Science (AIIMS) in Delhi were under a cyberattack, leaving sensitive personal data of around 3-4 crore patients compromised.
In such cases, the Indian Computer Emergency Response Team (CERT-In) is the agency responsible for looking into the vulnerabilities that may have led to them. However, till date, CERT-In has not made its technical findings into such attacks publicly available.
The COVID-19 Pandemic Created Opportunity
The pandemic saw a number of digitisation policies being rolled out in the health sector; the most notable one being the National Digital Health Mission (or NDHM, later re-branded as the Ayushman Bharat Digital Mission).
Mobile phone apps and web portals launched by the central and state governments during the pandemic are also examples of this health digitisation push. The rollout of the COVID-19 vaccinations also saw the deployment of the CoWIN platform.
Initially, it was mandatory for individuals to register on CoWIN to get a vaccination appointment; there was no option to walk in and register or book an appointment on-site. But the Centre subsequently modified this rule, and walk-in appointments and registrations on CoWIN became permissible from June 2021.
However, a study conducted by the Centre for Internet and Society (CIS) found that states such as Jharkhand and Chhattisgarh, which have low internet penetration, permitted on-site registration for vaccinations from the beginning.
The rollout of the NDHM also saw Health IDs being generated for citizens.
In several reported cases across states, this rollout happened during the COVID-19 vaccination process – without the informed consent of the concerned person.
The beneficiaries who have had their Health IDs created through the vaccination process had not been informed about the creation of such an ID or their right to opt out of the digital health ecosystem.
A Web of Health Data Policies
Even before the pandemic, India was working towards a Health ID and a health data management system.
The components of the umbrella National Digital Health Ecosystem (NDHE) are the National Digital Health Blueprint published in 2019 (NDHB) and the NDHM.
The Blueprint was created to implement the National Health Stack (published in 2018), which facilitated the creation of Health IDs, whereas the NDHM was drafted to drive the implementation of the Blueprint and to promote and facilitate the evolution of the NDHE.
The National Health Authority (NHA), established in 2018, has been given the responsibility of implementing the National Digital Health Mission.
2018 also saw the Digital Information Security in Healthcare Act (DISHA), which was to regulate the generation, collection, access, storage, transmission, and use of Digital Health Data ("DHD") and associated personal data.
However, since its call for public consultation, no progress has been made on this front.
In addition to documents that chalk out the functioning and the ecosystem of a digitised healthcare system, the NHA has released policy documents such as:
- the Health Data Management Policy (revised three times; the latest version released in April 2022)
- the Health Data Retention Policy (released in April 2021)
- the Consultation Paper on the Unified Health Interface (UHI) (released in December 2022)
Along with these policies, in 2022, the NHA released the NHA Data Sharing Guidelines for the Pradhan Mantri Jan Aarogya Yojana (PM-JAY) – India’s state health insurance policy.
However, these draft guidelines repeat the pattern of earlier health data policies, in that they make no reference to the policies that preceded them; the PM-JAY Data Sharing Guidelines, published in August 2022, did not even refer to the draft National Digital Health Data Management Policy (published in April 2022).
Interestingly, the recent health data policies do not mention CoWIN. Failing to cross-reference or mention preceding policies creates a lack of clarity on which documents are being used as guidelines by healthcare providers.
Can a Data Protection Bill Be the Solution?
The draft Data Protection Bill, 2021, defined health data as “…the data related to the state of physical or mental health of the data principal and includes records regarding the past, present or future state of the health of such data principal, data collected in the course of registration for, or provision of health services, data associated with the data principal to the provision of specific health services.”
However, this definition as well as the definition of sensitive personal data was removed from the current version of the Bill (Digital Personal Data Protection Bill, 2022).
Omitting these definitions from the Bill removes a category of data which, if collected, warrants increased responsibility and liability. The handling of health data, financial data, government identifiers, and the like needs to come with a higher level of responsibility, as these are sensitive details about a person.
The threats posed by such data being leaked are not limited to spam messages, fraud, and impersonation; companies that get hold of this coveted data can gather insights and train their systems and algorithms without needing to seek anyone’s consent, and without facing the consequences of the harm caused.
While the current version of the draft DPDP Bill states that the data fiduciary shall notify the data principal of any breach, the draft Bill also states that the Data Protection Board “may” direct the data fiduciary to adopt measures that remedy the breach or mitigate harm caused to the data principal.
The Bill also prescribes penalties of up to Rs 250 crore if the data fiduciary fails to take reasonable security safeguards to prevent a personal data breach, and a penalty of up to Rs 200 crore if the fiduciary fails to notify the Data Protection Board and the data principal of such a breach.
While these steps, if implemented through legislation, would make organisations processing data take data security more seriously, the removal of sensitive personal data from the Bill’s definitions would mean that data fiduciaries processing health data will not have to take any additional steps beyond reasonable security safeguards.
The absence of a clear indication of security standards will affect data principals and fiduciaries.
Looking to bring more efficiency to governance systems, the Centre launched the Digital India Mission in 2015. The press release by the central government reporting the approval of the programme by the Cabinet of Ministers speaks of ‘cradle to grave’ digital identity as one of its vision areas.
The ambitious Universal Health ID and health data management policies are an example of this digitisation mission.
However, breaches like this are reminders that without proper data security measures, and without a clearly designated person responsible for data security, the data remains vulnerable to attack.
While the UK and Australia have also seen massive data breaches in the past, India is at the start of its health data digitisation journey and has the ability to set up strong security measures, employ experienced professionals, and establish legal resources to ensure that data breaches are minimised and swift action can be taken in case of a breach.
The first step towards understanding the vulnerabilities would be to publish CERT-In’s reports on this breach and to guide other institutions to check for the same weaknesses, so that they are better prepared for future breaches and attacks.
Health Data Management Policies - Differences Between the EU and India
This issue brief was reviewed and edited by Pallavi Bedi
Introduction
Health data has seen increased interest the world over, on account of the amount of information and inferences that can be drawn not just about a person but also about the population at large. The COVID-19 pandemic brought a further focus on health data and required players that earlier did not collect health data, including offices and public spaces, to start collecting it. This increased interest has prompted further thought on how health data is regulated and a greater understanding of its sensitivity, because of which countries are at varying stages of regulating health data over and above their existing data protection regulations. These regulations look not only at ensuring the privacy of the individual but also at ways in which this data can be shared with companies, researchers and public bodies to foster innovation and to monetise this valuable data. For a number of countries, however, the effort is still focused on the digitisation of health data. India has been in the process of implementing a nationwide health ID that a person can use to get all their medical records in one place. The National Health Authority (NHA) has also, since 2017, been publishing policies that look at the framework and ecosystem of health data, as well as the management and sharing of health data. However, these policies and a scattered implementation of the health ID are being carried out without a data protection legislation in place. In comparison, Europe, which already has an established health ID system and a data protection legislation (the GDPR), is looking at the next stage of health data management through the EU Health Data Space (EUHDS). Through this issue brief, we highlight the differences in the approaches to health data management taken by the EU and India, and look at possible recommendations for India for creating a privacy-preserving health data management policy.
Background
EU Health Data Space
The EU Health Data Space (EUHDS) was proposed by the European Commission as a way to create an ecosystem that combines rules, standards, practices and infrastructure around health data under a common governance framework. The EUHDS rests on two pillars, MyHealth@EU and HealthData@EU: MyHealth@EU facilitates the easy flow of health data between patients and healthcare professionals across member states, while HealthData@EU facilitates the secondary use of data, giving policymakers and researchers access to health data to foster research and innovation.[1] The EUHDS aims to provide a trustworthy system to access and process health data, and builds on the General Data Protection Regulation (GDPR) and the proposed Data Governance Act.[2]
India’s health data policies:
The last few years have seen a flurry of health policies and documents being published, and the creation of a framework for the evolution of a National Digital Health Ecosystem (NDHE). The components of this ecosystem were the National Digital Health Blueprint published in 2019 (NDHB) and the National Digital Health Mission (NDHM). The Blueprint was created to implement the National Health Stack (published in 2018), which facilitated the creation of Health IDs,[3] whereas the NDHM was drafted to drive the implementation of the Blueprint and to promote and facilitate the evolution of the NDHE.[4]
The National Health Authority (NHA), established in 2018, has been given the responsibility of implementing the National Digital Health Mission. 2018 also saw the Digital Information Security in Healthcare Act (DISHA), a proposed legislation that laid down provisions to regulate the generation, collection, access, storage, transmission and use of Digital Health Data ("DHD") and associated personal data.[5] However, since its call for public consultation, no progress has been made on this front.
Along with these strategy documents, the NHA has also released policy documents, most notably the Health Data Management Policy (revised three times; the latest version released in April 2022), the Health Data Retention Policy (released in April 2021), and the Consultation Paper on the Unified Health Interface (UHI) (released in March 2021). In addition, in 2022 the NHA released the NHA Data Sharing Guidelines for the Pradhan Mantri Jan Aarogya Yojana (PM-JAY), India’s state health insurance scheme.
However, these draft guidelines repeat the pattern of earlier health data policies in that they make no reference to the policies that preceded them; the PM-JAY Data Sharing Guidelines, published in August 2022, did not even refer to the draft National Digital Health Data Management Policy (published in April 2022). As these examples show, the documents do not cross-refer to or mention preceding health data documents, creating a lack of clarity about which documents are being used as guidelines by healthcare providers.
In addition, the Personal Data Protection Bill has been revised three times since its release in 2018. The latest version was published for public comments on November 18, 2022; it removes the distinction between sensitive personal data and personal data and clubs all personal data under the single umbrella of personal data. The definitions of health and health data have also been deleted, creating further uncertainty with respect to health data, as the different policies mentioned above rely on the data protection legislation to define it.
Comparison of the Health Data Management Approaches
Interoperability with Data Protection Legislations
At the outset, the key difference between the EU’s and India’s health data management policies is the legal backing that the EUHDS has in the GDPR. The EUHDS has a strong base of rules for privacy and data protection, as it follows, draws from, and works in tandem with the General Data Protection Regulation (GDPR). Its provisions also build upon legislation such as the Medical Devices Regulation and the In Vitro Diagnostics Regulation. With particular respect to the GDPR, the EUHDS draws from the rights set out for the protection of personal data, including electronic health data.
The Indian health data policies, however, currently exist in the vacuum created by the multiple versions of the Data Protection Bill that have been published and then withdrawn or replaced. The current version, the Digital Personal Data Protection Bill 2022, seems to take a step backward in terms of health data. It does away with sensitive personal data (of which health data was a part) and keeps only one category of data: personal data. It can be construed that the Bill considers all personal data as needing the same level of protection, but that is not so in practice. The Bill does not, at the moment, mandate additional responsibilities on data fiduciaries[6] that deal with health data (something present in all the earlier versions of the Bill and in data protection legislation across other jurisdictions), and leaves Significant Data Fiduciaries (who have more responsibilities) to be designated by rules, based on the sensitivity of data, as decided by the government at a later date.[7] In addition, the Bill does not define “health data”; this is a cause for worry because the existing health data policies also do not define health data, often relying on the definition in the versions of the Data Protection Bill.
Definitions and Scope
The EUHDS defines ‘personal electronic health data’ as data concerning health and genetic data as defined in Regulation (EU) 2016/679[8], as well as data referring to determinants of health, or data processed in relation to the provision of healthcare services, processed in an electronic form. Health data by these parameters would then include not just data about the status of health of a person which includes reports and diagnosis, but also data from medical devices.
In India, the Health Data Management Policy 2022 defines “Personal Health Records” (PHR) as health records that are initiated and maintained by an individual. The policy also states that a PHR would be able to reveal a complete and accurate summary of the health and medical history of an individual by gathering data from multiple sources and making it accessible online. However, there is no definition of health data that companies or users can rely on to know what falls under health data. The 2018, 2019 and 2021 versions of the data protection legislation had definitions of the term health data; the 2022 version of the Bill does away with the definition.
Health data and wearable devices
One of the forward-looking provisions in the EUHDS is the inclusion of devices that record health data within its scope. This includes the requirement that they be added to registries to enable easy access and scrutiny. The document also requires voluntary labelling of wellness applications and registration of EHR systems and wellness applications. This is not just from a regulatory point of view but also for data portability, so that people can control the data they share. In addition, where manufacturers of medical devices and high-risk AI systems declare interoperability with EHR systems, they will need to comply with the essential requirements on interoperability under the EUHDS.
In India, the Health Data Management Policy 2022, while listing the entities and individuals who are part of the ABDM ecosystem,[9] mentions medical device manufacturers but does not mention device sellers or use terms such as wellness applications or wearable devices. Currently, the regulation of medical devices falls under the purview of the Drugs and Cosmetics Act, 1940 (DCA) read along with the Medical Devices Rules, 2017 (MDR). However, in 2020, possibly due to the pandemic, the Indian government along with the Drugs Technical Advisory Board (DTAB) issued two notifications: the first expanded the scope of medical devices, which had earlier been limited to only 37 categories excluding medical apps, and the second notified the Medical Devices (Amendment) Rules, 2020. Together, these changes brought all medical devices under the DCA and expanded the categories of medical devices. It is still unclear, however, whether fitness tracker apps that come with devices are regulated, as the rules and the DCA still rely on the manufacturer to self-identify as a medical device.[10] This regulatory uncertainty has not brought about any change in how this data is being used, and insurance companies at times encourage people to sync their fitness tracker data.[11]
Multiple use of health data
The EUHDS provides for two types of use of data: primary and secondary. In the document, the EU notes that while a number of organisations collect data, this data is not made available for purposes other than those for which it was collected. To ensure that researchers, innovators and policymakers can use this data, the EU encourages data holders to contribute to this effort by making the different categories of electronic health data they hold available for secondary use. The data available for secondary use would also include user-generated data, such as data from devices, applications or other wearables and digital health applications. However, the regulation cautions against using this data for measures and decisions that are detrimental to the individual, such as increasing insurance premiums. The EUHDS also states that, as the data is sensitive personal data, the health data access bodies should ensure that data being shared will be processed in a privacy-preserving manner. This could include pseudonymisation, anonymisation, generalisation, suppression and randomisation of personal data.
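As a minimal sketch of one of these techniques (pseudonymisation), and not of any mechanism prescribed in the EUHDS text itself, the example below replaces a direct identifier with a keyed hash so that records about the same person can still be linked within a dataset without exposing the identifier; the key, field names and values are hypothetical.

```python
import hashlib
import hmac

# Secret key held only by the data access body (hypothetical placeholder value).
PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-secret"


def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Records belonging to the same person still share a pseudonym, so
    longitudinal analysis remains possible, but the original identifier
    cannot be recovered without the key.
    """
    return hmac.new(PSEUDONYMISATION_KEY, identifier.encode(), hashlib.sha256).hexdigest()


record = {"health_id": "12-3456-7890-0001", "diagnosis": "type 2 diabetes"}
shared_record = {
    "pseudonym": pseudonymise(record["health_id"]),
    "diagnosis": record["diagnosis"],
}
print(shared_record)
```

Unlike anonymisation, this is reversible by whoever holds the key, which is why the EUHDS asks organisations seeking pseudonymised data to justify why anonymous data would not suffice.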
While the document states how important secondary use of the data is for public health, research and innovation, it also requires that the data not be provided without adequate checks. The EUHDS requires the organisation seeking access to provide several pieces of information and to be evaluated by the data access body. This information should include the legitimate interest, the necessity, and the processing the data will go through. Where the organisation is seeking pseudonymised data, it needs to explain why anonymous data would not be sufficient. To ensure a consistent approach between health data access bodies, the EUHDS states that the European Commission should support the harmonisation of data applications as well as data requests.
In India, while multiple health data documents state the need to share data for public interest, research and innovation, not much thought has been given to ensuring that the data is not misused and that there is harmonisation between the bodies that provide it. Most recently, the PM-JAY guidelines state that the NHA shall make aggregated and anonymised data available through a public dashboard for the purpose of facilitating health and clinical research, academic research, archiving, statistical analysis, policy formulation, the development and promotion of diagnostic solutions, and such other purposes as may be specified by the NHA. Such data can be accessed through a request to the Data Sharing Committee[12] for the sharing of such information through secure modes, including clean rooms and other such secure modes specified by the NHA. However, the document does not explain what clean rooms are in this context.
The Health Data Management Policy 2022 states that data fiduciaries (data controllers/processors under the data protection legislation) can themselves make anonymised or de-identified data available in an aggregated form, based on technical processes and anonymisation protocols which may be specified by the NDHM in consultation with MeitY. The purposes mentioned in this policy include health and clinical research, academic research, archiving, statistical analysis, policy formulation, the development and promotion of diagnostic solutions, and such other purposes as may be specified by the NDHMP. The policy states that, in order to access the anonymised or de-identified data, the entity requesting it would have to provide relevant information such as its name, the purpose of use, and the contact details of a nodal person. While the policy does not go into details about the scrutiny of the organisations seeking this data, it does state that the data will be provided on such terms as may be stipulated.
The issue is that the two documents published by the NHA do not follow a similar process for obtaining the data: the NDHMP requires the data fiduciary to share the data directly, while the PM-JAY guidelines require the data to be shared by the Data Sharing Committee, creating duplicate datasets and affecting the quality of the data being shared.
Recommendations for India
Need for a data protection legislation:
While the EUHDS is still a draft document and the end result could differ based on consultations and deliberations, it has a strong base with respect to privacy and data protection, grounded in earlier regulations and the GDPR. The definitions of what counts as health data and the parameters for managing it create a more streamlined process for all stakeholders. More importantly, the GDPR and other regulations provide a means of recourse for people. In India, the health data policies and strategy documents have been published and enforced before a data protection legislation has been passed. Moreover, India, unlike the EU, has only just begun looking at a universal health ID and the digitisation of the healthcare system; ideally, it would be better to take one step at a time and first address the issues that may arise from the universal health ID. In addition, multiple policies, without a strong data protection legislation providing parameters and definitions, could mean that the health data management policies benefit only certain people. This also creates uncertainty about where an individual can go in case of harms caused by the processing of their data, and about which authority would govern questions around health data. The division of health data management across different documents also creates multiple silos of data management, which leads to data duplication and issues with data quality.
Secondary use of data
While both the EUHDS and India's Health Data Management Policy look at the sharing of health data with researchers and private organisations in order to foster innovation, dividing access to data based on who uses it is a good way to ensure that only interested parties have access. With respect to the health data policies in India, a number of policies talk about the sharing of anonymised data with researchers; however, because the documents are scattered, the same data could be shared by multiple health data entities, making it possible to identify people. For example, the Health Data Management Policy could share anonymised data on the health services used by a person, while the PM-JAY policy could share data about their insurance cover, and a researcher could match the two and come closer to identifying individuals. Multiple studies have also shown that anonymisation of data is not permanent and can be broken. This is all the more concerning since the policies do not place limits or checks on who the researchers are or what the end goal of the data sought by them is; the policies seem to rely on anonymisation as the only check for privacy. This data could be used to de-anonymise people, could be used by companies working with researchers to obtain large amounts of data to train their systems, and could lead to greater surveillance, increased insurance scrutiny, and so on. The NHA and Indian health policymakers could look at the restrictions and checks that the EUHDS creates for the secondary use of data, and create systems of checks and categories of researchers and organisations seeking data, to ensure minimal risk to an individual's data.
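The matching risk described above can be illustrated with a toy linkage attack: two separately released "anonymised" extracts that share quasi-identifiers can be joined to single out individuals. The datasets, field names and values below are entirely hypothetical and are not drawn from any NHA release; this is a minimal sketch of the re-identification technique, not an analysis of an actual dataset.

```python
import pandas as pd

# Hypothetical "anonymised" extracts released by two different bodies.
health_services = pd.DataFrame([
    {"pincode": "751001", "birth_year": 1987, "gender": "F", "service": "dialysis"},
    {"pincode": "751001", "birth_year": 1991, "gender": "M", "service": "physiotherapy"},
])
insurance_claims = pd.DataFrame([
    {"pincode": "751001", "birth_year": 1987, "gender": "F", "claim_amount": 250000},
])

# Joining on shared quasi-identifiers links the two releases; if only one
# person in that pincode matches, the "anonymised" records point to them.
linked = health_services.merge(insurance_claims, on=["pincode", "birth_year", "gender"])
print(linked)
```

Even coarse attributes such as pincode, birth year and gender are often enough to make a record unique, which is why access checks cannot rely on anonymisation alone.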
Conclusion
While the EU Health Data Space has been criticised for facilitating the sharing of vast amounts of data with private companies and the collection of data by governments, codifying the rules in legislation at least provides a way to regulate the flow of health data. While India does not have to emulate the EU and produce a similar document, it could look at the best practices and the issues being highlighted with the EUHDS. Indian lawmakers have looked to the GDPR for guidance on the draft data protection legislation; they could do the same with regard to health data and health data management. One possible way to ensure both the free flow of health data and the safeguards of a regulation would be to re-introduce DISHA, which, much like the EUHDS, could act as a legislation that anchors the multiple health data policies, including a standard definition of health data, grievance redressal bodies, and adjudicating authorities and their functions. In addition, a legislation dedicated to health data would also lift some of the burden from the yet-to-be-formed data protection authority.
[1] “European Health Data Space”, European Commission, 03 May 2022, https://health.ec.europa.eu/ehealth-digital-health-and-care/european-health-data-space_en
[2] “European Health Data Space”
[3] “National Digital Health Blueprint”, Ministry of Health and Family Welfare, Government of India, https://abdm.gov.in:8081/uploads/ndhb_1_56ec695bc8.pdf
[4] “National Digital Health Blueprint”
[5] “DISHA – India's Probable Response To The Law On Protection Of Digital Health Data”, Mondaq, accessed 13 June 2023, https://www.mondaq.com/india/healthcare/1059266/disha-india39s-probable-response-to-the-law-on-protection-of-digital-health-data
[6] “The Digital Personal Data Protection Bill 2022”, accessed 13 June 2023, https://www.meity.gov.in/writereaddata/files/The%20Digital%20Personal%20Data%20Potection%20Bill%2C%202022_0.pdf
[7] “The Digital Personal Data Protection Bill 2022”
[8] Regulation (EU) 2016/679 defines health data as “Personal data concerning health should include all data pertaining to the health status of a data subject which reveal information relating to the past, current or future physical or mental health status of the data subject. This includes information about the natural person collected in the course of the registration for, or the provision of, health care services as referred to in Directive 2011/24/EU of the European Parliament and of the Council (1) to that natural person; a number, symbol or particular assigned to a natural person to uniquely identify the natural person for health purposes; information derived from the testing or examination of a body part or bodily substance, including from genetic data and biological samples; and any information on, for example, a disease, disability, disease risk, medical history, clinical treatment or the physiological or biomedical state of the data subject independent of its source, for example from a physician or other health professional, a hospital, a medical device or an in vitro diagnostic test.”
[9] For creating an integrated, uniform and interoperable ecosystem in a patient or individual centric manner, all the government healthcare facilities and programs, in a gradual/phased manner, should start assigning the same number for providing any benefit to individuals.
[10] For example, a manufacturer of a fitness tracker capable of monitoring heart rate could state that the intended purpose of the device was fitness or wellness, as opposed to early detection of heart disease, thereby not falling under the purview of the regulation.
[11] “GOQii Launches GOQii Smart Vital 2.0, an ECG-Enabled Smart Watch with Integrated Outcome based Health Insurance & Life Insurance”, Healthcare Executive, accessed 13 June 2023, https://www.healthcareexecutive.in/blog/ecg-enabled-smart-watch
[12] The guidelines only state that the Committee will be responsible for ensuring compliance with the guidelines in relation to the personal data under its control, and do not go into the details of defining the Committee.
Deceptive Design in Voice Interfaces: Impact on Inclusivity, Accessibility, and Privacy
The original blog post can be accessed here.
Introduction
Voice Interfaces (VIs) have come a long way in recent years and are easily available as inbuilt technology with smartphones, downloadable applications, or standalone devices. In line with growing mobile and internet connectivity, there is now an increasing interest in India in internet-based multilingual VIs which have the potential to enable people to access services that were earlier restricted by language (primarily English) and interface (text-based systems). This current interest has seen even global voice applications such as Google Home and Amazon’s Alexa being available in Hindi (Singal, 2019) as well as the growth of multilingual voice bots for certain banks, hotels, and hospitals (Mohandas, 2022).
The design of VIs can have a significant impact on the behavior of the people using them. Deceptive design patterns, or design practices that trick people into taking actions they might otherwise not take (Tech Policy Design Lab, n.d.), have gradually become pervasive in most digital products and services. Their use in visual interfaces has been widely criticized by researchers (Narayanan, Mathur, Chetty, and Kshirsagar, 2020) and has drawn recent policy interventions (Schroeder and Lützow-Holm Myrstad, 2022). As VIs become more relevant and mainstream, it is critical to anticipate and address the use of deceptive design patterns in them. This article, based on our learnings from the study of VIs in India, examines the various types of deceptive design patterns in VIs and focuses on their implications in terms of linguistic barriers, accessibility, and privacy.
Potential deceptive design patterns in VIs
Our research findings suggest that VIs in India are still a long way off from being inclusive, accessible and privacy-preserving. While there has been some development in multilingual VIs in India, their compatibility has been limited to a few Indian languages (Mohandas, 2022) (Naidu, 2022). The potential of VIs as a tool for people with vision loss and certain cognitive disabilities such as dyslexia is widely recognized (Pradhan, Mehta, and Findlater, 2018), but our conversations suggest that most developers and designers do not consider accessibility when conceptualizing a voice-based product, which leads to interfaces that do not understand non-standard speech patterns, or have only text-based privacy policies (Mohandas, 2022). Inaccessible privacy policies full of legal jargon, along with the lack of regulations specific to VIs, also make people vulnerable to privacy risks.
Deceptive design patterns can be used by companies to further these gaps in VIs. As with visual interfaces, the affordances and attributes of VIs can determine the way in which they can be used to manipulate behavior. Kentrell Owens et al., in their recent research, lay down six unique properties of VIs that may be used to implement deceptive design patterns (Owens, Gunawan, Choffnes, Emami-Naeini, Kohno, and Roesner, 2022). Expanding upon these properties, and drawing from our research, we look at how they can be exacerbated in India.
Making processes cumbersome
VIs are often limited by their inability to share large amounts of information through voice. They thus operate in combination with a smartphone app or a website. This can be intentionally used by platforms to make processes such as changing privacy settings or accessing the full privacy notice inconvenient for people to carry out. In India, this is experienced while unsubscribing from services such as Amazon Prime (Owens et al., 2022). Amazon Echo Dot presently allows individuals to subscribe to an Amazon Prime membership using a voice command, but directs them to use the website in order to unsubscribe from the membership. This can also manifest in the form of canceling orders and changing privacy settings.
VIs follow a predetermined linear structure that ensures a tightly controlled interaction. People make decisions based on the information they are provided with at various steps. Changing their decision or switching contexts could involve going back several steps. People may accept undesirable actions from the VI in order to avoid this added effort (Owens et al., 2022). The urgency to make decisions on each step can also cause people to make unfavorable choices such as allowing consent to third party apps. The VI may prompt advertisements and push for the company’s preferred services in this controlled conversation structure, which the user cannot side-step. For example, while setting up the Google voice assistant on any device, it nudges people to sign into their Google account. This means the voice assistant gets access to their web and app activity and location history at this step. While the data management of Google accounts can be tweaked through the settings, it may get skipped during a linear set-up structure. Voice assistants can also push people to opt into features such as ads personalisation, default news sources, and location tracking.
Making options difficult to find
Discoverability is another challenge for VIs. This means that people might find it difficult to discover available actions or options using just voice commands. This gap can be misused by companies to trick people into making undesirable choices. For instance, while purchasing items, the VI may suggest products that have been sponsored and not share full information on other cheaper products, forcing people to choose without complete knowledge of their options. Many mobile based voice apps in India use a combination of images or icons with the voice prompts to enable discoverability of options and potential actions, which excludes people with vision loss (Naidu, 2022). These apps comprise a voice layer added to an otherwise touch-based visual platform so that people are able to understand and navigate through all available options using the visual interface, and use voice only for purposes such as searching or narrating. This means that these apps cannot be used through voice alone, making them disadvantageous for people with vision loss.
Discreet integration with third parties
VIs can use the same voice for varying contexts. In the case of Alexa, Skills, which are apps on its platform, have the same voice output and invocation phrases as its own in-built features. End users find it difficult to differentiate between an interaction with Amazon and one with Skills, which are third-party applications. This can cause users to share information with third parties that they otherwise would not have (Mozilla Foundation, 2022). There are numerous Amazon Skills in Hindi, and people might not be aware that the developers of these Skills are not vetted by Amazon. This misunderstanding can create significant privacy or security risks if Skills are linked to contacts, banking, or social media accounts.
Lack of language inclusivity
The lack of local language support, colloquial translations, and accents can lead to individuals not receiving clear and complete information. A VI's failure to understand certain accents can also make people feel isolated (Harwell, 2018). While voice assistants and even voice bots in India are available in a few Indic languages, the default initial setup, privacy policies, and terms and conditions are still in English. The translated policies also use literary language that is difficult for people to understand, and miss out on colloquial terms. This could mean that a person might not have fully understood these notices and hence not have given informed consent. Such use of unclear language and the unavailability of information in Indic languages can be viewed as a deceptive design pattern.
Making certain choices more apparent
The different dimensions of voice, such as volume, pitch, rate, fluency, pronunciation, articulation, and emphasis, can be controlled and manipulated to implement deceptive design patterns. VIs may present the more privacy-invasive options more loudly or clearly, and the more privacy-preserving options more softly or quickly. They can use tone modulations to shame people into making a specific choice (Owens et al., 2022). For example, media streaming platforms may ask people to subscribe to a premium account to avoid ads at normal volume, and mention the option to keep ads at a lower volume. Companies have also been observed to discreetly integrate product advertisements into voice assistants using tone. SKIN, a neurotargeting advertising strategy business, used a change in the voice assistant's tone to suggest a dry throat in order to advertise a drink (Chatellier, Delcroix, Hary, and Girard-Chanudet, 2019).
The attribution of gender, race, class, and age through stereotyping can create a persona of the VI for the user. This can extend to personality traits, such as an extroverted or an introverted, docile or aggressive character (Simone, 2020). The default use of female voices with a friendly and polite persona for voice assistants has drawn criticism for perpetuating harmful gender stereotypes (Cambre and Kulkarni, 2019). Although there is an option to change the wake word “Alexa” in Amazon’s devices, certain devices and third party apps do not work with another wake word (Ard, 2021). Further, projection of demographics can also be used to employ deceptive design patterns. For example, a VI persona that is constructed to create a perception of intelligence, reliability, and credibility can have a stronger influence on people’s decisions. Additionally, the effort to make voice assistants as human sounding as possible without letting people know they are human, could create a number of issues (X. Chen and Metz, 2019). First time users might divulge sensitive information thinking that they are interacting with a person. This becomes more ethically challenging when persons with vision loss are not able to know who they are interacting with.
Recording without notification
Owens et al. note that VIs occupy physical spaces, due to which they have a much wider impact than a visual interface (Owens et al., 2022). The always-on nature of virtual assistants could result in the personal information of a guest being recorded without their knowledge or consent, as consent is only given at the setup stage by the owner of the device or smartphone.
Making personalization more convenient through data collection
VIs are trained to adapt to the experience and expertise of the user. Virtual assistants provide personalization and the ability to download a number of skills, save payment information, and store phone contacts. In order to differentiate between multiple users on the same VI, individuals talking to the device are profiled based on their speech patterns and/or voice biometrics. This also helps in controlling or restricting content for children (Naidu, 2022). Commands are also tracked to identify and list their intent for future use. This accumulation of specific and verified data can be used to provide better targeted advertisements, and can possibly be shared with law enforcement agencies in certain cases. Recently, a payment gateway company was made to share customer information with law enforcement without the customer's knowledge. This included not just information about the client but also sensitive personal data of the people who had used the gateway for transactions with that customer. While providing such details is not illegal, and companies are required to comply with requests from law enforcement, if more people knew that every conversation in the house could be accessible to law enforcement, they would make more informed choices about what the VI records.
Reducing friction in actions desired by the platform
One of the fundamental advantages of VIs is that they can reduce an action that would otherwise take several steps to a single command. While this is helpful to people interacting with them, the feature can also be used to reduce friction from actions that the platform wants them to take. These actions could include sharing sensitive information, providing consent to further data sharing, and making purchases. An example of this can be seen where children have found it very easy to purchase items using Alexa (BILD, 2019).
Recommendations for Designers and Policymakers
Through these deceptive design patterns, VIs can obstruct and control information according to the preferences of the platform. This can have a heightened impact on people with less experience with technology. Presently, profitability is a key driving factor for the development and design of VI products. More importance is given to data-based and technical approaches, and interfaces are often conceptualized by people with technical expertise, with little input from designers at the early stages (Naidu, 2022). Designers also focus more on the usability and functionality of the interfaces by enabling personalization, but are often not as sensitive to safeguarding the rights of the individuals using them. In order to tackle deceptive design, designers must work towards prioritizing ethical practice, and building in more agency and control for people who use VIs.
Many of the potential deceptive design patterns can be addressed by designing for accessibility and inclusivity in a privacy preserving manner. This includes vetting third-party apps, providing opt-outs, and clearly communicating privacy notices. Privacy implications can also be prompted by the interface at the time of taking actions. There should be clear notice mechanisms such as a prominent visual cue to alert people when a device is on and recording, along with an easy way to turn off the ‘always listening’ mode. The use of different voice outputs for third party apps can also signal to people about who they are interacting with and what information they would like to share in that context.
Training data that covers a diverse population should be built for more inclusivity. A linear and time-efficient architecture is helpful for people with cognitive disabilities. But, this linearity can be offset by adding conversational markers that let the individual know where they are in the conversation (Pearl, 2016). This could address discoverability as well, allowing people to easily switch between different steps. Speech-only interactions can also allow people with vision loss to access the interface with clarity.
A number of policy documents, including the 2019 version of India's Personal Data Protection Bill, emphasize the need for privacy by design. But they do not mention how deceptive design practices could be identified and avoided, or prescribe penalties for using these practices (Naidu, Sheshadri, Mohandas, and Bidare, 2020). In the case of VIs particularly, there is a need to treat voice as biometric data that is being collected, and to have related regulations in place to prevent harm to users. In terms of accessibility as well, there could be policies that require not just websites but also apps (including voice-based apps) to be compliant with international accessibility guidelines, and to conduct regular audits to ensure that the apps meet the accessibility threshold.
Detecting Encrypted Client Hello (ECH) Blocking
This blogpost was edited by Torsha Sarkar.
The Transport Layer Security (TLS) protocol, which is widely recognised as the lock sign in a web browser’s URL bar, encrypts the contents of internet connections when an internet user visits a website so that network intermediaries (such as Internet Service Providers, Internet Exchanges, undersea cable operators, etc.) cannot view the private information being exchanged with the website.
TLS, however, suffers from a privacy issue – the protocol transmits a piece of information known as the Server Name Indication (or SNI) which contains the name of the website a user is visiting. While the purpose of TLS is to encrypt private information, the SNI remains unencrypted – leaking the names of the websites internet users visit to network intermediaries, who use this metadata to surveil internet users and censor access to certain websites. In India, two large internet service providers – Reliance Jio and Bharti Airtel – have been previously found using the SNI field to block access to websites.
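A common way to check whether a network filters on the SNI field (this is a simplified sketch of the general technique, not the exact methodology of the studies referenced above) is to complete TLS handshakes to the same reachable server while presenting different names in the unencrypted SNI: if the handshake succeeds with an innocuous name but is reset with the test name, a middlebox is likely matching on the SNI. The server address and domain names below are placeholders.

```python
import socket
import ssl

CONTROL_SERVER = "93.184.216.34"    # placeholder IP of a reachable TLS server
CONTROL_NAME = "example.com"        # name not expected to be filtered
TEST_NAME = "blocked-site.example"  # domain suspected of being SNI-filtered


def handshake_with_sni(server_ip: str, sni: str) -> str:
    """Attempt a TLS handshake, presenting `sni` in the (unencrypted) SNI field."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False      # we only care whether the handshake completes
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((server_ip, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=sni):
                return "handshake completed"
    except (ConnectionResetError, socket.timeout, ssl.SSLError, OSError) as exc:
        return f"handshake failed: {exc!r}"


print("control:", handshake_with_sni(CONTROL_SERVER, CONTROL_NAME))
print("test:   ", handshake_with_sni(CONTROL_SERVER, TEST_NAME))
```

Because both handshakes target the same server, a failure only for the test name points to on-path interference rather than a problem with the destination itself.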
Encrypted Client Hello (or ECH) is a new internet protocol that has been under development since 2018 at the Internet Engineering Task Force (IETF) and is now being tested for a small percentage of internet users before a wider rollout. It seeks to address this privacy limitation by encrypting the SNI information that leaks the names of visited websites to internet intermediaries. The ECH protocol significantly raises the bar for censors – the SNI is the last bit of unencrypted metadata in internet connections that censors can reliably use to detect which websites an internet user is visiting. After this protocol is deployed, censors will find it harder to block websites by interfering with network connections and will be forced to utilise blocking methods such as website fingerprinting and man-in-the-middle attacks that are either expensive and less accurate, or unfeasible in most cases.
We have been tracking the development of this privacy enhancement. To assist the successful deployment of the ECH protocol, we contributed a new censorship test to the Open Observatory for Network Interference (OONI) late last year. The new test attempts to connect to websites using the ECH protocol and records any interference from censors to the connection. As censors in some countries were found blocking a previous version of the protocol entirely, this test gives important early feedback to the protocol developers on whether censors are able to detect and block the protocol.
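The OONI test itself is more involved, but a rough first step in any such measurement is simply checking whether a website advertises an ECH configuration at all, which is published in the "ech" parameter of its DNS HTTPS record. The sketch below, which uses the dnspython library and a domain given only as an example (its configuration may change), illustrates that check; it is not the OONI test.

```python
import dns.exception
import dns.resolver  # dnspython >= 2.1 for HTTPS (SVCB) record support


def advertises_ech(domain: str) -> bool:
    """Return True if the domain's HTTPS DNS record carries an ECH configuration."""
    try:
        answers = dns.resolver.resolve(domain, "HTTPS")
    except dns.exception.DNSException:
        return False
    # The textual form of an SVCB/HTTPS record lists its parameters, e.g. "ech=...".
    return any("ech=" in rdata.to_text() for rdata in answers)


# Example domain that has been used to test ECH deployment; this may change over time.
print(advertises_ech("crypto.cloudflare.com"))
```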
We conducted ECH tests during the first week of September 2023 from four popular Indian ISPs, namely Airtel, Atria Convergence Technologies (ACT), Reliance Jio, and Vodafone Idea, which account for around 95% of the Indian internet subscriber base. The results indicated that ECH connections to a popular website were successful and are not currently being blocked. This was the expected result, as the protocol is still under development. We will continue to monitor for interference from censors closer to the time of completion of the protocol to ensure that this privacy enhancing protocol is successfully deployed.
Digital Delivery and Data System for Farmer Income Support
Executive Summary
This study provides an in-depth analysis of two direct cash transfer schemes in India – Krushak Assistance for Livelihood and Income Augmentation (KALIA) and Pradhan Mantri Kisan Samman Nidhi (PM-KISAN) – which aim to provide income support to farmers. The paper examines the role of data systems in the delivery and transfer of funds to the beneficiaries of these schemes, and analyses their technological framework and processes.
We find that the use of digital technologies, such as direct benefit transfer (DBT) systems, can improve efficiency and ensure the timely transfer of funds. However, we observe that the technology-only system is not designed with the last-mile beneficiaries in mind; these people not only have minimal or no digital literacy but also face a lack of technological infrastructure, including internet connectivity, needed to access a system that is largely digital.
Necessary processes need to be implemented, and on-the-ground personnel strengthened, within the existing system to promptly address the grievances of farmers and other challenges.
This study critically analyses the direct cash transfer scheme and its impact on the beneficiaries. We find that despite the benefits of direct benefit transfer (DBT) systems, there have been many instances of failures, such as the exclusion of several eligible households from the database.
The study also looks at gender as one of the components shaping the impact of digitisation on beneficiaries. We also identify infrastructural and policy constraints, in sync with the technological framework adopted and implemented, that impact the implementation of digital systems for the delivery of welfare. These include a lack of reliable internet connectivity in rural areas and low digital literacy among farmers. We analyse policy frameworks at the central and state levels and find discrepancies between the discourse of these schemes and their implementation on the ground.
We conclude the study by discussing the implications of datafication, the process of collecting, analysing, and managing data, through the lens of data justice. Datafication can play a crucial role in improving the efficiency and transparency of income support schemes for farmers. However, it is important to ensure that the interests of primary beneficiaries are considered: the system should work as an enabling, not a disabling, factor. In many instances this does not appear to be the case, since the current system does not give primacy to the interests of farmers. We offer recommendations for policymakers and other stakeholders to strengthen these schemes and improve the welfare of farmers and end users.
DoT’s order to trace server IP addresses will lead to unintended censorship
This post was reviewed and edited by Isha Suri and Nishant Shankar.
In December 2023, the Department of Telecommunications (DoT) issued instructions to internet service providers (ISPs) to maintain and share a list of “customer owned” IP addresses that host internet services through Indian ISPs so that they can be immediately traced in case “they are required to be blocked as per orders of [the court], etc”.
For the purposes of the notification, tracing customer-owned IP addresses implies identifying the network location of a subset of web services that possess their own IP addresses, as opposed to renting them from the ISP. These web services purchase IP Transit from Indian ISPs in order to connect their servers to the internet. In such cases, it is not immediately apparent which ISP routes to a particular IP address, requiring some amount of manual tracing to locate the host and immediately cut off access to the service. The order notes that “It has been observed that many times it is time consuming to trace location of such servers specially in case the IP address of servers is customer owned and not allocated by the Licensed Internet Service Provider”.
This indicates that, not only is the DoT blocking access to web services based on their IP addresses, but is doing so often enough for manual tracing of IP addresses to be a time consuming process for them.
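The "tracing" the order refers to amounts, in large part, to finding which network announces a given IP address, and a first step in that lookup can be automated using public routing data; identifying the Indian ISP providing transit may then require examining BGP paths. As a rough illustration (not necessarily how the DoT or ISPs perform the lookup, and the queried address is a placeholder), the sketch below asks Team Cymru's public IP-to-ASN whois service for the origin AS of an address.

```python
import socket


def origin_asn(ip: str) -> str:
    """Look up the origin AS for an IP using Team Cymru's whois service (port 43)."""
    query = f" -v {ip}\r\n"  # verbose, single-IP query
    with socket.create_connection(("whois.cymru.com", 43), timeout=10) as sock:
        sock.sendall(query.encode())
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    # Response: header line, then "AS | IP | BGP Prefix | CC | Registry | Allocated | AS Name".
    lines = response.decode(errors="replace").strip().splitlines()
    return lines[-1] if len(lines) > 1 else "no data"


print(origin_asn("203.0.113.7"))  # placeholder (TEST-NET) address
```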
While our legal framework allows courts and the government to issue content takedown orders, it is well documented that blocking web services based on their IP addresses is ineffectual and disruptive. An explainer on content blocking by the Internet Society notes, “Generally, IP blocking is a poor filtering technique that is not very effective, is difficult to maintain effectively, has a high level of unintended additional blockage, and is easily evaded by publishers who move content to new servers (with new IP addresses)”. The practice of virtual hosting is very common on the internet, which entails that a single web service can span multiple IP addresses and a single IP address can be shared by hundreds, or even thousands, of web services. Blocking access to a particular IP address can cause unrelated web services to fail in subtle and unpredictable ways, leading to collateral censorship. For example, a 2022 Austrian court order to block 11 IP addresses associated with 14 websites that engaged in copyright infringement rendered thousands of unrelated websites inaccessible.
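The extent of this address sharing is easy to observe directly: resolving a set of hostnames and grouping them by the address they resolve to shows unrelated sites sitting behind the same IP. The sketch below illustrates the observation; the hostnames are placeholders, and this is not a measurement of any specific blocking order.

```python
import socket
from collections import defaultdict

# Placeholder hostnames; in practice these might be unrelated sites on the same CDN.
hostnames = ["site-a.example", "site-b.example", "site-c.example"]

ip_to_hosts = defaultdict(list)
for host in hostnames:
    try:
        ip_to_hosts[socket.gethostbyname(host)].append(host)
    except socket.gaierror:
        pass  # hostname did not resolve

for ip, hosts in ip_to_hosts.items():
    if len(hosts) > 1:
        print(f"{ip} is shared by {', '.join(hosts)}; blocking it would affect all of them")
```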
The unintended effects of IP blocking have also been observed in practice in India. In 2021, US-based OneSignal Inc. approached the Delhi High Court challenging the blockage of one of its IP addresses by ISPs in India. With OneSignal being an online marketing company, there did not appear to be any legitimate reason for it to be blocked. In response to the petition the Government said that they had already issued unblocking orders for the IP address. There have also been numerous reports by internet users of inexplicable blocking of innocuous websites hosted on content delivery networks (which are known to often share IP addresses between customers).
We urge the ISPs, government departments and courts issuing and implementing website blocking orders to refrain from utilising overly broad censorship mechanisms like IP blocking which can lead to failure of unrelated services on the internet.
Information Disorders and their Regulation
In the last few years, ‘fake news’ has garnered interest across the political spectrum, as affiliates of both the ruling party and its opposition have seemingly partaken in its proliferation. The COVID-19 pandemic added to this phenomenon, allowing for xenophobic, communal narratives, and false information about health-protective behaviour to flourish, all with potentially deadly effects. This report maps and analyses the government’s regulatory approach to information disorders in India and makes suggestions for how to respond to the issue.
In this study, we gathered information by scouring general search engines, legal databases, and crime statistics databases to cull out data on a) regulations, notifications, ordinances, judgments, tender documents, and any other legal and quasi-legal materials that have attempted to regulate ‘fake news’ in any format; and b) news reports and accounts of arrests made for allegedly spreading ‘fake news’. Analysing this data allows us to determine the flaws and scope for misuse in the existing system. It also gives us a sense of the challenges associated with regulating this increasingly complicated issue while trying to avoid the pitfalls of the present system.
Click to download the full report here.
Reconfiguring Data Governance: Insights from India and the EU
The workshop aimed to compare and assess lessons from data governance from India and the European Union, and to make recommendations on how to design fit-for-purpose institutions for governing data and AI in the European Union and India.
This policy paper collates key takeaways from the workshop by grounding them across three key themes: how we conceptualise data; how institutional mechanisms as well as community-centric mechanisms can work to empower individuals, and what notions of justice these embody; and finally a case study of enforcement of data governance in India to illustrate and evaluate the claims in the first two sections.
This report was a collaborative effort between researchers Siddharth Peter De Souza, Linnet Taylor, and Anushka Mittal at the Tilburg Institute for Law, Technology and Society (Netherlands), Swati Punia, Sristhti Joshi, and Jhalak M. Kakkar at the Centre for Communication Governance at the National Law University Delhi (India) and Isha Suri, and Arindrajit Basu at the Centre for Internet & Society, India.
Click to download the report
India’s parental control directive and the need to improve stalkerware detection
This post was reviewed and edited by Amrita Sengupta.
Stalkerware is a form of surveillance targeted primarily at partners, employees and children in abusive relationships. These are software tools that enable abusers to spy on a person’s mobile device, allowing them to remotely access all data on the device, including calls, messages, photos, location history, browsing history, app data, and more. Stalkerware apps run hidden in the background without the knowledge or consent of the person being surveilled.[1] Such applications are easily available online and can be installed by anyone with little technical know-how and physical access to the device.
News reports indicate that the Ministry of Electronics and Information Technology (MeitY) is supporting the development of an app called “SafeNet”[2] that allows parents to monitor activity and set content filters on children’s devices. Following a directive from the Prime Minister’s office to “incorporate parental controls in data usage” by July 2024, the Internet Service Providers Association of India (ISPAI) has suggested that the app should come preloaded on mobile phones and personal computers sold in the country. The Department of Telecom is also asking schools to raise awareness about such parental control solutions.[3][4]
The beta version of the app is available for Android devices on the Google Play Store and advertises a range of functionalities including location access, monitoring website and app usage, call and SMS logs, screen time management and content filtering. The content filtering functionality warrants a separate analysis and this post will only focus on the surveillance capabilities of this app.
Applications like Safenet, that do not attempt to hide themselves and claim to operate with the knowledge of the person being surveilled, are sometimes referred to as “watchware”.[5] However, for all practical purposes, these apps are indistinguishable from stalkerware. They possess the same surveillance capabilities and can be deployed in the exact same ways. Such apps sometimes incorporate safeguards to notify users that their device is being monitored. These include persistent notifications on the device’s status bar or a visible app icon on the device’s home screen. However, such safeguards can be circumvented with little effort. The notifications can simply be turned off on some devices and there are third-party Android tools that allow app icons and notifications to be hidden from the device user, allowing watchware to be repurposed as stalkerware and operate secretly on a device. This leaves very little room for distinction between stalkerware and watchware apps.[6] In fact, the developers of stalkerware apps often advertise their tools as watchware, instructing users to only use them for legitimate purposes.
Even in cases where stalkerware applications are used in line with their stated purpose of monitoring minors’ internet usage, the effectiveness of a surveillance-centric approach is suspect. Our previous work on children’s privacy has questioned the treatment of all minors under the age of 18 as a homogenous group, arguing for a distinction between the internet usage of a 5-year-old child and a 17-year-old teenager. We argue that educating and empowering children to identify and report online harms is more effective than attempts to surveil them.[7][8] Most smartphones already come with options to enact parental controls on screen time and application usage[9][10], and the need for third-party applications with surveillance capabilities is not justified.
Studies and news reports show the increasing role of technology in intimate partner violence (IPV).[11][12] Interviews with IPV survivors and support professionals indicate an interplay of socio-technical factors, showing that abusers leverage the intimate nature of such relationships to gain access to accounts and devices to exert control over the victim. They also indicate the prevalence of “dual-use” apps such as child-monitoring and anti-theft apps that are repurposed by abusers to track victims.[13]
There is some data available that indicates the use of stalkerware apps in India. Kaspersky anti-virus’ annual State of Stalkerware reports consistently place India among the top four countries with the highest number of infections detected by its product, with a few thousand infections reported each year between 2020 and 2023.[14][15][16][17] TechCrunch’s Spyware Lookup Tool, which compiles information from data leaks from more than nine stalkerware apps to notify victims, also identifies India as a hotspot for infections.[18] Avast, another antivirus provider, reported a 20% rise in the use of stalkerware apps during COVID-19 lockdowns.[19] The high incidence of intimate partner violence in India, with the National Family Health Survey reporting that about a third of all married women aged 18–49 years have experienced spousal violence [20], also increases the risk of digitally-mediated abuse.
Survivors of digitally-mediated abuse often require specialised support in handling such cases to avoid alerting abusers and potential escalations. As part of our ongoing work on countering digital surveillance, we conducted an analysis of seven stalkerware applications, including two that are based in India, to understand and improve how survivors and support professionals can detect their presence on devices.
In some cases, where it is safe to operate the device, antivirus solutions can be of use. Antivirus tools can often identify the presence of stalkerware and watchware on a device, categorising them as a type of malware. We measured how effective various commercial antivirus solutions are at detecting stalkerware applications. Our results, which are detailed in the Appendix, indicate reasonably good coverage, with six out of the seven apps being flagged as malicious by various antivirus solutions. We found that SafeNet, the newest app on the list, was not detected by any antivirus solution. We also compared the detection results with a similar study conducted in 2019 [21] and found that some newer versions of previously known apps saw lower rates of detection. This indicates that antivirus solutions need to analyse new apps and newer versions of apps more frequently to improve coverage and understand how they are able to evade detection.
In cases where the device cannot be operated safely, support workers use specialised forensic tools such as the Mobile Verification Toolkit [22] and Tinycheck [23], which can be used to analyse devices without modifying them. We conducted malware analysis on the stalkerware apps to document the traces they leave on devices and submitted them to an online repository of indicators of compromise (IOCs).[24] These indicators are incorporated in detection tools used by experts to detect stalkerware infections.
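For readers who want a sense of how such indicators are used in practice, the sketch below shows one way observations from a device could be matched against an IOC list. The file layout and input files used here are illustrative assumptions for the purpose of this sketch, not the actual schema of the stalkerware-indicators repository, which should be consulted directly.

```python
# Minimal sketch: matching observations from a device against a list of
# indicators of compromise (IOCs). The IOC file layout shown here is a
# simplified, assumed structure, not the schema used by the
# stalkerware-indicators repository.
import json

# Assumed IOC file: {"apps": [{"name": "...", "packages": [...], "domains": [...]}]}
def load_iocs(path):
    with open(path) as f:
        return json.load(f)["apps"]

def match_observations(iocs, installed_packages, contacted_domains):
    """Return IOC entries whose package names or C2 domains were observed on the device."""
    hits = []
    for app in iocs:
        pkg_hits = set(app.get("packages", [])) & set(installed_packages)
        dom_hits = set(app.get("domains", [])) & set(contacted_domains)
        if pkg_hits or dom_hits:
            hits.append({"name": app["name"],
                         "packages": sorted(pkg_hits),
                         "domains": sorted(dom_hits)})
    return hits

if __name__ == "__main__":
    iocs = load_iocs("iocs.json")                              # hypothetical local IOC copy
    packages = open("installed_packages.txt").read().split()   # e.g. output of `adb shell pm list packages`
    domains = open("contacted_domains.txt").read().split()     # e.g. domains seen in a network capture
    for hit in match_observations(iocs, packages, domains):
        print(hit)
```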
Despite efforts to support survivors and stop the spread of stalkerware applications, the use of technology in abusive relationships continues to grow.[25] Making a surveillance tool like Safenet available for free, publicising it for widespread use, and potentially preloading it on mobile devices and personal computers sold in the country, is an ill-conceived way to enact parental controls and will lead to an increase in digitally-mediated abuse. The government should immediately take this application out of the public domain and work on developing alternate child protection policies that are not rooted in distrust and surveillance.
If you are affected by stalkerware there are some resources available here:
https://stopstalkerware.org/information-for-survivors/
https://stopstalkerware.org/resources/
Appendix
Our analysis covered two apps based in India, SafeNet and OneMonitar, and five other apps: Hoverwatch, TheTruthSpy, Cerberus, mSpy and FlexiSPY. All samples were obtained directly from the developers’ websites. The details of the samples are as follows:
Name | File name | Version | Date sample was obtained | SHA-1 Hash
SafeNet | Safenet_Child.apk | 0.15 | 16th March, 2024 | d97a19dc2212112353ebd84299d49ccfe8869454
OneMonitar | ss-kids.apk | 5.1.9 | 19th March, 2024 | 519e68ab75cd77ffb95d905c2fe0447af0c05bb2
Hoverwatch | setup-p9a8.apk | 7.4.360 | 5th March, 2024 | 50bae562553d990ce3c364dc1ecf44b44f6af633
TheTruthSpy | TheTruthSpy.apk | 23.24 | 5th March, 2024 | 8867ac8e2bce3223323f38bd889e468be7740eab
Cerberus | Cerberus_disguised.apk | 3.7.9 | 4th March, 2024 | 75ff89327503374358f8ea146cfa9054db09b7cb
mSpy | bt.apk | 7.6.0.1 | 21st March, 2024 | f01f8964242f328e0bb507508015a379dba84c07
FlexiSPY | 5009_5.2.2_1361.apk | 5.2.2 | 26th March, 2024 | 5092ece94efdc2f76857101fe9f47ac855fb7a34
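The SHA-1 hashes above allow anyone who obtains the same samples to confirm that they are analysing identical files. A minimal sketch of that check is below; the file name passed on the command line is simply whichever local copy of a sample is being verified.

```python
# Minimal sketch: compute the SHA-1 hash of a downloaded APK sample so it can
# be compared against the values listed in the table above.
import hashlib
import sys

def sha1_of_file(path, chunk_size=1 << 20):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Usage: python sha1sum.py Safenet_Child.apk
    print(sha1_of_file(sys.argv[1]))
```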
We analysed the network activity of these apps to check which web servers they send their data to. With the increasing popularity of Content Delivery Networks (CDNs) and cloud infrastructure, these results may not always give an accurate idea of where these apps originate, but they can sometimes offer useful information:
Name | Domain | IP Address[26] | Country | ASN Name and Number |
SafeNet | safenet.family | 103.10.24.124 | India | Amrita Vishwa Vidyapeetham, AS58703 |
OneMonitar | onemonitar.com | 3.15.113.141 | United States | Amazon.com, Inc., AS16509 |
OneMonitar | api.cp.onemonitar.com | 3.23.25.254 | United States | Amazon.com, Inc., AS16509 |
Hoverwatch | hoverwatch.com | 104.236.73.120 | United States | DigitalOcean, LLC, AS14061 |
Hoverwatch | a.syncvch.com | 158.69.24.236 | Canada | OVH SAS, AS16276 |
TheTruthSpy | thetruthspy.com | 172.67.174.162 | United States | Cloudflare, Inc., AS13335 |
TheTruthSpy | protocol-a946.thetruthspy.com | 176.123.5.22 | Moldova | ALEXHOST SRL, AS200019 |
Cerberus | cerberusapp.com | 104.26.9.137 | United States | Cloudflare, Inc., AS13335 |
mSpy | mspy.com | 104.22.76.136 | United States | Cloudflare, Inc., AS13335 |
mSpy | mobile-gw.thd.cc | 104.26.4.141 | United States | Cloudflare, Inc., AS13335 |
FlexiSPY | flexispy.com | 104.26.9.173 | United States | Cloudflare, Inc., AS13335 |
FlexiSPY | djp.bz | 119.8.35.235 | Hong Kong | HUAWEI CLOUDS, AS136907 |
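The country and ASN columns above were derived from ipinfo.io (see endnote 26). The sketch below shows a minimal version of that lookup; it assumes only that the domain still resolves and that the free ipinfo.io JSON endpoint is reachable, and the results will drift over time as apps move between CDNs and hosting providers.

```python
# Minimal sketch: resolve a domain and look up the IP's country and network
# operator via ipinfo.io, the service cited in endnote 26.
import json
import socket
import urllib.request

def lookup(domain):
    ip = socket.gethostbyname(domain)
    with urllib.request.urlopen(f"https://ipinfo.io/{ip}/json") as resp:
        info = json.load(resp)
    # "org" typically combines the AS number and AS name, e.g. "AS13335 Cloudflare, Inc."
    return {"domain": domain, "ip": ip,
            "country": info.get("country"), "org": info.get("org")}

if __name__ == "__main__":
    for d in ["safenet.family", "hoverwatch.com", "flexispy.com"]:
        print(lookup(d))
```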
To understand whether commercial antivirus solutions are able to categorise stalkerware apps as malicious, we used a tool called VirusTotal, which aggregates checks from over 70 antivirus scanners.[27] We uploaded hashes (i.e. unique signatures) of each sample to VirusTotal and recorded the total number of detections by various antivirus solutions. We compared our results with a 2019 study by Citizen Lab [28] that looked at a similar set of apps, to identify changes in detection rates over time.
Product | VirusTotal Detections (March 2024) | VirusTotal Detections (January 2019) (By Citizen Lab)
SafeNet [29] | 0/67 (0%) | N/A
OneMonitar [30] | 17/65 (26.1%) | N/A
Hoverwatch | 24/58 (41.4%) | 22/59 (37.3%)
TheTruthSpy | 38/66 (57.6%) | 0
Cerberus | 8/62 (12.9%) | 6/63 (9.5%)
mSpy | 8/63 (12.7%) | 20/63 (31.7%)
FlexiSPY [31] | 18/66 (27.3%) | 34/63 (54.0%)
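Hash lookups of this kind can be scripted against VirusTotal’s public API. The sketch below is a minimal illustration, assuming an API key is available in a VT_API_KEY environment variable; samples that VirusTotal has never seen (as noted in endnotes 29–31) have to be uploaded separately before a lookup returns results.

```python
# Minimal sketch: query VirusTotal for an already-known sample by its hash and
# report how many engines flag it as malicious.
import json
import os
import urllib.request

API = "https://www.virustotal.com/api/v3/files/{}"

def detection_stats(file_hash):
    req = urllib.request.Request(API.format(file_hash),
                                 headers={"x-apikey": os.environ["VT_API_KEY"]})
    with urllib.request.urlopen(req) as resp:
        attrs = json.load(resp)["data"]["attributes"]
    return attrs["last_analysis_stats"]

if __name__ == "__main__":
    # Hoverwatch sample hash from the table above
    stats = detection_stats("50bae562553d990ce3c364dc1ecf44b44f6af633")
    print(f"malicious: {stats.get('malicious', 0)}, undetected: {stats.get('undetected', 0)}")
```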
We also checked whether the samples were detected by Google’s Play Protect service [32], a malware detection tool built into Android devices that use Google’s Play Store. These results were also compared with similar checks performed by Citizen Lab in 2019.
Product | Detected by Play Protect (March 2024) | Detected by Play Protect (January 2019) (By Citizen Lab)
SafeNet | No | N/A
OneMonitar | Yes | N/A
Hoverwatch | Yes | Yes
TheTruthSpy | Yes | Yes
Cerberus | Yes | No
mSpy | Yes | Yes
FlexiSPY | Yes | Yes
Endnotes
1. Definition adapted from Coalition Against Stalkerware, https://stopstalkerware.org/
2. https://web.archive.org/web/20240316060649/https://safenet.family/
5. https://github.com/AssoEchap/stalkerware-indicators/blob/master/README.md
6. https://cybernews.com/privacy/difference-between-parenting-apps-and-stalkerware/
7. https://timesofindia.indiatimes.com/blogs/voices/shepherding-children-in-the-digital-age/
8. https://blog.avast.com/stalkerware-and-children-avast
9. https://safety.google/families/parental-supervision/
10. https://support.apple.com/en-in/105121
11. R. Chatterjee et al., "The Spyware Used in Intimate Partner Violence," 2018 IEEE Symposium on Security and Privacy (SP), 2018, pp. 441-458.
13. D. Freed et al., "Digital technologies and intimate partner violence: A qualitative analysis with multiple stakeholders", PACM: Human-Computer Interaction: Computer-Supported Cooperative Work and Social Computing (CSCW), vol. 1, no. 2, 2017.
18. https://techcrunch.com/pages/thetruthspy-investigation/
19. https://www.thenewsminute.com/atom/avast-finds-20-rise-use-spying-and-stalkerware-apps-india-during-lockdown-129155
20. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10071919/
21. https://citizenlab.ca/docs/stalkerware-holistic.pdf
22. https://docs.mvt.re/en/latest/
23. https://tiny-check.com/
24. https://github.com/AssoEchap/stalkerware-indicators/pull/125
25. https://stopstalkerware.org/2023/05/15/report-shows-stalkerware-is-not-declining/
26. IP information provided by https://ipinfo.io/
27. https://docs.virustotal.com/docs/how-it-works
28. https://citizenlab.ca/docs/stalkerware-holistic.pdf
29. Sample was not known to VirusTotal, it was uploaded at the time of analysis
30. Sample was not known to VirusTotal, it was uploaded at the time of analysis
31. Sample was not known to VirusTotal, it was uploaded at the time of analysis
Consultation on Gendered Information Disorder in India
The event was convened by Amrita Sengupta (Research and Programme Lead, CIS), Yesha Tshering Paul (Researcher, CIS), Bishakha Datta (Programme Lead, POV) and Prarthana Mitra (Project Anchor, POV).* Download the event report here.
The event brought together experts, researchers and grassroots activists from Maharashtra and across the country to discuss their experiences with information disorder, and the multifaceted challenges posed by misinformation, disinformation and malinformation targeting gender and sexual identities.
Understanding Information Disorders: The consultation commenced with a look at the wide spectrum of information disorder by Yesha Tshering Paul and Amrita Sengupta. Misinformation[1] was highlighted as false information disseminated unintentionally, such as inaccurate COVID cures that spread rapidly during the pandemic. In contrast, disinformation involves the intentional spread of false information to cause harm, exemplified by instances like deepfake pornography. A less recognised form, malinformation, involves the deliberate misuse of accurate information to cause harm, as seen in the misleading representation of regret rates among trans individuals who have undergone gender-affirming procedures. Yesha highlighted that definitions of these concepts often vary, and thus emphasised the importance of moving beyond definitions to centre user experiences of this phenomenon.
The central theme of this discussion was the concept of “gendered” information disorder, referring to the targeted dissemination of false or harmful online content based on gender and sexual identity. This form of digital misogyny intersects with other societal marginalisations, disproportionately affecting marginalised genders and sexualities. The session also emphasised the critical link between information disorders and gendered violence (both online and in real life). Such disorders perpetuate stereotypes and gender-based violence, and silence victims, fostering an environment that empowers perpetrators and undermines victims’ experiences.
Feminist Digital Infrastructure: Digital infrastructures shape our online spaces. Sneha PP (Senior Researcher, CIS) introduced the concept of feminist infrastructures as a potential solution that helps mediate discourse around gender, sexuality, and feminism in the digital realm. Participant discussions emphasised the need for accessible, inclusive, and design-conscious digital infrastructures that consider the intersectionality and systemic inequalities impacting content creation and dissemination. Strategies were discussed to address online gender-based violence and misinformation, focusing on survivor-centric approaches and leveraging technology for storytelling.
Gendered Financial Mis-/Dis-information: Garima Agrawal (Researcher, CIS) with inputs by Debarati Das (Co-Lead, Capacity Building at PoV) and Chhaya Rajput (Helpline Facilitator, Tech Sakhi) led the session by highlighting gender disparities in digital and financial literacy and access to digital devices and financial services in India, despite women constituting a higher percentage of new internet users. This makes marginalised users more vulnerable to financial scams. Drawing from the ongoing financial harms project at CIS, Garima spoke about the diverse manifestations of financial information disorders arising from misleading information that results in financial harm, ranging from financial influencers (and in some cases deepfakes of celebrities) endorsing platforms they do not use, to fake or unregulated loan and investment services deceiving users. Breakout groups of participants then analysed several case studies of real-life financial frauds that targeted women and the queer community to identify instances of misinformation, disinformation and malinformation. Emotional manipulation and the exploitation of trust were identified as key tactics used to deceive victims, with repercussions extending beyond monetary loss to emotional, verbal, and even sexual violence against these individuals.
Fact-Checking Fake News and Stories: The pervasive issue of fake news in India was discussed in depth, especially in the era of widespread social media usage. Only 41% of Indians trust the veracity of the information encountered online. Aishwarya Varma, who works at Webqoof (The Quint’s fact checking initiative) as a Fact Check Correspondent, led an informative session detailing the various accessible tools that can be used to fact-check and debunk false information. Participants engaged in hands-on activities by using their smartphones for reverse image searches, emphasising the importance of verifying images and their sources. Archiving was identified as another crucial aspect to preserve accurate information and debunk misinformation.
Gendered Health Mis-/Dis-information: This participant-led discussion highlighted structural gender biases in healthcare and limited knowledge about mental health and menstrual health as significant concerns, along with the discrimination and social stigma faced by the LGBTQ+ community in healthcare facilities. One participant brought up their difficulty accessing sensitive and non-judgmental healthcare, and the insensitivity and mockery faced by them and other trans individuals in healthcare facilities. Participants suggested the increased need for government-funded campaigns on sexual and reproductive health rights and menstrual health, and the importance of involving marginalised communities in healthcare related decision-making to bring about meaningful change.
Mis-/Dis-information around Sex, Sexuality, and Sexual Orientation: Paromita Vohra, Founder and Creative Director of Agents of Ishq—a multi-media project about sex, love and desire that uses various artistic mediums to create informational material and an inclusive, positive space for different expressions of sex and sexuality—led this session. She started with an examination of the term “disorder” and its historical implications, and highlighted how religion, law, medicine, and psychiatry had previously led to the classification of homosexuality as a “disorder”. The session delved into the misconceptions surrounding sex and sexuality in India, advocating for a broader understanding that goes beyond colonial knowledge systems and standardised sex education. She brought up the role of media in altering perspectives on factual events, and the need for more initiatives like Agents of Ishq to address the need for culturally sensitive and inclusive sexuality language and education that considers diverse experiences, emotions, and identities.
Artificial Intelligence and Mis-/Dis-information: Padmini Ray Murray, Founder of Design Beku—a collective that emerged from a desire to explore how technology and design can be decolonial, local, and ethical— talked about the role of AI in amplifying information disorder and its ethical considerations, stemming from its biases in language representation and content generation. Hindi and regional Indian languages remain significantly under-represented in comparison to English content, leading to skewed AI-generated content. Search results reflect the gendered biases in AI and further perpetuate existing stereotypes and reinforce societal biases. She highlighted the real-world impacts of AI on critical decision-making processes such as loan approvals, and the influence of AI on public opinion via media and social platforms. Participants expressed concerns about the ethical considerations of AI, and emphasised the need for responsible AI development, clear policies, and collaborative efforts between tech experts, policymakers, and the public.
* The Centre for Internet and Society undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. Point of View focuses on sexuality, disability and technology to empower women and other marginalised genders to shape and inhabit digital spaces.
[1] Claire Wardle, Understanding Information Disorder (2020). https://firstdraftnews.org/long-form-article/understanding-information-disorder/.
Comments to the Draft Digital Competition Bill, 2024
We would like to thank the Ministry of Corporate Affairs for soliciting public comments on this important legislation and are grateful for this opportunity.
At the outset, CIS affirms the Committee’s approach of transitioning from a predominantly ex-post to an ex-ante approach for regulating competition in digital markets. The Committee’s assessment that the ex-post regime is too time-consuming for the digital domain is substantiated by frequent and expensive delays in antitrust disputes, a fact that has also recently drawn the attention of the Ministry of Corporate Affairs. The ex-post regime has been found to be too time-consuming in other jurisdictions as well, as a consequence of which many other countries are also moving towards an ex-ante regime for digital markets. Adopting a similar approach allows India to be in harmony with both developing and developed countries, making the regulation of global competition more consistent and efficient. In fact, “international cooperation between competition authorities” and “greater coherence between regulatory frameworks” are key to facilitating global investigations and lowering the cost of doing business.
Moreover, by adopting a principles-based approach to designing the law’s obligations, the draft Bill also addresses the concern that ex-ante regulations, due to their prescriptive nature, tend to be sector-agnostic. The fact that these principles are based on the findings of the Parliamentary Standing Committee’s (PSC) Report on ‘Anti-Competitive Practices by Big Tech Companies’ lends them further evidentiary support. The draft DCB empowers the Commission to clarify the Obligations for different services, and also provides CCI with the flexibility to undertake independent consultations to accommodate varying contexts and the needs of different core digital services. We do, however, have specific comments regarding the implementation of some of these provisions, which are elaborated in the accompanying document.
We would also like to emphasise that adequate enforcement of an ex-ante approach requires bolstering and strengthening regulatory capacity. Therefore, to minimise risks relating to underenforcement as well as overenforcement, CCI, its Digital Markets and Data Unit (DMDU), and the Director General’s (DG) office will have to substantially increase their technical capacity. A comparison of CCI’s current strength with its global counterparts that have adopted or are in the process of adopting an ex-ante approach to competition regulation reveals a stark picture. For example, the European Union (EU) had over 870 people in its DG COMP unit in 2022, and its DG CONNECT unit is expected to hire another 100 people in 2024 alone. Similarly, the United Kingdom’s Competition and Markets Authority (CMA) has a permanent staff of 800+, the Japan Fair Trade Commission (JFTC) has about 400 officials just for regulating anti-competitive conduct, and South Korea’s KFTC has about 600 employees. In contrast, CCI and DG, combined, have a sanctioned strength of only 195 posts, of which 71 remain vacant. Bridging this capacity gap through frequent and high-quality recruitment is, therefore, the need of the hour. Most importantly, there is a need to create a culture of interdisciplinary coordination among legal, technical, and economic domains.
Moreover, as we come to rely on an increasingly digitised economy, most technology companies will work with everything from critical technology components, such as key infrastructure, algorithms, and Artificial Intelligence, to business models that are based on data collection and processing practices. Consequently, there will be a need to bolster CCI’s capacity in the technical domain by hiring and integrating new roles, including technologists, software and hardware engineers, product managers, UX designers, data scientists, investigative researchers, and subject matter experts dealing with new and emerging areas of technology. Therefore, we recommend that CCI ensure that the proposed DMDU has the requisite diversity of skills to effectively use existing tools for enforcement and is also able to keep pace with new and emerging technological developments.
Along with this overall observation of CCI's capacity, we have also submitted detailed comments on specific clauses of the draft DCB. These submissions are structured across the following six categories: i) Classification of Core Digital Services; ii) Designation of a Systemically Significant Digital Enterprise (SSDE) and Associate Digital Enterprise (ADE); iii) Obligations on SSDEs and ADEs; iv) Powers of the Commission to Conduct an Inquiry; v) Penalties and Appeals; and vi) Powers of the Central Government. In addition to these suggestions, the detailed comments and their summarised version focus on three important gaps in the draft DCB – limited representation from workers’ groups and MSMEs, exclusion of merger and acquisition (M&A) from the discussions, and lack of a formalised framework for interregulatory coordination.
For our full comments, click here
For a detailed summary of our comments, click here
A Guide to Navigating Your Digital Rights
The Digital Rights Guide gives practical guidance on the laws and procedures that affect internet freedoms. It covers the following topics:
- Internet Shutdowns
- Content Takedown
- Surveillance
- Device Seizure
The Digital Rights Guide can be viewed here.
Legal Advocacy Manual
Click to download the manual.
Draft Circular on Digital Lending – Transparency in Aggregation of Loan Products from Multiple Lenders
Edited and reviewed by Amrita Sengupta
The Centre for Internet and Society (CIS) is a non-profit organisation that undertakes interdisciplinary research on the internet and digital technologies from policy and academic perspectives. Through its diverse initiatives, CIS explores, intervenes in, and advances contemporary discourse and practices around the internet, technology and society in India, and elsewhere.
CIS is grateful for the opportunity to submit comments on the “Draft Circular on Digital Lending: Transparency in Aggregation of Loan Products from Multiple Lenders” to the Reserve Bank of India. Over the last twelve years, CIS has worked extensively on research around privacy, online safety, cross-border data flows, security, and innovation. We welcome the opportunity provided to comment on the guidelines, and we hope that the final guidelines will consider the interests of all stakeholders to ensure that they protect the privacy and digital rights of all consumers, including marginalised and vulnerable users, while encouraging innovation and improved service delivery in the fintech ecosystem.
Introduction
The draft circular on ‘Transparency in Aggregation of Loan Products from Multiple Lenders’ is a much needed and timely document that builds on the Guidelines on Digital Lending. Both documents have maintained the principles of customer centricity and transparency at their core. Reducing information asymmetry and deceptive patterns in the digital lending ecosystem is of utmost importance, given the adverse effects experienced by borrowers. Digital lending is one of the fastest-growing fintech segments in India,[1] having grown exponentially from nine billion U.S. dollars in 2012 to nearly 150 billion dollars by 2020, and is estimated to reach 515 billion USD by 2030.[2] At the same time, accessing digital credit through digital lending applications has been found to be associated with a high risk to financial and psychological health due to a host of practices that lead to overindebtedness.[3] These include post contract exploitation through hidden transaction fees, abusive debt collection practices, privacy violations and fluctuations in interest rates. Both illegal/fraudulent and licensed lending service providers have been employing aggressive marketing and debt collection tactics[4] that exacerbate the risks of all the above harms.[5] With additional safeguards in place, the guidelines can provide a suitable framework to ensure borrowers have the opportunity and information needed to make an informed decision while accessing intermediated credit, and reduce harmful financial and health related consequences.
In this submission, we seek to provide some comments on the broader issues the guidelines address. Our comments recommend additional safeguards, keeping in mind the gamut of services provided by lending service providers (LSPs). We frame our comments around two main concerns addressed by the draft guidelines: 1) reducing information asymmetry, and 2) market fairness. In addition, we share comments on a third concern that requires additional scrutiny: 3) data privacy and security.
Reducing Information Asymmetry
The guidelines aim to define responsibilities of LSPs in maintaining transparency to ensure borrowers are aware of the identity of the regulated entity (RE) providing the loan, and make informed decisions based on consistent information to weigh their options.
Comments: Guideline iii suggests that the digital view should include information that helps the borrower compare various loan offers. This includes “the name(s) of the regulated entity (RE) extending the loan offer, amount and tenor of loan, the Annual Percentage Rate (APR) and other key terms and conditions” alongside a link to the key facts statement (KFS). The earlier ‘Guidelines on Digital Lending’ specify that the APR should be an all-inclusive cost, including margin, credit costs, operating costs, verification charges, processing fees etc., excluding only penalties and late payment charges.
Recommendations: All users of digital lending services may not be aware that the APR is inclusive of all non-contingent charges. Requiring digital loan aggregators to provide messages/notifications boosting consumer awareness of regulations and their rights can help reduce violations. We also recommend that this information be made available in multiple languages so that a wide range of users can access it. Further, we recommend that LSPs be held accountable for adhering to an inclusive platform design that allows easy access to this information.
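To illustrate why an all-inclusive APR matters for comparison, the sketch below works through a hypothetical loan in which a processing fee pushes the effective APR well above the advertised interest rate. All figures are illustrative assumptions and are not drawn from any actual loan product.

```python
# Illustrative sketch (numbers are hypothetical): how a processing fee raises
# the effective, all-inclusive APR above the advertised interest rate.

def emi(principal, annual_rate, months):
    """Equal monthly instalment on the stated principal at the advertised rate."""
    r = annual_rate / 12
    return principal * r * (1 + r) ** months / ((1 + r) ** months - 1)

def apr_with_fee(principal, annual_rate, months, fee):
    """Annualised internal rate of return on the borrower's actual cash flows."""
    pay = emi(principal, annual_rate, months)
    net_disbursed = principal - fee          # borrower actually receives principal minus the fee

    def npv(monthly_rate):
        return sum(pay / (1 + monthly_rate) ** t for t in range(1, months + 1)) - net_disbursed

    lo, hi = 0.0, 1.0                        # bisection on the monthly rate
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return mid * 12

if __name__ == "__main__":
    # 50,000 loan, 24% advertised annual rate, 12 months, 2,000 processing fee
    print(f"Advertised rate: 24.0%, all-inclusive APR: {apr_with_fee(50_000, 0.24, 12, 2_000):.1%}")
```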
Market Fairness
Guidelines ii-iv also serve to outline practices to curb anti-competitive placement of digital loan products through regulating use of dark patterns and increasing transparency.
Comments: Section ii mandates that LSPs must disclose the approach utilised to determine the willingness of lenders to offer a loan. Whether this estimation includes factors associated with the customer profile like age, income and occupation etc. should be clearly disclosed as well.
Recommendations: To improve transparency, loan aggregators may also be asked to share an overall rate of rejection or approval within the digital view, alongside the predictive estimate of the lender’s willingness.
While the ‘Guidelines on Digital Lending’[6] clearly state that LSPs must collect any fees from the REs and not from the borrowers, further clarification should be provided on whether LSPs can charge fees for the loan aggregation service itself, i.e. for providing information about available loan products.
Privacy and Data Security
The earlier ‘Guidelines on Digital Lending’[7] require LSPs to only store minimal contact data regarding the customer and provide consumers the ability to seek their data being removed, i.e. the right to be forgotten by the provider, once they are no longer seeking their services. Personal financial information is not to be stored by LSPs. It is the responsibility of REs to ensure that LSPs do not store extraneous customer data, and to stipulate clear policy guidelines regarding the storage and use of customer data.
Comments: It is important to ascertain the nature of anonymised and personally identifiable customer data that may be currently utilised by LSPs or processed on their platforms, in the course of providing a range of services within the digital credit ecosystem to borrowers and lenders.
Certain functions that loan aggregators perform may expand their role beyond a simple intermediary. LSPs also provide services assessing borrower’s creditworthiness, payment services, and agent-led debt collection services for lenders. Some LSPs may be involved in more than one stage of the loan process which may make them privy to additional personal information about a borrower. There may be cases in which a consumer registers on an LSP’s platform without going ahead with any loan applications. It is unclear who is responsible for maintaining data security and privacy or providing grievance redressal at these times.
Section ii allows them to provide estimates of lenders’ willingness to borrowers. Some LSPs connecting REs with borrowers may also provide services using alternative and even non-financial data to assess the creditworthiness of thin-file credit seekers. Whether there are any restrictions on the use of AI tools in these processes, and the handling of customer data should also be clarified or limited. The right to be forgotten may be difficult to enforce with the use of certain machine learning and other artificial intelligence models. As innovation in credit scoring mechanisms continues, it is also important to bring such financial service providers under the ambit of guidelines for digital lending platforms.
Recommendations: The burden of maintaining privacy and data security should fall on aggregators of loan products in addition to regulated entities. Guidelines should be included limiting the use of PII (and PFI, if applicable) for purposes other than connecting borrowers to a loan provider without consumer consent. Informed and explicit consumer consent should be sought for any additional purposes like marketing, market research, product development, cross-selling, and delivery of other financial and commercial services, including providing access to other loan products in the future.
Often consumers are required to register on a platform by providing contact details and other personal information. An initial digital view of loan products available could be displayed for all users without registering to help borrowers determine whether they would like to register for the LSP’s services. This can help reduce the amount of consumer contact information and other personally identifiable information (PII) that is collected by LSPs.
Emerging Risks
Emerging consumer risks within the digital lending ecosystem expose borrowers to over-indebtedness as well as risks arising from fraud, data misuse, lack of transparency and inadequate redress mechanisms.[8] These draft guidelines clearly lay out mechanisms to reduce risks arising from a lack of transparency. Similar efforts are needed to reduce data misuse (for instance, by delimiting the period for which customer data may be retained and used) and the risk of over-indebtedness.
One of the biggest sources of consumer risk has been at the debt recovery stage. Aggressive debt collection practices have had deleterious effects on consumers’ mental health and social standing, and have even led some to consider suicide. Extant guidelines assume a recovery agent will be contacting the consumer.[9] LSPs may also set up automated payments and use digital communication like app notifications, messages and automated calls in the debt recovery process. The impact of repeated notifications and automated debt payments also needs to be considered in future iterations of guidelines addressing risk in the digital lending ecosystem.
[1] “Funding distribution of FinTech companies in India in second quarter of 2023, by segment”, Statista, accessed 30 May 2024, https://www.statista.com/statistics/1241994/india-fintech-companies-share-by-segment/
[2] Anushka Sengupta, “India’s digital lending market likely to grow $515 bn by 2030: Report”, Economic Times, 17 June 2023, https://bfsi.economictimes.indiatimes.com/news/fintech/indias-digital-lending-market-likely-to-grow-515-bn-by-2030-report/101057337
[3] “Mobile Instant Credit: Impacts, Challenges, and Lessons for Consumer Protection”, Center for Effective Global Action, September 2023, https://cega.berkeley.edu/wp-content/uploads/2023/09/FSP_Digital_Credit_Research_test.pdf
[4] Jinit Parmar, “Ruthless Recovery Agents, Aggressive Loan Outreach Put the Spotlight on Bajaj Finance”, Moneycontrol, 18 April 2023, https://www.moneycontrol.com/news/business/ruthless-recovery-agents-aggressive-loan-outreach-put-spotlight-on-bajaj-finance-10423961.html
[5] Prudhviraj Rupavath, “Suicide Deaths Mount after Unregulated Lending Apps Resort to Exploitative Recovery Practices”, Newsclick, 26 December 2020 https://www.newsclick.in/Suicide-Deaths-Mount-Unregulated-Lending-Apps-Resort-Exploitative-Recovery-Practices
Priti Gupta and Ben Morris, “India's loan scams leave victims scared for their lives”, BBC, 7 June 2022, https://www.bbc.com/news/business-61564038
[6] Section 4.1, Guidelines on Digital Lending, 2022.
[7] Section 11, Guidelines on Digital Lending, 2022.
[8] “The Evolution of the Nature and Scale of DFS Consumer Risks: A Review of Evidence”, CGAP, February 2022, https://www.cgap.org/sites/default/files/publications/slidedeck/2022_02_Slide_Deck_DFS_Consumer_Risks.pdf
[9] Section 2, Outsourcing of Financial Services - Responsibilities of regulated entities employing Recovery Agents, 2022.
Online Censorship: Perspectives From Content Creators and Comparative Law on Section 69A of the Information Technology Act
This paper was reviewed by Krishnesh Bapat and Torsha Sarkar.
Abstract: The Government of India has increasingly engaged in online censorship using powers in the Information Technology Act. The law lays out a procedure for online censorship that relies solely on the discretion of the executive. Using a constitutional and comparative legal analysis, we contend that the law has little to no oversight and lacks adequate due process for targets of censorship. Through semi-structured interviews with individuals whose content has been taken down by such orders, we shed light on experiences of content owners with government-authorised online censorship. We show that legal concerns about the lack of due process are confirmed empirically, and content owners are rarely afforded an opportunity for a hearing before they are censored. The law enabling online censorship (and its implementation) may be considered unconstitutional in how it inhibits avenues of remedy for targets of censorship or for the general public. We also show that online content blocking has far-reaching, chilling effects on the freedom of expression.
The paper is available on SSRN, and can also be downloaded here.
AI for Healthcare: Understanding Data Supply Chain and Auditability in India
Read our full report here.
The use of artificial intelligence (AI) technologies constitutes a significant development in the Indian healthcare sector, with industry and government actors showing keen interest in designing and deploying these technologies. Even as key stakeholders explore ways to incorporate AI systems into their products and workflows, a growing debate on the accessibility, success, and potential harms of these technologies continues, along with several concerns over their large-scale adoption. A recurring question in India and the world over is whether these technologies serve a wider interest in public health. For example, the discourse on ethical and responsible AI in the context of emerging technologies and their impact on marginalised populations, climate change, and labour practices has been especially contentious.
For the purposes of this study, we define AI in healthcare as the use of artificial intelligence and related technologies to support healthcare research and delivery. The use cases include assisted imaging and diagnosis, disease prediction, robotic surgery, automated patient monitoring, medical chatbots, hospital management, drug discovery, and epidemiology. The emergence of AI auditing mechanisms is an essential development in this context, with several stakeholders ranging from big-tech to smaller startups adopting various checks and balances while developing and deploying their products. While auditing as a practice is neither uniform nor widespread within healthcare or other sectors in India, it is one of the few available mechanisms that can act as guardrails in using AI systems.
Our primary research questions are as follows:
- What is the current data supply chain infrastructure for organisations operating in the healthcare ecosystem in India?
- What auditing practices, if any, are being followed by technology companies and healthcare institutions?
- What best practices can organisations based in India adopt to improve AI auditability?
This was a mixed methods study, comprising a review of available literature in the field, followed by quantitative and qualitative data collection through surveys and in-depth interviews. The findings from the study offer essential insights into the current use of AI in the healthcare sector, the operationalisation of the data supply chain, and policies and practices related to health data sourcing, collection, management, and use. It also discusses ethical and practical challenges related to privacy, data protection and informed consent, and the emerging role of auditing and other related practices in the field. Some of the key learnings related to the data supply chain and auditing include:
- Technology companies, medical institutions, and medical practitioners rely on an equal mix of proprietary and open sources of health data, and there is significant reliance on datasets from the Global North.
- Data quality checks are extant, but they are seen as an additional burden, with the removal of personally identifiable information being a priority during processing.
- Collaboration between medical practitioners and AI developers remains limited, as does feedback between the users and developers of these technologies.
- There is a heavy reliance on external vendors to develop AI models, with many models replicated from existing systems in the Global North.
- Healthcare professionals are hesitant to integrate AI systems into their workflows, with a significant gap stemming from a lack of training and infrastructure to integrate these systems successfully.
- The understanding and application of audits are not uniform across the sector, with many stakeholders prioritising more mainstream and intersectional concepts such as data privacy and security in their scope.
Based on these findings, this report offers a set of recommendations addressed to different stakeholders such as healthcare professionals and institutions, AI developers, technology companies, startups, academia, and civil society groups working in health and social welfare. These include:
- Improve data management across the AI data supply chain
Adopt standardised data-sharing policies. This would entail building a standardised policy that adopts an intersectional approach to include all stakeholders and areas where data is collected to ensure their participation in the process. This would also require robust feedback loops and better collaboration between the users, developers, and implementers of the policy (medical professionals and institutions), and technologists working in AI and healthcare.
Emphasise not just data quantity but also data quality. Given that the limited quantity and quality of Indian healthcare datasets present significant challenges, institutions engaged in data collection must consider their interoperability to make them available to diverse stakeholders and ensure their security. This would include recruiting additional support staff for digitisation to ensure accuracy and safety and maintain data quality.
- Streamline AI auditing as a form of governance
Standardise the practice of AI auditing. A certain level of standardisation in AI auditing would contribute to the growth and contextualisation of these practices in the Indian healthcare sector. Similarly, it would also aid in decision-making among implementing institutions.
Build organisational knowledge and inter-stakeholder collaboration. It is imperative to build knowledge and capacity among technical experts, healthcare professionals, and auditors on the technical details of the underlying architecture and socioeconomic realities of public health. Hence, collaboration and feedback are essential to enhance model development and AI auditing.
Prioritise transparency and public accountability in auditing standards. Given that most healthcare institutions procure externally developed AI systems, some form of internal or external AI audit would contribute to better public accountability and transparency of these technologies.
- Centre public good in India’s AI industrial policy
Adopt focused and transparent approaches to investing in and financing AI projects. An equitable distribution of AI spending and associated benefits is essential to guarantee that these investments and their applications extend beyond private healthcare, and that implementation approaches prioritise the public good. This would involve investing in entire AI life cycles instead of merely focusing on development and promoting transparent public–private partnerships.
Strengthen regulatory checks and balances for AI governance. While an overarching law to regulate AI technologies may still be under debate, existing regulations may be amended to bring AI within their ambit. Furthermore, all regulations must be informed by stakeholder consultations to guarantee that the process is transparent, addresses the rights and concerns of all the parties involved, and prioritises the public good.
Technology-facilitated Gender-based Violence and Women’s Political Participation in India: A Position Paper
Read the full paper here.
Political participation of women is fundamental to democratic processes and promotes building of more equitable and just futures. Rapid adoption of technology has created avenues for women to access the virtual public sphere, where they may have traditionally struggled to access the physical public spaces, due to patriarchal norms and violence in the physical sphere. While technology has provided tools for political participation, information seeking, and mobilization, it has also created unsafe online spaces for women, thus often limiting their ability to actively engage online.
This essay examines the emotional and technological underpinnings of gender-based violence faced by women in politics. It further explores how gender-based violence is weaponised to diminish the political participation and influence of women in the public eye. Through real-life examples of gendered disinformation and sexist hate speech targeting women in politics in India, we identify affective patterns in the strategies deployed to adversely impact public opinion and democratic processes. We highlight the emotional triggers that play a role in exacerbating online gendered harms, particularly for women in public life. We also examine the critical role of technology and online platforms in this ecosystem – both in perpetuating and amplifying this violence as well as attempting to combat it.
We argue that it is critical to investigate and understand the affective structures in place, and the operation of patriarchal hegemony that continues to create unsafe access to public spheres, both online and offline, for women. We also advocate for understanding technology design and identifying tools that can actually aid in combating TFGBV. Further, we point to the continued need for greater accountability from platforms, to mainstream gender related harms and combat it through diversified approaches.
Privacy Policy Framework for Indian Mental Health Apps
The report’s findings indicate a significant gap in the structure and content of privacy policies of Indian mental health apps. This highlights the need for a framework that can guide organisations in developing their privacy policies. Therefore, this report proposes a holistic framework to guide the development of privacy policies for mental health apps in India. It focuses on three key segments that are an essential part of the privacy policy of any mental health app. First, a policy must include factors considered essential by the Digital Personal Data Protection Act, 2023 (DPDPA), such as consent mechanisms, rights of the data principal, and provisions to withdraw consent. Second, the privacy policy must state how the data provided by users to these apps will be used. Finally, developers must include key elements such as provisions for third-party integrations and data retention policies.
Click to download the full research paper here
Digital Rights and ISP Accountability in India: An Analysis of Policies and Practices
Read the full report here.
India's four largest Internet Service Providers (ISPs), Reliance Jio, Bharti Airtel, Vodafone-Idea (Vi), and BSNL, collectively serve 98% of India's internet subscribers, with Jio and Airtel commanding a dominant market share of 80.87%. The assessment comes at a critical juncture in India's digital landscape, marked by a 279.34% increase in internet subscribers from 2014 to 2024, alongside issues such as the proliferation of internet shutdowns.
Adapting the methodology framework from Ranking Digital Rights' (RDR) 2022 Telco Giants Scorecard, our analysis reveals significant disparities in governance structures and commitment to digital rights across these providers. Bharti Airtel emerges as the leader in governance framework implementation, maintaining dedicated human rights policies and board-level oversight. In contrast, Vi and Jio demonstrate mixed results with limited explicit human rights commitments, while BSNL exhibits the weakest governance structure with minimal human rights considerations. Notably, all ISPs lack comprehensive human rights impact assessments for their advertising and algorithmic systems.
The evaluation of freedom of expression commitments reveals systematic inadequacies across all providers. Terms and conditions are frequently fragmented and difficult to access, while providers maintain broad discretionary powers for account suspension or termination without clear appeal processes. There is limited transparency regarding content moderation practices and government takedown requests, coupled with insufficient disclosure about algorithmic decision-making systems that affect user experiences.
Privacy practices among these ISPs show minimal evolution since previous assessments, with persistent concerns about policy accessibility and comprehension. The investigation reveals limited transparency regarding algorithmic processing of personal data, widespread sharing of user data with third parties and government agencies, and inadequate user control over personal information. None of the evaluated ISPs maintain clear data breach notification policies, raising significant concerns about user data protection.
The concentrated market power of Jio and Airtel, combined with weak digital rights commitments across the sector, raises substantial concerns about the state of user privacy and freedom of expression in India's digital landscape. The lack of transparency in website blocking and censorship, inconsistent implementation of blocking orders, limited accountability in handling government requests, insufficient protection of user rights, and inadequate grievance redressal mechanisms emerge as critical areas requiring immediate attention.
As India continues its rapid digital transformation, our findings underscore the urgent need for both regulatory intervention and voluntary industry reforms. The development of standardised transparency reporting, strengthened user rights protections, and robust accountability mechanisms will be crucial in ensuring that India's digital growth aligns with fundamental rights and democratic values.
Do We Need a Separate Health Data Law in India?
Chapter 1.Background
Digitisation has become a cornerstone of India’s governance ecosystem since the National e-Governance Plan (NeGP) of 2006. This trend can also be seen in healthcare, especially during the COVID-19 pandemic, with initiatives like the Ayushman Bharat Digital Mission (ABDM). However, the digitisation of healthcare has been largely conducted without legislative backing or judicial oversight. This has resulted in inadequate grievance redressal mechanisms, potential data breaches, and threats to patient privacy.
Unauthorised access to or disclosure of health data can result in stigmatisation, mental and physical harassment, and discrimination against patients. Moreover, because of the digital divide, overdependence on digital health tools to deliver health services can lead to the exclusion of the most marginalised and vulnerable sections of society, thereby undermining the equitable availability and accessibility of health services. Health data in digitised form is also vulnerable to cyberattacks and breaches. This was evidenced in the recent ransomware attack on the All India Institute of Medical Sciences, which, apart from violating the right to privacy of patients, also brought patient care to a grinding halt.
In this context, and with the rise in health data collection and uptick in the use of AI in healthcare, there is a need to look at whether India needs a standalone legislation to regulate the digital health sphere. It is also necessary to evaluate whether the existing policies and regulations are sufficient, and if amendments to these regulations would suffice.
This report discusses the current definitions of health data, including international efforts, and then proceeds to share some key themes that were discussed at three roundtables we conducted in May, August, and October 2024. Participants included experts from diverse stakeholder groups, including civil society organisations, lawyers, medical professionals, and academicians. In this report, we collate the various responses to two main aspects, which were the focus of the roundtables:
- In which areas are the current health data policies and laws lacking in India?
- Do we need a separate health data law for India? What are the challenges associated with this? What are other ways in which health data can be regulated?
Chapter 2. How is health data defined?
There are multiple definitions of health data globally. These include definitions incorporated into the text of data protection legislations or under separate health data laws. In the European Union (EU), the General Data Protection Regulation treats “data concerning health” as personal data that falls under special category data, i.e. data that requires stringent and special protection due to its sensitive nature. Data concerning health is defined under Article 4(15) as “personal data related to the physical or mental health of a natural person, including the provision of healthcare services, which reveal information about his or her health status”. The United States has the Health Insurance Portability and Accountability Act (HIPAA), which was created to ensure that the personally identifiable information (PII) gathered by healthcare and insurance companies is protected against fraud and theft and cannot be disclosed without consent. As per the World Health Organisation (WHO), ‘digital health’ refers to “a broad umbrella term encompassing eHealth, as well as emerging areas, such as the use of advanced computing sciences in ‘big data’, genomics and artificial intelligence”.
2.1. Current legal framework for regulating the digital healthcare ecosystem in India
In India, digital health data was defined under the draft Digital Information Security in Healthcare Act (DISHA), 2017, as an electronic record of health-related information about an individual, which includes the following: (i) information concerning the physical or mental health of the individual; (ii) information concerning any health service provided to the individual; (iii) information concerning the donation by the individual of any body part or any bodily substance; (iv) information derived from the testing or examination of a body part or bodily substance of the individual; (v) information that is collected in the course of providing health services to the individual; or (vi) information relating to the details of the clinical establishment accessed by the individual.
However, DISHA was subsumed into the 2019 version of the data protection legislation, the Personal Data Protection Bill, 2019, which contained a definition of health data and a demarcation between sensitive personal data and personal data. Both these definitions are absent from the Digital Personal Data Protection Act (DPDPA), 2023. This leaves it uncertain what is defined as health data in India. It is also important to note that the health data management policies released during the pandemic relied on the definition of health data in the then draft of the data protection legislation.
(i) Drugs and Cosmetics Act, and Rules
At present, there is no specific law that regulates the digital health ecosystem in India. The ecosystem is currently regulated by a mix of laws regulating the offline/legacy healthcare system and policies notified by the government from time to time. The primary law governing the healthcare system in India is the Drugs and Cosmetics Act (DCA), 1940, read with the Drugs and Cosmetics Rules, 1945. These regulations govern the manufacture, sale, import, and distribution of drugs in India. The central and state governments are responsible for enforcing the DCA. In 2018, the central government published the Draft Rules to amend the Drugs and Cosmetics Rules in order to incorporate provisions relating to the sale of drugs by online pharmacies (Draft Rules). However, the final rules are yet to be notified. The Draft Rules prohibit online pharmacies from disclosing the prescriptions of patients to any third person. However, they also mandate the disclosure of such information to the central and state governments, as and when required for public health purposes.
(ii) Clinical Establishments (Registration and Regulation) Act, and Rules
The Clinical Establishments Rules, 2012, which are issued under the Clinical Establishments (Registration and Regulation) Act, 2010, require clinical establishments to maintain electronic health records (EHRs) in accordance with standards determined by the central government. The Electronic Health Record (EHR) Standards, 2016, were formulated to create a uniform, standards-based system for EHRs in India. They provide guidelines for clinical establishments on maintaining health data records as well as data and security measures. They also lay down that ownership of the data is vested in the individual, and that the healthcare provider holds such medical data in trust for the individual.
(iii) Health digitisation policies under the National Health Authority
In 2017, the central government formulated the National Health Policy (NHP). A core component of the NHP is deploying technology to deliver healthcare services. The NHP recommends creating a National Digital Health Authority (NDHA) to regulate, develop, and deploy digital health across the continuum of care. In 2019, the Niti Aayog proposed the National Digital Health Blueprint (Blueprint), which recommended the creation of the National Digital Health Mission (NDHM). The Blueprint made this proposition stating that “the Ministry of Health and Family Welfare has prioritised the utilisation of digital health to ensure effective service delivery and citizen empowerment so as to bring significant improvements in public health delivery”. It also stated that an institution such as the NDHM, which is undertaking significant reforms in health, should have legal backing.
(iv) Telemedicine Practice Guidelines
On 25 March 2020, the Telemedicine Practice Guidelines under the Indian Medical Council Act were notified. The Guidelines provide a framework for registered medical practitioners to follow for teleconsultations.
2.2. Digital Personal Data Protection Act, 2023
There has been much hope that India’s data protection legislation would cover definitions of health data, keeping in mind the removal of DISHA and the uptick in health digitisation in both the public and private health sectors. The DPDPA, India’s privacy/data protection law, was notified on 12 August 2023. However, its provisions have still not come into force. So, currently, health data and patient medical history are regulated by the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules (SPDI Rules), 2011. The SPDI Rules will be replaced by the DPDPA as and when its different provisions are enforced. On 3 January 2025, the Ministry of Electronics and Information Technology released the Draft Digital Personal Data Protection Rules, 2025, for public consultation. The last date for submitting comments is 18 February 2025.
Health data is regarded as sensitive personal data under the SPDI Rules. Earlier drafts of the data protection legislation had demarcated data into personal data and sensitive personal data, with health data regarded as sensitive personal data. However, the DPDPA has removed this distinction: all data is regarded as personal data. Therefore, the extra protection that was previously afforded to health data has been removed. The Draft Rules also do not mention health data or provide any additional safeguards for protecting it. They do, however, exempt healthcare professionals from the obligations placed on data fiduciaries when processing children’s data, provided the processing is restricted to the extent necessary to protect the health of the child.
As seen so far, while there are multiple healthcare-related regulations that govern stakeholders – from medical device manufacturers to medical professionals – there is still a vacuum in terms of the definition of health data. The DPDPA does not clarify this definition. Further, there are no clear guidelines for how these regulations work with one another, especially in the case of newer technologies like AI, which have already started disrupting the Indian health ecosystem.
Chapter 3. Key takeaways from the health data roundtables
The three health data roundtables covered various important topics related to health data governance in India. The first roundtable highlighted the major concerns and examined the granular details of considering a separate law for digital healthcare. The second roundtable featured a detailed discussion on whether a separate law is needed or whether the existing laws can be modified to address extant concerns. There was also a conversation on whether the absence of a classification absolves organisations of the responsibility to protect or secure health data. Participants stated that, given the sensitivity of health data, data fiduciaries processing such data could qualify as significant data fiduciaries under the proposed DPDPA Rules (which, at the time the roundtables were held, were yet to be published). The final roundtable concluded with an in-depth discussion on the need for a health data law. However, no consensus emerged among the different stakeholders.
The roundtables highlighted that the different stakeholders – medical professionals, civil society workers, academics, lawyers, and people working in startups – were indeed thinking about how to regulate health data. But there was no single approach that all agreed on.
3.1. Health data concerns
Here, we summarise the key points that emerged during the three roundtables. These findings shed light on concerns regarding the collection, sharing, and regulation of health data.
(i) Removal of sensitive personal data classification
In the second roundtable, there was a discussion on the removal of the definition of health data from the final version of the DPDPA, which also removed the provision for sensitive personal data; health data previously came under this category. One participant stated that differentiating between sensitive personal data and personal data was important, as sensitive personal data such as health data warrants greater security. They further stated that without such a clear distinction, data such as health status and sexual history could be easily accessed. Participants also pointed out that, given the current state of digital data infrastructure, the security of personal data is not up to the mark. Hence, a clear classification of sensitive and personal data would ensure that data fiduciaries collecting and processing sensitive personal data have greater responsibility and accountability.
(ii) Definition of informed consent
The term ‘informed consent’ came up several times during the roundtable discussions, but there was no clarity on what it means. A medical professional stated that in their practice, informed consent applies only to treatment; if the patient’s data is being used for research, it goes through the necessary internal review board and ethics board for clearance. One participant mentioned that Section 2(i) of the Mental Healthcare Act (MHA), 2017 defines informed consent as
consent given for a specific intervention, without any force, undue influence, fraud, threat, mistake or misrepresentation, and obtained after disclosing to a person adequate information including risks and benefits of, and alternatives to, the specific intervention in a language and manner understood by the person; a nominee to make a decision and consent on behalf of another person.
Neither the DPDPA nor the Draft DPDPA Rules define informed consent. However, the Draft DPDPA Rules state that the notice given by the data fiduciary to the data principal must use simple, plain language to provide the data principal with a full and transparent account of the information necessary for them to give informed consent to the processing of their personal data.
A stakeholder pointed out that consent is often taken without much nuance or any real option for choice. Indeed, consent is often presented in non-negotiable terms, creating power imbalances and undermining patient autonomy. Suggested solutions include instituting granular and revocable consent mechanisms. This point also emerged during the third roundtable, where it was highlighted that consenting to a medical procedure is different from consenting to data being used to train AI. When a consent form that a patient or caregiver is asked to sign gives the relevant information but no choice except to sign, it creates a severe power imbalance. Participants also emphasised the need to assess whether consent is being used as a tool to enable more data-sharing, or as a mechanism to give citizens other rights, such as the reasonable expectation that their medical information would not be used for commercial interests, especially to their own detriment, just because they signed a form. One suggested way to tackle this is greater demarcation of the aspects a person could consent to, which would give people more control over the various ways in which their data is used.
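To make the idea of granular and revocable consent concrete, the sketch below shows one possible way per-purpose consent could be recorded and withdrawn separately, so that consenting to treatment does not automatically imply consenting to research or AI-training uses. This is a minimal illustration of our own; the purpose names and the ConsentRecord structure are assumptions, not something prescribed by the DPDPA, the Draft Rules, or the roundtable participants.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical purposes a patient could consent to separately (our assumption).
PURPOSES = {"treatment", "research", "ai_training", "third_party_sharing"}

@dataclass
class ConsentRecord:
    """Per-purpose, revocable consent for one data principal (illustrative only)."""
    principal_id: str
    grants: dict = field(default_factory=dict)  # purpose -> timestamp of grant

    def grant(self, purpose: str) -> None:
        if purpose not in PURPOSES:
            raise ValueError(f"Unknown purpose: {purpose}")
        self.grants[purpose] = datetime.now(timezone.utc)

    def revoke(self, purpose: str) -> None:
        # Revoking one purpose leaves consent for the other purposes untouched.
        self.grants.pop(purpose, None)

    def allows(self, purpose: str) -> bool:
        return purpose in self.grants

# Consenting to treatment does not imply consent to AI training.
record = ConsentRecord(principal_id="patient-001")
record.grant("treatment")
assert record.allows("treatment") and not record.allows("ai_training")
record.revoke("treatment")
assert not record.allows("treatment")
```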
(iii) Data sharing with third parties
Discussions also focused on concerns about sharing health data with third parties, especially if the data is transferred outside India. Data is, or can be, shared with tech companies and research organisations. The discussions therefore highlighted the regulations and norms that govern how such data sharing occurs, despite the fragmented regulatory landscape. For instance:
- The Indian Council of Medical Research (ICMR) Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare mandate strict protocols for sharing health data, but these are not binding. They state that the sharing of health data by medical institutions with tech companies and collaborators must go through the ICMR and the Health Ministry’s Screening Committee. This committee has strict guidelines on how much data can be shared and in what manner. The process also requires that all PII be removed and that only 10 percent of the total data be shared with any collaborator outside Indian jurisdiction (see the sketch after this list).
- Companies working internationally have to comply with global standards like the GDPR and HIPAA, which highlights the gaps in India’s domestic framework and leaves companies uncertain about which regulations to comply with. There is a need to balance the interests of startups, which require more data and better longitudinal health records, against the need for strong data protection, data minimisation, and storage limitation.
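As a rough illustration of the two ICMR requirements described in the first bullet above (stripping PII and capping the externally shared subset at 10 percent of the dataset), the sketch below shows what a pre-sharing step could look like. The field names and the sampling logic are our own assumptions for illustration, not the ICMR's or the Screening Committee's prescribed procedure.

```python
import random

# Hypothetical direct identifiers to strip before sharing (our assumption).
PII_FIELDS = {"name", "phone", "address", "patient_id"}

def prepare_for_external_sharing(records: list, share_fraction: float = 0.10) -> list:
    """Drop PII fields, then return at most `share_fraction` of the records."""
    de_identified = [
        {key: value for key, value in record.items() if key not in PII_FIELDS}
        for record in records
    ]
    sample_size = int(len(de_identified) * share_fraction)
    return random.sample(de_identified, sample_size)

dataset = [
    {"patient_id": f"P{i}", "name": "...", "age": 40 + i % 30, "diagnosis": "..."}
    for i in range(100)
]
shareable = prepare_for_external_sharing(dataset)
assert len(shareable) == 10 and all("patient_id" not in record for record in shareable)
```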
(iv) Inadequate healthcare infrastructure
With respect to the implementation challenges associated with health data laws, participants noted that, currently, the Indian healthcare infrastructure is not up to the mark. Moreover, smaller and rural hospitals are not yet on board with health digitisation and may not be able to comply with additional rules and responsibilities. In terms of capacity as well, smaller healthcare facilities lack the resources to implement and comply with complex regulations.
3.2. Regulatory challenges
Significant time was spent on discussing the regulatory challenges and deficiencies in India’s healthcare infrastructure. The discussion primarily revolved around the following points:
(i) State vs. central jurisdiction
Under the Constitutional Scheme, legislative responsibilities for various subjects are demarcated between the centre and the states, and are sometimes shared between them. The topics of public health and sanitation, hospitals, and dispensaries fall under the state list set out in the Seventh Schedule of the Constitution. This means that state governments have the primary responsibility of framing and implementing laws on these subjects. Under this, local governance institutions, namely local bodies, also play an important role in discharging public health responsibilities.
(ii) Do we bring back DISHA?
During the conversation about the need for health data regulation, participants noted that there had been an earlier push for a health data law in the form of DISHA, 2017, which was later abandoned. DISHA aimed to set up digital health authorities at the national and state levels to implement privacy and security measures for digital health data and to create a mechanism for the exchange of electronic health data. Another concern with a central health data legislation was that, as health is a state subject, there could be confusion about having a separate, centralised regulatory body to oversee how data is handled. This might come with a lack of clarity on who would address what, or which ministry (in the state or central government) would handle the redressal mechanism.
3.3. Are the existing guidelines enough?
Participants highlighted that enacting a separate law to regulate digital health would be challenging, considering that the DPDPA took seven years to be enacted, the rules are yet to be drafted, and the Data Protection Board has not been established. Hence, any new legislation would take significant resources, including manpower and time.
In this context, there were discussions acknowledging that, although the DPDPA does not currently regulate health data, there are other regulations and policies prescribed for specific types of interventions involving health data; for example, the Telemedicine Practice Guidelines, 2020, and the Medical Council of India rules. These are binding on medical practitioners, with penalties for non-compliance, such as the revocation of medical licences. Similarly, the ICMR guidelines on the use of data in biomedical research include specific transparency measures and existing obligations on health data collectors that would apply irrespective of the lack of distinction between sensitive personal data and personal data under the DPDPA.
However, another participant rightly pointed out that the ICMR guidelines and the policies from the Ministry of Health and Family Welfare are not binding. Similarly, regulations like the Telemedicine Practice Guidelines and Indian Medical Council Act are only applicable to medical practitioners. There are now a number of companies that collect and process a lot of health data; they are not covered by these regulations. Although there are multiple regulations on healthcare and pharma, none of them cover or govern technology. The only relevant one is the Telemedicine Practice Guidelines, which say that AI cannot advise any patient; it can only provide support.
Chapter 4. Recommendations
Several key points were raised and highlighted during the three roundtables. There were also a few suggestions for how to regulate the digital health sphere. These recommendations and points can be classified into short-term measures and long-term measures.
4.1. Short-term measures
We propose two short-term measures, as follows:
(i) Make amendments to the DPDPA: Introduce sector-specific provisions for health data within the existing framework. The provisions should include guidelines for informed consent, data security, and grievance redressal.
(ii) Capacity-building: Provide training for healthcare providers and data fiduciaries on data security and compliance.
4.2. Long-term measures
We offer six long-term measures, as follows:
(i) Standalone legislation: Enact a dedicated health data law that
- Defines health data and its scope;
- Establishes a regulatory authority for oversight; and
- Includes provisions for data sharing, security, and patient rights.
(ii) National Digital Health Authority
Establish a central authority, similar to the EU’s Health Data Space, to regulate and monitor digital health initiatives.
(iii) Cross-sectoral coordination
Develop mechanisms to align central and state policies and ensure seamless implementation.
(v) Technological safeguards
Encourage the development of AI-specific policies and guidelines to address the ethics of using health data.
(vi) Stringent measures to address data breaches
Increase people’s trust by addressing data breaches and fostering proactive dialogue between patients, the medical community, government, and civil society. Reduce exemptions for data processing, such as those granted to the state for healthcare.
Conclusion
The roundtable discussions highlighted the fragmented nature of the digital health sphere and the issues that emanate from such fragmentation. Considering the variations in healthcare infrastructure and budget allocation across different states, the feasibility of enacting a central digital health law requires more in-depth research. The existing laws governing the offline/legacy health space also need careful examination to understand whether amendments to them would be sufficient to regulate the digital health space.
The Centre for Internet and Society’s comments and recommendations on the Report on AI Governance Guidelines Development
With research assistance by Anuj Singh
I. Background
On 6 January 2025, a Subcommittee on ‘AI Governance and Guidelines Development’ under the Advisory Group put out the Report on AI Governance Guidelines Development, which advocates a whole-of-government approach to AI governance. This subcommittee was constituted by the Ministry of Electronics and Information Technology (MeitY) on 9 November 2023 to analyse gaps and offer recommendations for developing a comprehensive framework for the governance of Artificial Intelligence (AI). As various AI governance conversations take centre stage, this is a welcome step, and we hope that there are more opportunities, through public comments and consultations, to improve this important AI document.
CIS’ comments are in line with the submission guidelines; we have provided both comments and suggestions based on the headings and text provided in the report.
II. Governance of AI
The subcommittee report has explained its reasons for staying away from a definition of AI. However, it would be helpful to set out the scope of AI at the outset of the report, given that different AI systems have different roles and functionalities. Having a clearer framework at the beginning can help readers better understand the scope of the conversation in the report. This section also states that AI can now “perform complex tasks without active human control or supervision”. While there are instances where AI is used without active human control, there is a need to emphasise the importance of humans in the loop. This has also been highlighted in the OECD AI Principles, from which this report draws inspiration.
A. AI Governance Principles
A proposed list of AI Governance principles (with their explanations) is given below.
While referring to the OECD AI Principles is a good first step in understanding global best practices, we suggest undertaking an exercise to map all global AI principles documents published by international and multinational organisations and civil society, to determine the principles that are most important for India. The OECD AI Principles also come from regions with better internet penetration and higher literacy rates than India; for them, the principle of “digital by design governance” may be achievable, but in India a digital-first approach, especially in governance, could lead to large-scale exclusions.
B. Considerations to operationalise the principles
1. Examining AI systems using a lifecycle approach
The subcommittee has taken a novel approach to defining the AI lifecycle. The terms “Development, Deployment and Diffusion” do not appear in the major publications on the AI lifecycle. Academicians (e.g. Chen et al. (2023), De Silva and Alahakoon (2022)) have described the AI lifecycle as comprising design, development, and deployment, while others (Ng et al. (2022)) have defined it as “data creation, data acquisition, model development, model evaluation and model deployment”. Even NASSCOM’s Responsible AI Playbook follows “conception, designing, development and deployment” as some of the key stages in the AI lifecycle. Similarly, the OECD recognises “i) ‘design, data and models’; ii) ‘verification and validation’; iii) ‘deployment’; and iv) ‘operation and monitoring’” as the phases of the AI lifecycle. The subcommittee could therefore provide a citation as well as a justification for this novel approach to the AI lifecycle, and state the reason for moving away from the recognised stages. Steering away from an established approach could cause confusion among stakeholders who may not be well versed in AI terminology and the AI lifecycle to begin with.
2. Taking an ecosystem-view of AI actors
While the report rightly states that multiple actors are involved across the AI lifecycle, it is also important to note that the same actor can be involved in multiple stages of the lifecycle. For example, take the case of an AI app used for disease diagnosis: the medical professional can be the data principal (using their own data), the data provider (using the app and thereby providing data), and the end user (using the app for diagnosis). Similarly, a government body can be the data provider, the developer (if the system is made in-house or outsourced through tenders), the deployer, as well as the end user. Hence, for each AI application there may be multiple actors who play different roles, and those roles may not be static.
While looking at governance approaches, the approach must ideally not be limited to responsibilities and liabilities, especially when the “data principal” and individual end users are highlighted as actors; it should also include rights and means of redressal in order to be a rights-based, people-centric approach to AI governance.
3. Leveraging technology for governance
While the use of a techno-legal approach in governance is picking up speed, there is a need to look at existing central and state capacity to undertake this, and also at the ways it could affect people who still do not have access to the internet. One example of a techno-legal approach that has seen some success is the Bhumi programme in Andhra Pradesh, which used blockchain for land records; however, it also weakened local institutions and led to the exclusion of marginalised people (Kshetri, 2021). It was also stated that there is a need to strengthen existing institutions before relying on a technological measure.
Secondly, while the subcommittee has emphasised the improvements in the quality of generative AI tools, there is a need to assess how these tools work for Indian use cases. It was reported last year that ChatGPT could not answer all the questions relating to the Indian civil services exam and failed to correctly answer questions on geography, even though it was able to crack tough exams in the USA. In addition, a month ago the Finance Ministry advised government officials to refrain from using generative AI tools on official devices for fear of leakage of confidential information.
Thirdly, the subcommittee needs to assess India’s data preparedness for a techno-legal approach of this scale. In our study on healthcare and AI in India, in which we surveyed medical professionals, hospitals, and technology companies, a common understanding was that data quality in Indian datasets is an issue and that there is some reliance on data from the global north. This could be similar in other sectors as well; hence, when such data is used to train systems, it could lead to harms and biases.
III. GAP ANALYSIS
A. The need to enable effective compliance and enforcement of existing laws.
The sub-committee has highlighted the importance of ensuring that the growth of AI does not lead to unfair trade practices and market dominance. It is hence important to analyse whether the existing laws on antitrust and competition, and the regulatory capacity of the Competition Commission of India, are robust enough to deal with AI and the changes in technology and technology developers.
There is also an urgent need to assess the issues that might come under the ambit of competition throughout the lifecycle of AI, including in areas of chip manufacturing, compute, data, models and IP. While the players could keep changing in this evolving area of technology there is a need to strengthen the existing regulatory system, before looking at techno legal measures.
We suggest that, before a techno-legal approach is adopted in all forms of governance, there is an urgent need to map the existing regulations, both central and state, assess how they apply to regulating AI, and assess the capacity of existing regulatory bodies to deal with AI-related issues. In the case of healthcare, for example, there are multiple laws, policies, and guidelines, as well as regulatory bodies, that apply to various stages of healthcare and to various actors; at times these regulations do not refer to each other or create duplications that lead to a lack of clarity.
Below, we add our comments and suggestions on certain subsections of this section on the need to enable effective compliance and enforcement of existing laws.
1. Intellectual property rights
a. Training models on copyrighted data and liability in case of infringement
While Section 14 of the Indian Copyright Act, 1957 provides copyright holders with exclusive rights to copy and store works, training AI models involves making non-expressive uses of works, so a straightforward conclusion cannot easily be drawn. Hence, the presumption that training models on copyrighted data constitutes infringement is premature and unfounded.
The report states: “The Indian law permits a very closed list of activities in using copyrighted data without permission that do not constitute an infringement. Accordingly, it is clear that the scope of the exception under Section 52(1)(a)(i) of the Copyright Act, 1957 is extremely narrow. Commercial research is not exempted; not-for-profit institutional research is not exempted. Not-for-profit research for personal or private use, not with the intention of gaining profit and which does not compete with the existing copyrighted work is exempted.”
Indian copyright law follows a ‘hybrid’ model of limitations and exceptions under s.52(1). S. 52(1)(a), which is the ‘fair dealing’ provision, is more open-ended than the rest of the clauses in the section. Specifically, the Indian fair dealing provision permits fair dealing with any work (not being a computer programme) for the purposes of private or personal use, including research.
If India is keen on indigenous AI development, specifically as it relates to foundation models, it should work towards developing frameworks for suitable exceptions, as may be appropriate. Lawmakers could distinguish between different types of copyrighted works and public-interest purposes while considering the issue of infringement and liability.
b. Copyrightability of work generated by using foundation models
We suggest that a public consultation would be a useful exercise in ensuring that the opinions and issues of all stakeholders, including copyright holders, authors, and users, are taken into account.
C. The need for a whole-of-government approach.
While information existing in silos is a significant issue and roadblock, if the many existing guidelines and principles have taught us anything, it is that without specificity and direct applicability it is difficult for implementers to extrapolate principles into their development, deployment, and governance mechanisms. The committee assumes a sectoral understanding from the government of the various players in highly regulated sectors such as healthcare or financial services. However, as our recent study on AI in healthcare indicates, there are significant information gaps when it comes to a shared understanding of what data is being used for AI development, where the AI models are being developed, and what kinds of partnerships are being entered into for the development and deployment of AI systems. While the report also highlights concerns about the siloed regulatory framework, it is important to consider how sector-specific challenges lend themselves to the cross-sectoral discussion. Consider, for instance, an AI credit-scoring system in financial services that leads to exclusion errors.
Additionally, consider an AI system deployed for disease diagnosis. While both use predictive AI, the nature of the risks and harms is different. While there can be common and broad frameworks to test the efficacy of both AI models, the exact parameters for testing them would have to be unique. Therefore, it will be important to consider where bringing together cross-sectoral stakeholders will be useful and where deeper work at the sector level is needed.
IV. Recommendations
1. To implement a whole-of-government approach to AI Governance, MeitY and the Principal Scientific Adviser should establish an empowered mechanism to coordinate AI Governance.
We would like to reiterate the earlier section and highlight the importance of considering how sector-specific challenges lend themselves to cross-sectoral discussion. While the whole-of-government approach is welcome, as it will help build a common understanding between different government institutions, it might not be sufficient for AI governance, because it rests on the implicit assumption that internal coordination among various government bodies is enough to manage AI-related risks.
2. To develop a systems-level understanding of India’s AI ecosystem, MeitY should establish, and administratively house, a Technical Secretariat to serve as a technical advisory body and coordination focal point for the Committee/Group.
The subcommittee report states that, at this stage, it is not recommended to establish the Committee/Group or its Secretariat as statutory authorities, as such a decision requires significant analysis of gaps, requirements, and possible unintended outcomes. While these are valid considerations, adequate checks and balances must be in place. If the Secretariat is placed within MeitY, safeguards must ensure that officials have autonomy in decision-making. The subcommittee suggests that MeitY can bring in officials on deputation from other departments. Similarly, the committee proposes bringing in experts from industry; while this is important for informed policymaking, there is also a risk of regulatory capture. Setting a cap on the percentage of industry representatives and requiring full disclosure of the affiliations of the experts involved are some safeguards that could be considered. We also suggest that members of civil society be considered for this Secretariat.
3. To build evidence on actual risks and to inform harm mitigation, the Technical Secretariat should establish, house, and operate an AI incident database as a repository of problems experienced in the real world that should guide responses to mitigate or avoid repeated bad outcomes.
The report suggests that the Technical Secretariat will build a record of the actual incidence of AI-related risks in India. In most instances, an AI incident database assumes that an AI-related adverse incident has already taken place, which implies that it is no longer a potential risk but an actual harm. This recommendation takes a post-facto approach to assessing AI systems, as opposed to conducting risk assessments prior to the actual deployment of an AI system. Further, it also lays emphasis on receiving reports from public sector organisations deploying AI systems. Given that public sector organisations, in many cases, would be the deployers of AI systems rather than the developers, they may have limited know-how about the functionality of the tools and, therefore, about the risks and harms.
It is important to clarify and define what will be considered an AI risk, as this could depend on the stakeholder: for a company, losing clients due to an AI system is a risk, as is an individual being denied health insurance because of AI bias. With this understanding, while there is a need for ongoing assessment of risks and the emergence of new risks, the Technical Secretariat could also map the existing risks highlighted by academia, civil society, and international organisations, and seed the risk database with them. In addition, the “AI incident database” should be open to research institutions and civil society organisations, similar to the OECD AI Incidents Monitor.
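As a purely illustrative sketch of what an open incident-and-risk repository entry might capture, the record below distinguishes a reported incident from a risk mapped out of existing research, and notes the sector and affected stakeholder. The field names are our assumptions and are not taken from the subcommittee report or the OECD AI Incidents Monitor.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIIncidentRecord:
    """One entry in a hypothetical AI incident/risk database (illustrative only)."""
    record_id: str
    sector: str                    # e.g. "healthcare", "financial services"
    description: str
    affected_stakeholder: str      # e.g. "individual", "company", "public body"
    source: str                    # "reported incident" or "mapped risk"
    reported_by: Optional[str] = None  # deployer, developer, researcher, civil society, etc.
    harm_observed: bool = False    # False = potential risk, True = realised harm

# A mapped risk seeded from existing research rather than a reported incident.
entry = AIIncidentRecord(
    record_id="2025-0001",
    sector="financial services",
    description="Credit-scoring model produces exclusion errors for thin-file applicants",
    affected_stakeholder="individual",
    source="mapped risk",
    reported_by="research institution",
)
```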
4. To enhance transparency and governance across the AI ecosystem, the Technical Secretariat should engage the industry to drive voluntary commitments on transparency across the overall AI ecosystem and on baseline commitments for high capability/widely deployed systems.
It is commendable that the subcommittee extends the transparency requirement to the government, with the example of law enforcement. This would create more trust in these systems and also place responsibility on the companies providing such services to comply with existing laws and regulations.
While the transparency measures listed will ensure a better understanding of the processes of AI developers and deployers, there is also a need to bring in responsibility along with transparency. While the report mentions ‘peer review by third parties’, we would also suggest auditing as a mechanism for transparency and responsibility. Our study on the AI data supply chain, auditability, and healthcare in India (which surveyed 150 medical professionals, 175 respondents from healthcare institutions, and 175 respondents from technology companies) revealed that 77 percent of the healthcare institutions and 64 percent of the technology companies surveyed conducted audits or evaluations of their data privacy and security measures.
5. Form a sub-group to work with MEITY to suggest specific measures that may be considered under the proposed legislation like Digital India Act (DIA) to strengthen and harmonise the legal framework, regulatory and technical capacity and the adjudicatory set-up for the digital industries to ensure effective grievance redressal and ease of doing business.
It would be necessary to provide some clarity on where the Digital India Act process currently stands. While there were public consultations in 2023, we have not heard about progress in the development of the Act. The most recent discussion on the Act was in January 2025, when S Krishnan, Secretary, Ministry of Electronics and IT (MeitY), stated that the ministry was in no hurry to carry forward the draft Digital India Act and a regulatory framework around AI. He also stated that the existing legal frameworks were currently sufficient to handle AI intermediaries.
We would also like to highlight that, during the consultations on the DIA, it was proposed that the DIA replace the Information Technology Act, 2000. It is necessary that the subcommittee provide clarity on this, since if the DIA is enacted, the report’s Section III on gap analysis, especially around the IT Act and cybersecurity, will need to be revisited.
The Centre for Internet and Society’s comments and feedback on the Digital Personal Data Protection Rules, 2025
Rule 3 - Notice given by data fiduciary to data principal - Under Section 5(2) of the DPDP Act, when the personal data of the data principal has been processed before the commencement of the Act, the data fiduciary is required to give notice to the data principal as soon as reasonably practicable. However, the Rules fail to specify what is meant by reasonably practicable, so the timeline for such a notice is unclear.
- In addition, under Rule 3(a) the phrase “be presented and be understandable independently” is ambiguous. It is not clear whether the consent notice has to be presented independently of any other information or whether it only needs to be independently understandable and can be presented along with other information.
- In addition, we suggest that the “privacy by design” requirement mentioned in earlier drafts be brought back, with a focus on preventing deceptive design practices (dark patterns) from being used while collecting data.
Rule 4 - Registration and obligations of Consent Manager - The concept of independent consent managers, similar to account aggregators in the financial sector and consent manager platforms in the EU, is a positive step. However, the Act and the Rules need to flesh out the interplay between the Data Fiduciary and the Consent Manager in more detail: for example, how does the data fiduciary know whether a data principal is using a consent manager, under what circumstances can the data fiduciary bypass the consent manager, and what is the penalty or consequence for doing so?
Rule 6 - Reasonable security safeguards - While we appreciate the guidance provided in terms of security measures such as “encryption, obfuscation or masking or the use of virtual tokens”, it would also be good to refer to the SPDI Rules and include the example of the international standard IS/ISO/IEC 27001 on Information Technology - Security Techniques - Information Security Management Systems as an illustration to guide data fiduciaries.
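As a minimal illustration of the kinds of safeguards the Rule names (masking and the use of virtual tokens), the sketch below shows one possible way a data fiduciary might replace a direct identifier with a random token and mask part of a record before sharing it. This is our own example under stated assumptions, not a procedure prescribed by the Rules, the SPDI Rules, or IS/ISO/IEC 27001.

```python
import secrets

# Hypothetical in-memory token vault; a real deployment would need secure,
# access-controlled storage and key management under its security management system.
_token_vault = {}

def tokenise(identifier: str) -> str:
    """Replace a direct identifier (e.g. a patient ID) with a random virtual token."""
    token = secrets.token_hex(8)
    _token_vault[token] = identifier
    return token

def mask_phone(phone: str, visible: int = 2) -> str:
    """Mask all but the last few digits of a phone number."""
    return "*" * (len(phone) - visible) + phone[-visible:]

record = {"patient_id": "ABC-12345", "phone": "9876543210", "diagnosis": "..."}
shared = {
    "patient_ref": tokenise(record["patient_id"]),  # re-identifiable only via the vault
    "phone": mask_phone(record["phone"]),
    "diagnosis": record["diagnosis"],
}
```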
Rule 7 - Intimation of personal data breach - As per the Rules, the data fiduciary, on becoming aware of any personal data breach, is required to notify the data principal and the Data Protection Board without delay. A plain reading of this Rule suggests that the data fiduciary has to report the breach almost immediately, which could be a practical challenge. Further, the absence of any threshold (materiality, gravity of the breach, etc.) for notifying the data principal means that the data fiduciary will have to inform the data principal about even an isolated breach that may have no impact on them. In this context, we recommend that the Rule be amended to state that the data fiduciary should be required to inform the Data Protection Board about every data breach, while the data principal should be informed depending on the gravity and materiality of the breach, and when it is likely to result in high risk to the data principal.
- Whilst the Rules have provisions for intimation of a data breach, there is no specific provision requiring the Data Fiduciary to take the measures necessary to mitigate the risk arising out of the breach. Although there is an obligation to report any such measures to the Data Principal (Rule 7(1)(c)) as well as to the DPBI (Rule 7(2)(b)(iii)), no positive obligation is imposed on the Data Fiduciary to actually take such mitigation measures. The Rules and the Act merely presume that the Data Fiduciary will take them, which is perhaps why there are notification requirements for such a breach; however, they do not require the Data Fiduciary to actually implement such measures. This could lead to a situation where a Data Fiduciary takes no measures to mitigate the risks arising out of a data breach and yet remains in compliance with its legal obligations by merely notifying the Data Principal and the DPBI that no measures have been taken. In addition, the SPDI Rules state that, in the event of a breach, the body corporate is required to demonstrate that it had implemented reasonable security standards. This provision could be incorporated into this Rule to emphasise the need to implement robust security standards, which is one way to curb data breaches and to ensure that there is a protocol to mitigate a breach.
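A minimal sketch of the threshold-based notification logic we recommend above for Rule 7 is given below; the likely_harm field and the high-risk test are placeholders of our own, not terms defined in the Act or the Rules.

```python
from dataclasses import dataclass

@dataclass
class Breach:
    records_affected: int
    data_categories: set   # e.g. {"health", "financial", "contact"}
    likely_harm: str        # "low", "moderate", or "high" (placeholder scale)

def notify_board(breach: Breach) -> bool:
    # Our recommendation: every breach is reported to the Data Protection Board.
    return True

def notify_principal(breach: Breach) -> bool:
    # Data principals are informed only when the breach is likely to result in high risk.
    return breach.likely_harm == "high" or "health" in breach.data_categories

isolated = Breach(records_affected=1, data_categories={"contact"}, likely_harm="low")
assert notify_board(isolated) and not notify_principal(isolated)
```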
Rule 10 - Verifiable consent for processing of personal data of child or of person with disability who has a lawful guardian - The two mechanisms provided under the Rules to verify the age and identity of parents presuppose a high degree of digital literacy on the part of parents. Parents may give or refuse consent without thinking much about the consequences of doing so. As there is always a risk of individuals not providing correct information about their age or their relationship with the child, platforms may have to verify every user’s age, thereby preventing users from accessing the platform anonymously. Further, there is also a risk of data maximisation rather than data minimisation; i.e., parents may be required to provide far more information than is required to prove their identity. One recommendation we propose is to remove the processing of children’s personal data from the ambit of this law and instead create a separate, standalone legislation dealing with children’s digital rights. Another important issue to highlight here is the importance of the Data Protection Board and its capacity to levy fines and impose strictures on platforms. We have seen from other countries that platforms are forced to redesign and provide better privacy and data protection mechanisms when the regulator steps in and imposes high penalties.
Rule 12 - Additional obligations of Significant Data Fiduciary - The Rules do not clarify which entities will be considered Significant Data Fiduciaries (SDFs), leaving that to government notifications. This creates uncertainty for data fiduciaries, especially smaller organisations that might not be able to set up the mechanisms and staff needed to conduct data protection impact assessments (DPIAs) and audits. The Rule provides that SDFs will have to conduct an annual DPIA. While this is a step in the right direction, the Rules are currently silent on the granularity of the DPIA. Similarly, for the audit, the Rules do not clarify what type of audit is needed and what the parameters are. It is therefore imperative that the government notify the level of detail that the DPIA and the audit need to go into, in order to ensure that SDFs actually address areas where their data governance practices are lacking and do not use the DPIA as a whitewashing tactic. There is also a need to reduce some of the ambiguity around parameters and responsibilities, to make it easier for startups and smaller players to comply with the regulations. In addition, while there is a need to protect data and increase the responsibility of organisations collecting sensitive data or large volumes of data, there is also a need to look beyond compliance and at ways to preserve the rights of the data principal. Hence, significant data fiduciaries should also be given the added responsibility of collecting explicit consent from the data principal, and of providing easier access for correction of data, grievance redressal, and withdrawal of consent.
Rule 14 - Processing of personal data outside India - As per Section 16 of the Act, the government could, by notification, restrict the transfer of data to specific notified countries. This system of a negative list envisaged under the Act appears to have been diluted by the use of the phrase “any foreign State” in the Rules. This ambiguity should be addressed, and the language in the Rules may be altered to bring it in line with the Act. Further, the Rules also appear to be ultra vires the Act: as per the DPDP Act, personal data could be transferred outside India except to countries on the negative list; however, the dilution of the provision through the Rules appears to have created a whitelist, i.e. a permissible list of countries to which data can be transferred.
Rule 15 - Exemption from Act for research, archiving or statistical purposes - While creating an exception for research and statistical purposes is an understandable objective, the current wording of the provision is vague and open to mischief. The objective behind the provision is to ensure that research activities are not hindered by the requirements of the Act, such as taking consent. However, as the provision is currently drafted, it could be argued that a research lab or research centre established by a large company, e.g. Google or Meta, could also seek exemption from the provisions of this Act for conducting “research”. The research conducted may not be shared with the public and may be used by the companies that funded or established the research centre. Therefore, further conditions should be attached to this provision to keep such research centres outside the purview of the exemption. Conditions such as making the results of the research publicly available, or a public-interest requirement, could be considered for this purpose.
Rule 22 - Calling for Information from data fiduciary or intermediary - This Rule, read with the Seventh Schedule, appears to dilute the data minimisation and purpose limitation provisions provided for in the Act. The wide ambit of powers appears to contravene the Supreme Court’s judgment in the Puttaswamy case, which places certain restrictions on the government while collecting personal data. This “omnibus” provision flouts guardrails like necessity and proportionality that are important to safeguard the fundamental right to privacy.
It should be clarified whether this Rule is merely an enabling provision to facilitate the sharing of information, and whether only competent authorities designated under law can avail of it.
Need for Confidentiality
Additionally, the Rule provides that the government may “require the Data Fiduciary or intermediary to not disclose” any request for information made under the Act. There is no requirement of confidentiality in the governing section, i.e. Section 36, from which Rule 22 derives its authority. On the avoidance of secrecy in government business, the Supreme Court in State of U.P. v. Raj Narain, (1975) 4 SCC 428 held that:
“In a government of responsibility like ours, where all the agents of the public must be responsible for their conduct, there can but few secrets. The people of this country have a right to know every public act, everything, that is done in a public way, by their public functionaries. They are entitled to know the particulars of every public transaction in all its bearing. The right to know, which is derived from the concept of freedom of speech, though not absolute, is a factor which should make one wary, when secrecy is claimed for transactions which can, at any rate, have no repercussions on public security (2). To cover with [a] veil [of] secrecy the common routine business, is not in the interest of the public. Such secrecy can seldom be legitimately desired. It is generally desired for the purpose of parties and politics or personal self-interest or bureaucratic routine. The responsibility of officials to explain and to justify their acts is the chief safeguard against oppression and corruption.”
In order to ensure that state interests are also protected, there may be an enabling provision whereby in certain instances confidentiality may be maintained, but there has to be a supervisory mechanism whereby such action may be judged on the anvil of legal propriety.
Education, Epistemologies and AI: Understanding the role of Generative AI in Education
Emotional Contagion: Theorising the Role of Affect in COVID-19 Information Disorder
By incorporating theoretical frameworks from psychology, sociology, and communication studies, we reveal the complex foundations of both the creation and consumption of misinformation. From this research, fear emerged as the predominant emotional driver in both the creation and consumption of misinformation, demonstrating how negative affective responses frequently override rational analysis during crises. Our findings suggest that effective interventions must address these affective dimensions through tailored digital literacy programs, diversified information sources on online platforms, and expanded multimodal misinformation research opportunities in India.
Click to download the research paper
The Cost of Free Basics in India: Does Facebook's 'walled garden' reduce or reinforce digital inequalities?
In 2015, Facebook introduced internet.org in India, where it faced a lot of criticism. The programme was relaunched as the Free Basics programme, ostensibly to provide free-of-cost access to the Internet to economically deprived sections of society. The content, i.e. the websites, was pre-selected by Facebook and provided by third-party providers. Later, the Telecom Regulatory Authority of India (TRAI) ruled in favour of net neutrality, banning the programme in India. A crucial conversation in this debate was also about whether the Free Basics programme was actually going to be helpful for those it set out to support.
This paper examines Facebook’s Free Basics programme and its perceived role in bridging digital divides in the context of India, where it was widely debated, criticised, and finally banned in a ruling from the Telecom Regulatory Authority of India (TRAI). While the debate on the Free Basics programme has largely revolved around the principles of network neutrality, this paper examines it from an ICT4D perspective, embedding the discussion in key development paradigms.
This essay begins by introducing the Free Basics programme in India and the associated proceedings, after which existing literature is reviewed to explore the concept of development and the perceived role of ICT in development, thus laying out the scope of the discussion. The essay then examines whether the Free Basics programme reduces or reinforces digital inequality by looking at three development paradigms: (1) the construction of knowledge, power structures, and virtual colonisation in the Free Basics programme; (2) a sub-internet of the marginalised: second-level digital divides; and (3) the capabilities approach and the premise of connectivity as a source of equality and freedom.
The essay concludes with the view that digital access should be seen as a subset of overall contextual development, rather than as programmes unto themselves or as purely techno-solutionist approaches. Effective needs identification is required as part of ICT4D research, to locate users at the centre rather than at the periphery of the discussion. Lastly, policymakers should look into addressing more basic concerns, such as access and connectivity, and not just solutions that can be claimed as “quick wins” in policy implementation.
Mapping the Legal and Regulatory Frameworks of the Ad-Tech Ecosystem in India
In this paper, we try to map the legal and regulatory framework dealing with Advertising Technology (Adtech) in India as well as a few other leading jurisdictions. Our analysis is divided into three main parts, the first being general consumer regulations, which apply to all advertising irrespective of the media – to ensure that advertisements are not false or misleading and do not violate any laws of the country. This part also covers the consumer laws which are specific to malpractices in the technology sector such as Dark Patterns, Influencer based advertising, etc.
The second part of the paper covers data protection laws in India and their relevance to the Adtech industry. The Adtech industry requires, and is based on, the collection and processing of large amounts of user data. It is therefore important to discuss the data protection and consent requirements laid out in the spate of recent data protection regulations, which have the potential to severely impact the Adtech industry.
The last part of the paper covers the competition angle of the Adtech industry. As with social media intermediaries, the global Adtech industry is dominated by two or three players, and such a scenario lends itself easily to anti-competitive practices. It is therefore imperative to examine the competition law framework to see whether the laws as they exist are robust enough to deal with any anti-competitive practices that may be prevalent in the Adtech sector.
The research was reviewed by Pallavi Bedi and can be accessed here.