The Centre for Internet and Society’s comments and recommendations on the Digital Personal Data Protection Bill, 2022
High Level Comments
1. The rationale for removing the distinction between personal data and sensitive personal data is unclear.
All the earlier iterations of the Bill, as well as the rules made under Section 43A of the Information Technology Act, 2000,[1] classified data into two categories: (i) personal data; and (ii) sensitive personal data. The 2022 version of the Bill removes this distinction and places all personal data under the single umbrella heading of personal data. The rationale for this is unclear, as sensitive personal data refers to data that could reveal or relate to eminently private matters such as financial data, health data, sexual orientation and biometric data. Given its sensitive nature, data classified as sensitive personal data was accorded higher protection and safeguards in processing. By clubbing all data together as personal data, these higher protections, such as the requirement of explicit consent for the processing of sensitive personal data and the bar on processing sensitive personal data for employment purposes, have also been removed.
2. No clear roadmap for the implementation of the Bill
The 2018 Bill had specified a roadmap for the different provisions of the Bill to come into effect from the date of the Act being notified.[2] It specifically stated the time period within which the Authority had to be established and the subsequent rules and regulations notified.
The present Bill does not specify any such blueprint; it provides no details on when the Bill will be notified or the time period within which the Board shall be established and specific Rules and regulations notified. Considering that certain provisions have been deferred to Rules to be framed by the Central Government, the absence or delayed notification of such rules and regulations will impact the effective functioning of the Bill. Provisions such as Section 10(1), which deals with verifiable parental consent for the data of children, Section 13(1), which states the manner in which a Data Principal can initiate a right to correction, and the process for the selection and functioning of consent managers under Clause 3(7) are a few such examples. When the Act becomes applicable, the Data Principal will have to wait for the Rules to give effect to these provisions, or to get clarity on the entities created by the Act.
The absence of any sunrise or sunset provision may disincentivise political or industrial will to support or enforce the provisions of the Bill. An example of such a lack of political will was the Cyber Appellate Tribunal. The tribunal was established in 2006 to redress cyber fraud. However, it was virtually defunct from 2011 onwards, when its last chairperson retired. It was eventually merged with the Telecom Disputes Settlement and Appellate Tribunal in 2017.
We recommend that the Bill clearly lay out a time period for the implementation of its different provisions, especially a time frame for the establishment of the Board. This is important to give full effect to the right to privacy of the individual. It is also important to ensure that individuals have an effective mechanism to enforce the right and seek recourse in case of any breach of obligations by data fiduciaries.
The Board must ensure that Data Principals and Data Fiduciaries have sufficient awareness of the provisions of this Bill before the penalty provisions are brought into force. This will allow Data Fiduciaries to align their practices with the new legislation and give the Board time to define and determine the provisions that the Bill has left to it. Additionally, penalties for offences should initially be enforced in a staggered manner, combined with measures such as warnings, so that first-time and mistaken offenders, which could now include Data Principals as well, are spared from paying a high price. This would also allay the fears of smaller companies, startups and individuals who might otherwise hesitate to process data for fear of incurring penalties.
3. Independence of Data Protection Board of India.
The Bill proposes the creation of the Data Protection Board of India (Board) in place of the Data Protection Authority. Compared with the powers of the Authority under the 2018 and 2019 versions of the Personal Data Protection Bill, this Bill curtails the powers of the body to be created. Under Clause 19(2), the strength and composition of the Board, the process of selection, the terms and conditions of appointment and service, and the removal of its Chairperson and other Members shall be such as may be prescribed by the Union Government at a later stage. Further, as per Clause 19(3), the Chief Executive of the Board will be appointed by the Union Government, and the terms and conditions of her service will also be determined by the Union Government. The functions of the Board have also not been specified under the Bill; the Central Government may assign the functions to be performed by the Board.
In order to govern data protection effectively, there is a need for a responsive market regulator with a strong mandate, the ability to act swiftly, and adequate resources. The political nature of personal data also requires that the governance of data, particularly the rule-making and adjudicatory functions performed by the Board, be independent of the Executive.
Chapter Wise Comments and Recommendations
CHAPTER I- PRELIMINARY
● Definitions: While the Bill adds a few new definitions, including terms such as gain, loss and consent manager, a few key definitions have been removed from the earlier versions of the Bill. The removal of definitions such as sensitive personal data, health data, biometric data and transgender status creates legal uncertainty about the application of the Bill.
Among the existing definitions, the definition of the term ‘harm’ has been significantly narrowed, removing harms such as surveillance from its ambit. Further, the 2019 version of the Bill, under Clause 2(20), provided a non-exhaustive list of harms by using the phrase “harm includes”; in the new definition the phrase has been altered to ‘“harm”, in relation to a Data Principal, means’, thereby excluding harms that are not currently apparent from the purview of the Act. We recommend that the definition of harm be made a non-exhaustive list.
CHAPTER II - OBLIGATIONS OF DATA FIDUCIARY
Notice: The revised clause on notice does away with the comprehensive requirements laid out under Clause 7 of the PDP Bill 2019. The current clause does not specify in detail what the notice should contain, stating only that the notice should be itemised. While it can be reasoned that the Data Fiduciary can find the contents of the notice throughout the Bill, such as in the provisions on the rights of the Data Principal, the removal of a detailed list could create uncertainty for Data Fiduciaries. Leaving out the finer details of what a notice should contain could lead Data Fiduciaries to miss key information, which in turn provides incomplete information to the Data Principal. Data Fiduciaries might also not know whether they are complying with the provisions of the Bill, and could invariably end up being penalised. In addition, by requiring less of the Data Fiduciary and Data Processor, the burden falls on the Data Principal to ensure they know how their data is collected and processed. The purpose of this legislation is to create further rights for individuals and consumers; hence the Bill should strive to put the individual at the forefront.
In addition, Clause 6(3) of the Bill states: “The Data Fiduciary shall give the Data Principal the option to access the information referred to in sub-sections (1) and (2) in English or any language specified in the Eighth Schedule to the Constitution of India.” While the inclusion of regional language notices is a welcome step, we suggest that the text be revised as follows: “The Data Fiduciary shall give the Data Principal the option to access the information referred to in sub-sections (1) and (2) in English and in any language specified in the Eighth Schedule to the Constitution of India.” The main purpose of a notice is to inform the person before they give consent; a notice in a language that a person cannot read would not lead to meaningful consent.
Consent
Clause 7(3) of the Bill states that the “request for consent would have the contact details of a Data Protection Officer, where applicable, or of any other person authorised by the Data Fiduciary to respond to any communication from the Data Principal for the purpose of exercise of her rights under the provisions of this Act.” Ideally, this requirement should form part of the notice and should be mentioned in the clause discussed above. It is similar to Clause 7(1)(c) of the draft Personal Data Protection Bill 2019, which required the notice to state “the identity and contact details of the data fiduciary and the contact details of the data protection officer, if applicable;”.
Deemed Consent
The Bill introduces a new type of consent that was absent in the earlier versions of the Bill. We understand deemed consent to be a redefinition of the non-consensual processing of personal data. The use of the term deemed consent and the provisions under this section, while more concise than the earlier versions, could create more confusion for Data Principals and Fiduciaries alike. The definition and the examples do not shed light on one of the key issues with voluntary consent: the absence of notice. In addition, the Bill is silent on whether deemed consent can be withdrawn and on whether the Data Principal has the same rights as those that flow from the processing of data they have expressly consented to.
Personal Data Protection of Children
The age at which a person can legally consent in the online world has been tied to the age of consent under the Indian Contract Act, i.e. 18 years. The Bill makes no distinction between a 5-year-old and a 17-year-old; both are treated in the same manner, and it assumes the same level of maturity for all persons under the age of 18. It is pertinent to note that the law in the offline world does recognise this distinction and acknowledges changing levels of maturity. As per Section 82 of the Indian Penal Code read with Section 83, any act by a child under the age of 12 shall not be considered an offence, while the maturity of those aged between 12 and 18 years is to be assessed by the court (individuals between 16 and 18 years can also be tried as adults for heinous crimes). Similarly, child labour laws in the country allow children above the age of 14 years to work in non-hazardous industries.
There is a need to evaluate and rethink the idea that children are passive consumers of the internet and that the consent of the parent is therefore enough. Additionally, bracketing all individuals under the age of 18 as children fails to consider how teenagers and young people use the internet. This is all the more important given 2019 data suggesting that two-thirds of India’s internet users are in the 12–29 years age group, with those in the 12–19 age group accounting for about 21.5% of total internet usage in metro cities. Given that the pandemic compelled students and schools to adopt and adapt to virtual schooling, reliance on the internet has become ubiquitous in education. Out of an estimated 504 million internet users, nearly one-third are aged under 19. As per the Annual Status of Education Report (ASER) 2020, more than one-third of all schoolchildren are pursuing digital education, either through online classes or recorded videos.
Instead of setting a blanket age for determining valid consent, we could look at alternative means of determining the appropriate age for children at different levels of maturity, similar to the approach developed by the U.K. Information Commissioner’s Office. The Age Appropriate Design Code prescribes 15 standards that online services need to follow. It broadly applies to online services "provided for remuneration", including those supported by online advertising, that process the personal data of and are "likely to be accessed" by children under 18 years of age, even if those services are not targeted at children. This includes apps, search engines, social media platforms, online games and marketplaces, news or educational websites, content streaming services and online messaging services.
Reservations about the definition of a child under the Bill have also been expressed by some members of the JPC in their dissenting opinions. MP Ritesh Pandey stated that, keeping in mind the best interests of the child, the Bill should consider a child to be a person who is less than 14 years of age. This would ensure that young people could benefit from advances in technology without parental consent and reduce the social barriers that young women face in accessing the internet. Similarly, Manish Tiwari in his dissenting note observed that the regulation of the processing of children’s data should be based on the type of content or data. The JPC Report observed that the Bill does not require the data fiduciary to take fresh consent from the child once the child has attained the age of majority, and that it also does not give the child the option to withdraw their consent upon reaching the age of majority. It therefore made the following recommendations:
- Registration of data fiduciaries exclusively dealing with children’s data.
- Application of the Majority Act to contracts with a child.
- An obligation on the data fiduciary to inform a child to provide fresh consent three months before the child attains majority.
- Continuation of services until the child opts out or gives fresh consent upon attaining majority.
However, these recommendations have not been incorporated into the provisions of the Bill. In addition, the Bill is silent on the status of non-consensual processing and deemed consent with respect to the data of children.
We recommend that fiduciaries whose services are targeted at children should be considered Significant Data Fiduciaries. In addition, the Bill should also state that guardians can approach the Data Protection Board on behalf of the child. With these obligations in place, the age of mandatory consent could be reduced, and the data fiduciary could have the added responsibility of informing children, in the simplest manner, how their data will be used. Such an approach places responsibility on Data Fiduciaries when implementing services that will be used by children and allows children to be aware of data processing when they interact with technology.
Chapter III-RIGHTS AND DUTIES OF DATA PRINCIPAL
Rights of Data Principal
Clause 12(3) of the Bill, while providing the Data Principal the right to be informed of the identities of all Data Fiduciaries with whom their personal data has been shared, also states that the Data Principal has the right to be informed of the categories of personal data shared. However, the current version of the Bill provides for only one category of data, namely personal data.
Clause 14 of the Bill deals with the right to grievance redressal and states that the Data Principal has the right to readily available means of registering a grievance; however, the notice provisions of the Bill do not require the Data Fiduciary to mention the details of a grievance officer or a grievance redressal mechanism. It is only in the additional obligations on Significant Data Fiduciaries that the Bill requires a Data Protection Officer to act as the point of contact for the grievance redressal mechanism under the provisions of this Bill. The Bill could ideally reuse the provisions of the IT Act SPDI Rules 2011, in which Rule 5(7) states: “Body corporate shall address any discrepancies and grievances of their provider of the information with respect to processing of information in a time bound manner. For this purpose, the body corporate shall designate a Grievance Officer and publish his name and contact details on its website. The Grievance Officer shall redress the grievances of provider of information expeditiously but within one month from the date of receipt of grievance.”
The above framing would not only bring clarity to data fiduciaries on what process to follow for grievance redressal, it would also significantly reduce the burden on the Board.
Duties of Data Principals
The Bill, while listing the duties of the Data Principal, states that the “Data Principal shall not register a false or frivolous grievance or complaint with a Data Fiduciary or the Board”; however, it is very difficult for a Data Principal, and even for the Board, to determine what constitutes a “frivolous grievance”. In addition, the absence of a defined notice provision and the inclusion of deemed consent mean that the Data Fiduciary could have more information about the matter than the Data Principal, and could thus more easily establish that a claim was false or frivolous. Clause 21(12) states that “At any stage after receipt of a complaint, if the Board determines that the complaint is devoid of merit, it may issue a warning or impose costs on the complainant.” Further, Clause 25(1) states that “If the Board determines on conclusion of an inquiry that non-compliance by a person is significant, it may, after giving the person a reasonable opportunity of being heard, impose such financial penalty as specified in Schedule 1, not exceeding rupees five hundred crore in each instance.” The use of the term “person” in this case includes Data Principals, which means that they could be penalised under the provisions of the Bill, including for non-compliance with their duties.
CHAPTER IV- SPECIAL PROVISIONS
Transfer of Personal Data outside India
Clause 17 of the Bill removes the data localisation requirements that the 2018 and 2019 Bills contained. Personal data can be transferred to countries that will be notified by the Central Government. There is no requirement for a copy of the data to be stored locally and no prohibition on transferring sensitive personal data and critical data. Though it is a welcome change that personal data can be transferred outside India, we would highlight the concerns in permitting unrestricted access to and transfer of all types of data. Certain data, such as defence and health data, do require sectoral regulation and ring-fencing of transfers.
Exemptions
Clause 18 of the Bill widens the scope of government exemptions. A blanket exemption has been given to the State under Clause 18(4) from deleting personal data even when the purpose for which the data was collected is no longer served or when retention is no longer necessary. The requirements of proportionality, reasonableness and fairness have been removed for the Central Government to exempt any department or instrumentality from the ambit of the Bill. By doing away with the four-pronged test, this provision is not in consonance with the test laid down by the Supreme Court and is also incompatible with effective privacy regulation. There is also no provision for either prior judicial review of the order by a district judge, as envisaged by the Justice Srikrishna Committee Report, or post facto review of the order by an oversight committee, as laid down under the Indian Telegraph Rules, 1951[3] and the rules framed under the Information Technology Act[4]. The provision merely states that such processing of personal data shall be subject to the procedure, safeguards and oversight mechanisms that may be prescribed.
[1] Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011.
[2] Clause 97 of the 2018 Bill states: "(1) For the purposes of this Chapter, the term ‘notified date’ refers to the date notified by the Central Government under sub-section (3) of section 1. (2) The notified date shall be any date within twelve months from the date of enactment of this Act. (3) The following provisions shall come into force on the notified date- (a) Chapter X; (b) Section 107; and (c) Section 108. (4) The Central Government shall, no later than three months from the notified date establish the Authority. (5) The Authority shall, no later than twelve months from the notified date notify the grounds of processing of personal data in respect of the activities listed in sub-section (2) of section 17. (6) The Authority shall, no later than twelve months from the notified date issue codes of practice on the following matters- (a) notice under section 8; (b) data quality under section 9; (c) storage limitation under section 10; (d) processing of personal data under Chapter III; (e) processing of sensitive personal data under Chapter IV; (f) security safeguards under section 31; (g) research purposes under section 45; (h) exercise of data principal rights under Chapter VI; (i) methods of de-identification and anonymisation; (j) transparency and accountability measures under Chapter VII. (7) Section 40 shall come into force on such date as is notified by the Central Government for the purpose of that section. (8) The remaining provision of the Act shall come into force eighteen months from the notified date."
[3] Rule 419A (16): The Central Government or the State Government shall constitute a Review Committee.
Rule 419 A(17): The Review Committee shall meet at least once in two months and record its findings whether the directions issued under sub-rule (1) are in accordance with the provisions of sub-section (2) of Section 5 of the said Act. When the Review Committee is of the opinion that the directions are not in accordance with the provisions referred to above it may set aside the directions and orders for destruction of the copies of the intercepted message or class of messages.
[4] Rule 22 of Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009: The Review Committee shall meet at least once in two months and record its findings whether the directions issued under rule 3 are in accordance with the provisions of sub-section (2) of section 69 of the Act and where the Review Committee is of the opinion that the directions are not in accordance with the provisions referred to above, it may set aside the directions and issue an order for destruction of the copies, including corresponding electronic record of the intercepted or monitored or decrypted information.
Comments to the proposed amendments to The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
Preliminary
In these comments, we examine the constitutional validity of the proposed amendments, as well as whether the language of the amendments provides sufficient clarity for its intended recipients. This commentary is in line with CIS’s previous engagement with other iterations of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
General Comments
Ultra vires the parent act
Section 79(1) of the Information Technology (IT) Act states that the intermediary will not be held liable for any third-party information if the intermediary complies with the conditions laid out in Section 79(2). One of these conditions is that the intermediary observe “due diligence while discharging his duties under this Act and also observe such other guidelines as the Central Government may prescribe in this behalf.” Further, Section 87(2)(zg) empowers the central government to prescribe “guidelines to be observed by the intermediaries under sub-section (2) of section 79.”
A combined reading of Section 79(2) and Section 87(2)(zg) makes it clear that the power of the Central Government is limited to prescribing guidelines related to the due diligence to be observed by intermediaries while discharging their duties under the IT Act. However, the proposed amendments extend the original scope of the provisions within the IT Act.
In particular, the IT Act does not prescribe any classification of intermediaries. Section 2(1)(w) of the Act defines an intermediary, “with respect to any particular electronic records”, as “any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes”. Intermediaries are thus treated and regarded as a single monolithic entity with the same responsibilities and obligations.
The proposed amendments have now established a new category of intermediaries, namely online gaming intermediary. This classification comes with additional obligations, codified within Rule 4A of the proposed amendments, including enabling the verification of user-identity and setting up grievance redressal mechanisms. The additional obligations placed on online gaming intermediaries find no basis in the IT Act, which does not specify or demarcate between different categories of intermediaries.
The 2021 Rules have been prescribed under Section 87(1) and Sections 87(2)(z) and (zg) of the IT Act. These provisions do not empower the Central Government to amend Section 2(1)(w) or to create any classification of intermediaries. As the Supreme Court held in State of Karnataka and Another v. Ganesh Kamath & Ors: “It is a well settled principle of interpretation of statutes that conferment of rule making power by an Act does not enable the rule making authority to make a rule which travels beyond the scope of the enabling Act or which is inconsistent therewith or repugnant thereto.” In this light, we argue that the proposed amendments cannot go beyond the parent Act or prescribe policies in the absence of any law or regulation authorising them to do so.
Recommendation
We recommend that any regulatory intervention seeking to classify intermediaries and prescribe regulations tailored to the unique nature of particular intermediaries should happen through an amendment to the parent Act. Such an amendment should prescribe the additional responsibilities and obligations of online gaming intermediaries.
A note on the following sections
Since the legality of classifying intermediaries into further categories is in question, we recommend that our subsequent comments on the language of the provisions relating to online gaming intermediaries be taken into account when formulating any new legislation relating to these entities.
Specific comments
Fact checking amendment
Amendment to Rule 3(1)(b)(v) states that intermediaries are obligated to ask their users to not host any content that is, inter alia, “identified as fake or false by the fact check unit at the Press Information Bureau of the Ministry of Information and Broadcasting or other agency authorised by the Central Government for fact checking”.
Read together with Rule 3(1)(c), which gives intermediaries the prerogative to terminate user access to their resources on non-compliance with their rules and regulations, Rule 3(1)(b)(v) essentially affirms the intermediary’s right to remove content that the Central government deems to be ‘fake’. However, in the larger context of the intermediary liability framework of India, where intermediaries found to be not complying with the legal framework of section 79 lose their immunity, provisions such as Rule 3(1)(b)(v) compel intermediaries to actively censor content, on the apprehension of legal sanctions.
In this light, we argue that Rule 3(1)(b)(v) is constitutionally invalid, inasmuch as Article 19(2), which prescribes the grounds on which the government may restrict the right to free speech, does not permit restricting speech on the ground that it is ostensibly “fake or false”. In addition, the net effect of this rule would be that the government becomes the ultimate arbiter of what is considered ‘truth’, and every contradiction of this narrative would be deemed false. In a democratic system like India’s, this cannot be a tenable position, and it would go against a rich jurisprudence of constitutional history on the need for plurality.
For instance, in Indian Express Newspapers v Union of India, the Supreme Court held that ‘the freedom of the press rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.’ Applying this interpretation to the present case, it could be said that the government’s monopoly on deciding what constitutes “fake or false” in the online space would prevent citizens from accessing dissenting voices and counterpoints to government policies.
This is problematic when one considers that in the Indian context, freedom of speech and expression has always been valued for its instrumental role in ensuring a healthy democracy, and its power to influence public opinion. In the present case, the government, far from facilitating any such condition, is instead actively indulging in guardianship of the public mind (Sarkar et al, 2019).
Other provisions in the IT Act which permit censorship of content, such as Section 69A, allow the government to do so only when the content is relatable to the grounds enumerated in Article 19(2) of the Constitution. Further, in Shreya Singhal vs Union of India, where the constitutionality of Section 69A was challenged, the Supreme Court upheld the provision because of the legal safeguards inherent in it, including a hearing for the originator of the impugned content and the requirement that reasons for censoring content be recorded in writing.
In contrast, a fact check by the Press Information Bureau or by another authorised agency provides no such safeguards, and does not relate to any constitutionally recognized ground for restricting speech.
Recommendation
The proposed amendment to Rule 3(1)(b)(v) is unconstitutional, and should be removed from the final draft of the law.
Clarifications are needed for online games rules definitions
The definitions of an "online game" and "online gaming intermediary" are currently extremely unclear and require further clarification.
As the proposed amendments stand, online games are characterised by the user's “deposit with the expectation of earning winnings”. Both the deposit and the winnings can be in “cash” or “in kind", which does not adequately draw a boundary around the type of games this amendment seeks to cover. Can the time invested by a player in playing a game be covered under the “in kind” definition of a deposit? If a game provides a virtual in-game currency that can be exchanged for internal power-ups, even if no cash or gift cards are used as a payout, is that considered an “in kind” winning? The rules, as currently drafted, are vague in their reference to “in kind” deposits and payouts.
The definition of online games also does not differentiate between single-player, multiplayer and traditional games that have found an audience online, such as Candy Crush (single player), Minecraft (multiplayer collaborative) or chess (traditional). It is unclear whether these games were intended to fall within the purview of these amendments to the rules, and whether they are all subject to the same due diligence requirements as pay-to-play games. This, in conjunction with the proposed Rule 6A, which allows the Ministry to term any other game an online game for the purposes of the rules, also provides the Ministry with broad, unpredictable powers. This ambiguity hinders clear comprehension of the expectations among the target stakeholders, thus affecting the consistency and predictability of the implementation of the rules.
Similarly, "online gaming intermediaries" are also defined very broadly as "intermediary that offers one or more than one online game". As defined, any intermediary that even hosts a link to a game is classified as an online gaming intermediary since the game is now "offered" through the intermediary. As drafted, there does not seem to be a material distinction between an "intermediary" as defined by the act and "online gaming intermediary" as specified by these rules.
Recommendation
We recommend further clarification on the definitions of these terms, especially for “in kind” and “offers” which are currently extremely vague terms that provide overbroad powers to the Ministry.
Intermediaries and Games
"Online gaming intermediaries" are defined very broadly as "intermediary that offers one or more than one online game". Intermediaries are defined in the Act as "any person who on behalf of another person receives, stores or transmits that message or provides any service with respect to that message".
According to media coverage (Barik, 2023) of these amendments, it appears that there is an effort to classify gaming companies as "online gaming intermediaries", but the language of the drafted amendments does not support this. "Intermediary" status is conferred on a company on account of its functional role in primarily offering third-party content. It is not a classification for different types of internet companies, and thus must not be used to make rules for entities that do not perform this function.
Not all gaming companies present a collection of games for their users to play. According to the drafted definition, multiple kinds of platforms where games might be present, such as an app store where game developers publish their games for access by users, a website that lists links to online games, a social media platform that acts as an intermediary between two users exchanging links to games, and a website that hosts games for users to access directly, may all be classified as an "online gaming intermediary" since they "offer" games to users. This is a rather broad range of companies and functions to be singularly classified as an "online gaming intermediary".
Recommendation
We recommend a thoroughly researched legislative solution to regulating gaming companies that operate online, rather than regulation through amendments to the intermediary rules. If some companies are indeed to be classified as “online gaming intermediaries”, further reasoning is needed on which types of gaming companies and which of their functions are intermediary functions for the purposes of these Rules.
Comments can be downloaded here
Civil Society’s second opinion on a UHI prescription
The article originally published by Internet Freedom Foundation can be accessed here.
The National Health Authority (NHA) released the Consultation Paper on Operationalising Unified Health Interface (UHI) in India on December 14, 2022. The deadline for submission of comments was January 13, 2023. We collaborated with the Centre for Health Equity, Law & Policy, the Centre for Internet & Society, & the Forum for Medical Ethics Society to submit comments on the paper.
Background
The UHI is proposed to be a “foundational layer of the Ayushman Bharat Digital Health Mission (ABDM)” and is “envisioned to enable interoperability of health services in India through open protocols”. The ABDM, previously known as the National Digital Health Mission, was announced by the Prime Minister on the 74th Independence Day, and it envisages the creation of a National Digital Health Ecosystem with six key features: Health ID, Digi Doctor, Health Facility Registry, Personal Health Records, Telemedicine, and e-Pharmacy. After launching the programme in six Union Territories, the National Health Authority issued a press release on August 26, 2020 announcing the public consultation for the Draft Health Data Management Policy for NDHM. While the government has repeatedly claimed that creation of a health ID is purely voluntary, contrary reports have emerged. In our comments as part of the public consultation, our primary recommendation was that deployment of any digital health ID programme must be preceded by the enactment of general and sectoral data protection laws by the Parliament of India; and meaningful public consultation which reaches out to vulnerable groups which face the greatest privacy risks.
As per the synopsis document which accompanies the consultation paper, it aims to “seek feedback on how different elements of UHI should function. Inviting public feedback will allow for early course correction, which will in-turn engender trust in the network and enhance market adoption. The feedback received through this consultation will be used to refine the functionalities of UHI so as to limit any operational issues going forward.” The consultation paper contains a set of close-ended questions at the end of each section through which specific feedback has been invited from interested stakeholders. We have collaborated with the Centre for Health Equity, Law & Policy, the Centre for Internet & Society, & the Forum for Medical Ethics Society to draft the comments on this consultation paper.
Our main concern relates to the approach adopted by the Government of India and the concerned Ministries of drafting a consultation paper without explicitly outlining how the proposed UHI fits into the broader healthcare ecosystem or quantifying how it improves it, which renders the consultation paper and public engagement efforts inadequate. It also does not allow the public at large and other stakeholders to understand how the UHI may contribute to people’s access to quality care towards ensuring the realisation of their constitutional right to health and health care. The close-ended nature of the consultation process, wherein specific questions have been posed, restricts stakeholders from questioning the structure of the ABDM itself and forces us to engage with its parts, thereby incorrectly assuming that there is support for the direction in which the ABDM is being developed.
Our submissions
A. General comments
a. Absence of underlying legal framework
Ensuring health data privacy requires legislation at three levels: comprehensive laws, sectoral laws and informal rules. Here, the existing proposal for data protection legislation, i.e. the draft Digital Personal Data Protection Bill, 2022 (DPDPB, 2022), which could act as the comprehensive legal framework, is inadequate to sufficiently protect health data. This inadequacy arises from the failure of the DPDPB, 2022 to give a higher degree of protection to sensitive personal data, and from its allowing non-consensual processing of health data in certain situations under Clause 8, which relates to “deemed consent”. It may also be noted that the DPDPB, 2022 fails to specifically define either health or health data. Further, the proposed Digital Information Security in Healthcare Act, 2017, which could have acted as a sectoral law, has not progressed beyond public consultation and has not been enacted. In the absence of these safeguards, health data may be captured by health insurance firms, leading to exclusion or higher costs for vulnerable groups of people. Similarly, such data capture by other third parties potentially allows commercial interests to creep in at the cost of users of health care services and in breach of their privacy and dignity.
b. Issues pertaining to scope
Clarity is needed on whether the UHI will only provide healthcare services through private entities, or whether it will also include the public health care system and the various health care schemes and programmes of the government, such as eSanjeevani.
c. Pre-existing concerns
- Exclusion: Access to health services through the Unified Health Interface should not be made contingent upon possessing an ABHA ID, as alluded to in the section on ‘UHI protocols in action: An example’ under Chapter 2(b). Such an approach is contrary to the Health Data Management Policy, which is based on individual autonomy and voluntary participation. Clause 16.4 of the Policy clearly states that nobody will “be denied access to any health facility or service or any other right in any manner by any government or private entity, merely by reason of not creating a Health ID or disclosing their Health ID…or for not being in possession of a Health ID.” Moreover, the National Medical Commission Guidelines for Telemedicine in India also do not create any obligation for the patient to possess an ABHA ID in order to access any telehealth service. The UHI should explicitly state that a patient can log in to the network using any identification and not just ABHA.
- Consent: As per media reports, registration for a UHID under the NDHM, which is an earlier version of the ABHA number under the ABDM, may have been voluntary on paper but it was being made mandatory in practice by hospital administrators and heads of departments. Similarly, reports suggest that people who received vaccination against COVID-19 were assigned a UHID number without their consent or knowledge.
- Function creep: In the absence of an underlying legal framework, concerns also arise that the health data under the NDHM scheme may suffer from function creep, i.e., the collected data being used for purposes other than for which consent has been obtained. These concerns arise due to similar function creep taking place in the context of data collected by the Aarogya Setu application, which has now pivoted from being a contact-tracing application to “health app of the nation”. Here, it must be noted that as per a RTI response dated June 8, 2022 from NIC, the Aarogya Setu Data Access And Knowledge Sharing Protocol “has been discontinued".
- Issues with the Unified Payments Interface may be replicated by the UHI: The consultation paper cites the Unified Payments Interface (UPI) as “strong public digital infrastructure” which the UHI aims to leverage. However, a trend towards market concentration can be witnessed in UPI: the two largest entities, Google Pay and PhonePe, have seen their market share hover around 35% and 47% (by volume) for some time now (their share by value transacted is even higher). Meanwhile, the share of the NPCI’s own app (BHIM) has fallen from 40% in August 2017 to 0.74% in September 2021. Thus, if such a model is to be adopted, it is important to study the UPI model to understand such threats and ensure that a similar trend towards oligopoly or monopoly formation in UHI is addressed. This is all the more important in a country in which the decreasing share of the public health sector has led to skyrocketing healthcare costs for citizens.
B. Our response also addressed specific questions about search and discovery, service booking, grievance redressal, and fake reviews and scores. Our responses on these questions can be found in our comments here.
Our previous submissions on health data
We have consistently engaged with the government since the announcement of the NDHM in 2020. Some of our submissions and other outputs are linked below:
- IFF’s comment on the Draft Health Data Management Policy dated May 21, 2022 (link)
- IFF’s comments on the consultation Paper on Healthcare Professionals Registry dated July 20, 2021 (link)
- IFF and C-HELP Working Paper: ‘Analysing the NDHM Health Data Management Policy’ dated June 11, 2021 (link)
- IFF’s Consultation Response to Draft Health Data Retention Policy dated January 6, 2021 (link)
- IFF’s comments on the National Digital Health Mission’s Health Data Management Policy dated September 21, 2020 (link)
Important documents
- Response on the Consultation Paper on Operationalising Unified Health Interface (UHI) in India by Centre for Health Equity, Law & Policy, the Centre for Internet & Society, the Forum for Medical Ethics Society, & IFF dated January 13, 2023 (link)
- NHA’s Consultation Paper on Operationalising Unified Health Interface (UHI) in India dated December 14, 2022 (link)
- Synopsis of NHA’s Consultation Paper on Operationalising Unified Health Interface (UHI) in India dated December 14, 2022 (link)
CensorWatch: On the Implementation of Online Censorship in India
Abstract: State authorities in India order domestic internet service providers (ISPs) to block access to websites and services. We developed a mobile application, CensorWatch, that runs network tests to study inconsistencies in how ISPs conduct censorship. We analyse the censorship of 10,372 sites, with measurements collected across 71 networks from 25 states in the country. We find that ISPs in India rely on different methods of censorship with larger ISPs utilizing methods that are harder to circumvent. By comparing blocklists and contextualising them with specific legal orders, we find concrete evidence that ISPs in India are blocking different websites and engaging in arbitrary blocking, in violation of Indian law.
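To make the measurement approach concrete, the sketch below is a simplified, illustrative example in Python, written for this post and not taken from the CensorWatch codebase, of what a single client-side censorship test might look like: it probes one hostname for two commonly observed blocking methods, DNS tampering and HTTP blockpage injection. The hostname, blockpage markers and result labels are hypothetical placeholders.

```python
# Illustrative sketch only; not the actual CensorWatch implementation.
# It probes a single hostname for two common blocking methods:
# DNS tampering and HTTP blockpage injection.

import socket
import urllib.request

# Hypothetical markers; a real study compares responses against known
# blockpage fingerprints collected from individual ISPs.
BLOCKPAGE_MARKERS = [b"blocked", b"prohibited"]

def check_dns(hostname: str) -> str:
    """Resolve the hostname using the network's default resolver."""
    try:
        addr = socket.gethostbyname(hostname)
    except socket.gaierror:
        return "dns_failure"        # NXDOMAIN/SERVFAIL can indicate DNS blocking
    if addr in ("0.0.0.0", "127.0.0.1"):
        return "dns_poisoned"       # resolver returned an unroutable address
    return f"resolved:{addr}"

def check_http(hostname: str) -> str:
    """Fetch the page over plain HTTP and look for blockpage markers."""
    try:
        with urllib.request.urlopen(f"http://{hostname}/", timeout=10) as resp:
            body = resp.read(4096).lower()
    except Exception as exc:        # resets/timeouts can also indicate filtering
        return f"http_error:{type(exc).__name__}"
    if any(marker in body for marker in BLOCKPAGE_MARKERS):
        return "http_blockpage"
    return "http_ok"

if __name__ == "__main__":
    for site in ["example.com"]:    # a real test iterates over a curated test list
        print(site, check_dns(site), check_http(site))
```

A full measurement study would additionally compare results across ISPs and vantage points and test for TLS/SNI-based interference, which this sketch omits.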
The paper authored by Divyank Katira, Gurshabad Grover, Kushagra Singh and Varun Bansal appeared as part of the conference on Free and Open Communications on the Internet (FOCI '23) and can be accessed here.
The authors would like to thank Pooja Saxena and Akash Sheshadri for contributing to the visual design of Censorwatch; Aayush Rathi, Amber Sinha and Vipul Kharbanda for their valuable legal inputs; Internet Freedom Foundation for their support; ipinfo.io for providing free access to their data and services. The work was made possible because of research grants to the Centre for Internet and Society from the MacArthur Foundation, Article 19, the East-West Management Institute and the New Venture Fund. Gurshabad Grover’s contributions were supported by a research fellowship from the Open Tech Fund.
CoWIN Breach: What Makes India's Health Data an Easy Target for Bad Actors?
The article was originally published in the Quint on 19 June 2023.
Last week, it was reported that due to an alleged breach of the CoWIN platform, details such as Aadhaar and passport numbers of Indians were made public via a Telegram bot.
While the Minister of State for Information Technology, Rajeev Chandrasekhar, put out information acknowledging that there was some form of a data breach, there is no information on how the breach took place or when a past breach may have taken place.
This data leak is yet another example of our health records being exposed in the recent past – during the pandemic, there were reports of COVID-19 test results being leaked online. The leaked information included patients’ full names, dates of birth, testing dates, and names of centres in which the tests were held.
In December last year, five servers of the All India Institute of Medical Science (AIIMS) in Delhi were under a cyberattack, leaving sensitive personal data of around 3-4 crore patients compromised.
In such cases, the Indian Computer Emergency Response Team (CERT-In) is the agency responsible for looking into the vulnerabilities that may have led to them. However, till date, CERT-In has not made its technical findings into such attacks publicly available.
The COVID-19 Pandemic Created Opportunity
The pandemic saw a number of digitisation policies being rolled out in the health sector; the most notable one being the National Digital Health Mission (or NDHM, later re-branded as the Ayushman Bharat Digital Mission).
Mobile phone apps and web portals launched by the central and state governments during the pandemic are also examples of this health digitisation push. The rollout of the COVID-19 vaccinations also saw the deployment of the CoWIN platform.
Initially, it was mandatory for individuals to register on CoWIN to get an appointment for vaccination, with no option for walk-in registration or on-site appointment booking. But the Centre subsequently modified this rule, and walk-in appointments and on-site registrations became permissible from June 2021.
However, a study conducted by the Centre for Internet and Society (CIS) found that states such as Jharkhand and Chhattisgarh, which have low internet penetration, permitted on-site registration for vaccinations from the beginning.
The rollout of the NDHM also saw Health IDs being generated for citizens.
In several reported cases across states, this rollout happened during the COVID-19 vaccination process – without the informed consent of the concerned person.
The beneficiaries who have had their Health IDs created through the vaccination process had not been informed about the creation of such an ID or their right to opt out of the digital health ecosystem.
A Web of Health Data Policies
Even before the pandemic, India was working towards a Health ID and a health data management system.
The components of the umbrella National Digital Health Ecosystem (NDHE) are the National Digital Health Blueprint published in 2019 (NDHB) and the NDHM.
The Blueprint was created to implement the National Health Stack (published in 2018), which facilitated the creation of Health IDs, whereas the NDHM was drafted to drive the implementation of the Blueprint and to promote and facilitate the evolution of the NDHE.
The National Health Authority (NHA), established in 2018, has been given the responsibility of implementing the National Digital Health Mission.
2018 also saw the Digital Information Security in Healthcare Act (DISHA), which was to regulate the generation, collection, access, storage, transmission, and use of Digital Health Data ("DHD") and associated personal data.
However, since its call for public consultation, no progress has been made on this front.
In addition to documents that chalk out the functioning and the ecosystem of a digitised healthcare system, the NHA has released policy documents such as:
- the Health Data Management Policy (which was revised three times; the latest version released in April 2022)
- the Health Data Retention Policy (released in April 2021)
- the Consultation Paper on the Unified Health Interface (UHI) (released in December 2022)
Along with these policies, in 2022, the NHA released the NHA Data Sharing Guidelines for the Pradhan Mantri Jan Aarogya Yojana (PM-JAY) – India’s state health insurance policy.
However, these draft guidelines repeat the pattern of earlier policies on health data, wherein there is no reference to the policies that predated them; the PM-JAY Data Sharing Guidelines, published in August 2022, did not even refer to the draft National Digital Health Data Management Policy (published in April 2022).
Interestingly, the recent health data policies do not mention CoWIN. Failing to cross-reference or mention preceding policies creates a lack of clarity on which documents are being used as guidelines by healthcare providers.
Can a Data Protection Bill Be the Solution?
The draft Data Protection Bill, 2021, defined health data as “…the data related to the state of physical or mental health of the data principal and includes records regarding the past, present or future state of the health of such data principal, data collected in the course of registration for, or provision of health services, data associated with the data principal to the provision of specific health services.”
However, this definition as well as the definition of sensitive personal data was removed from the current version of the Bill (Digital Personal Data Protection Bill, 2022).
Omitting these definitions from the Bill removes a category of data which, if collected, warrants increased responsibility and liability. The handling of health data, financial data, government identifiers, etc, needs to come with a higher level of responsibility, as these are sensitive details of a person.
The threats posed by this data being leaked are not limited to spam messages, fraud and impersonation; they also extend to companies that can get their hands on this coveted data to gather insights and train their systems and algorithms, without needing to seek consent from anyone and without facing consequences for the harm caused.
While the current version of the draft DPDP Bill states that the data fiduciary shall notify the data principal of any breach, the draft Bill also states that the Data Protection Board “may” direct the data fiduciary to adopt measures that remedy the breach or mitigate harm caused to the data principal.
The Bill also prescribes penalties of up to Rs 250 crore if the data fiduciary fails to take reasonable security safeguards to prevent a personal data breach, and a penalty of up to Rs 200 crore if the fiduciary fails to notify the Data Protection Board and the data principal of such a breach.
While these steps, if implemented through legislation, would make organisations processing data take data security more seriously, the removal of sensitive personal data from the definitions in the Bill would mean that data fiduciaries processing health data will not have to take any additional steps beyond reasonable security safeguards.
The absence of a clear indication of security standards will affect data principals and fiduciaries.
Looking to bring more efficiency to governance systems, the Centre launched the Digital India Mission in 2015. The press release by the central government reporting the approval of the programme by the Cabinet of Ministers speaks of ‘cradle to grave’ digital identity as one of its vision areas.
The ambitious Universal Health ID and health data management policies are an example of this digitisation mission.
However, breaches like this are reminders that without proper data security measures, and without a designated person responsible for data security, data is always vulnerable to attack.
While the UK and Australia have also seen massive data breaches in the past, India is at the start of its health data digitisation journey and has the ability to set up strong security measures, employ experienced professionals, and establish legal resources to ensure that data breaches are minimised and swift action can be taken in case of a breach.
The first step to understanding the vulnerabilities would be to present the CERT-In reports on this breach and to guide other institutions to check for similar vulnerabilities, so that they are better prepared for future breaches and attacks.
Health Data Management Policies - Differences Between the EU and India
This issue brief was reviewed and edited by Pallavi Bedi
Introduction
Health data has seen increased interest the world over, on account of the amount of information and the inferences that can be drawn not just about a person but also about the population in general. The Covid-19 pandemic brought a further focus on health data, and required players that earlier did not collect health data, including offices and public spaces, to collect such data. This increased interest has led to further thought on how health data is regulated and a greater understanding of its sensitivity, because of which countries are at varying stages of regulating health data over and above their existing data protection regulations. These regulations look not only at ensuring the privacy of the individual but also at ways in which this data can be shared with companies, researchers and public bodies to foster innovation and to monetise this valuable data. For a number of countries, however, the effort is still focused on the digitisation of health data.

India has been in the process of implementing a nationwide health ID that a person can use to access all their medical records in one place. The National Health Authority (NHA) has also, since 2017, been publishing policies that look at the framework and ecosystem of health data, as well as the management and sharing of health data. However, these policies and a scattered implementation of the health ID are being carried out without a data protection legislation in place. In comparison, Europe, which already has an established health ID system and a data protection legislation (the GDPR), is looking at the next stage of health data management through the EU Health Data Space (EUHDS). Through this issue brief, we highlight the differences in the approaches to health data management taken by the EU and India, and look at possible recommendations for India in creating a privacy-preserving health data management policy.
Background
EU Health Data Space
The EU Health Data Space (EUHDS) was proposed by the European Commission as a way to create an ecosystem that combines rules, standards, practices and infrastructure around health data under a common governance framework. The EUHDS is set to rely on two pillars, namely MyHealth@EU and HealthData@EU: MyHealth@EU facilitates the easy flow of health data between patients and healthcare professionals within member states, while HealthData@EU facilitates the secondary use of data, giving policy makers and researchers access to health data to foster research and innovation.[1] The EUHDS aims to provide a trustworthy system to access and process health data and builds upon the General Data Protection Regulation (GDPR) and the proposed Data Governance Act.[2]
India’s health data policies:
The last few years has seen a flurry of health policies and documents being published and the creation of a framework for the evolution of a National Digital Health Ecosystem (NDHE). The components for this ecosystem were the National Digital Health Blueprint published in 2019 (NDHB) and the National Digital Health Mission (NDHM). The BluePrint was created to implement the National Health Stack (published in 2018) which facilitated the creation of Health IDs.[3] Whereas the NDHM was drafted to drive the implementation of the Blueprint, and promote and facilitate the evolution of NDHE.[4]
The National Health Authority (NHA) established in 2018 has been given the responsibility of implementing the National Digital Health Mission. 2018 also saw the Digital Information Security in Healthcare Act (DISHA) which was to be a legislation that laid down provisions that regulate the generation, collection, access, storage, transmission and use of Digital Health Data ("DHD") and associated personal data.[5] However since its call for public consultation no progress has been made on this front.
Along with these three strategy documents the NHA has also released policy documents more particularly the Health Data Management Policy (which was revised three times; the latest version released in April 2022), the Health Data Retention Policy (released April 2021), and the Consultation Paper on Unified Health Interface (UHI) (released March 2021). Along with this in 2022 the NHA released the NHA Data Sharing Guidelines for the Pradhan Mantri Jan Aarogya Yojana (PM-JAY) India’s state health insurance policy.
However these draft guidelines repeat the pattern of earlier policies on health data, wherein there is no reference to the policies that predated it; the PM-JAY’s Data Sharing Guidelines published in August 2022 did not even refer to the draft National Digital Health Data Management Policy (published in April 2022). As stated through the examples above these documents do not cross-refer or mention preceding health data documents, creating a lack of clarity of which documents are being used as guidelines by health care providers.
In addition to this the Personal Data Protection Bill has been revised three times since its release in 2018. The latest version was published for public comments on November 18, 2022; the Bill has removed the distinction between sensitive personal data and personal data and clubbed all personal data under one umbrella heading of personal data. Health and health data definition has also been deleted; creating further uncertainty with respect to health data as the different policies mentioned above rely on the data protection legislation to define health data.
Comparison of the Health Data Management Approaches
Interoperability with Data Protection Legislations
At the outset the key difference between the EU and India’s health data management policies has been the legal backing of GDPR which the EUHDS has. EUHDS has a strong base in terms of rules for privacy and data protection as it follows, draws inference and works in tandem with the General Data Protection Regulation (GDPR). The provisions also build upon legislation such as Medical Devices Regulation and the In Vitro Diagnostics Regulation. With particular respect to GDPR the EUHDS draws from the rights set out for protection of personal data including that of electronic health data.
The Indian Health data policies however currently exist in the vacuum created by the multiple versions of the Data Protection Bill that are published and repealed or replaced. The current version called the Digital Personal Data Protection Bill 2022 seems to take a step backward in terms of health data. The current version does away with sensitive personal data (which health data was a part of) and keeps only one category of data - personal data. It can be construed that the Bill currently considers all personal data as needing the same level of protection but it is not so in practice. The Bill does not at the moment mandate more responsibilities on data fiduciaries[6] that deal with health data (something that was present in all the earlier versions of the Bill) and in other data protection legislation across different jurisdictions and leaves the creation of Significant Data Fiduciaries (who have more responsibilities) to be created by rules, based on the sensitivity of data decided by the government at a later date.[7] In addition to this the Bill does not define “health data”, the reason why this is a cause for worry is that the existing health data policies also do not define health data often relying on the definition mentioned in the versions of Data Protection Bill.
Definitions and Scope
The EUHDS defines ‘personal electronic health data’ as data concerning health and genetic data as defined in Regulation (EU) 2016/679[8], as well as data referring to determinants of health, or data processed in relation to the provision of healthcare services, processed in an electronic form. Health data by these parameters would then include not just data about the status of health of a person which includes reports and diagnosis, but also data from medical devices.
In India the Health Data Management Policy 2022, defines “Personal Health Records” (PHR) as a health record that is initiated and maintained by an individual. The policy also states that a PHR would be able to reveal a complete and accurate summary of the health and medical history of an individual by gathering data from multiple sources and making this accessible online. However there is no definition of health data which can be used by companies or users to know what comes under health data. The 2018, 2019 and 2021 version of the Data Protection Legislation had definitions of the term health data, however the 2022 version of the Bill does away with the definition.
Health data and wearable devices
One of the forward looking provisions in the EUHDS is the inclusion of devices that records health data into this legislation. This also includes the requirement of them to be added to registries to provide easy access and scrutiny. The document also requires voluntary labeling of wellness applications and registration of EHR systems and wellness applications. This is not just for the regulation point of view but also in the case of data portability, in order for people to control the data they share. In addition to this in the case where manufacturers of medical devices and high-risk AI systems declare interoperability with the EHR systems, they will need to comply with the essential requirements on interoperability under the EHDS.
In India the health data management policy 2022 while stating the applicable entities and individuals who are part of the ABDM ecosystem[9] mention medical device manufacturers, does not mention device sellers or use terms such as wellness applications or wearable devices. Currently the regulation of medical devices falls under the purview of the Drugs and Cosmetics Act, 1940 (DCA) read along with the Medical Device Rules, 2017 (MDR). However in 2020 possibly due to the pandemic the Indian Government along with the Drugs Technical Advisory Board (DTAB) issued two notifications the first one expanded the scope of medical devices which earlier was limited to only 37 categories excluding medical apps, and second one notified the Medical Device (Amendment) Rules, 2020. These two changes together brought all medical devices under the DCA as well as expanded the categories of medical devices. However it is still unclear whether fitness tracker apps that come with devices are regulated, as the rules and the DCA still rely on the manufacturer to self-identify as a medical device.[10] However, this regulatory uncertainty has not brought about any change in how this data is being used and insurance companies at times encourage people to sync their fitness tracker data.[11]
Multiple use of health data
The EUHDS states two types of uses of data: primary and secondary use of data. In the document the EU states that while there are a number of organisations collecting data, this data is not made available for purposes other than for which it was collected. In order to ensure that researchers, innovators and policy makers can use this data. the EU encourages the data holders to contribute to this effort in making different categories of electronic health data they are holding available for secondary use. The data that can be used for secondary use would also include user generated data such as from devices, applications or other wearables and digital health applications.However, the regulation cautions against using this data for measures and making decisions that are detrimental to the individual, in ways such as increasing insurance premiums. The EUHDS also states that as the data is sensitive personal data care should be taken by the data access bodies, to ensure that while data is being shared it is necessary to ensure that the data will be processed in a privacy preserving manner. This could include through pseudonymisation, anonymisation, generalisation, suppression and randomisation of personal data.
While the document states how important it is to have secondary use of the data for public health, research and innovation it also requires that the data is not provided without adequate checks. The EUHDS requires the organisation seeking access to provide several pieces of information and be evaluated by the data access body. The information should include legitimate interest, the necessity and the process the data will go through. In the case where the organisation is seeking pseudonymised data, there is a need to explain why anonymous data would not be sufficient. In order to ensure a comprehensive approach between health data access bodies, the EUHDS states that the European Commission should support the harmonisation of data application, as well as data request.
In India, while multiple health data documents state the need to share data for public interest, research and innovation, not much thought has been given to ensuring that the data is not misused and that there is harmonisation between bodies that provide the data. Most recently the PMJay documents states that the NHA shall make aggregated and anonymised data available through a public dashboard for the purpose of facilitating health and clinical research, academic research, archiving, statistical analysis, policy formulation, the development and promotion of diagnostic solutions and such other purposes as may be specified by the NHA. Such data can be accessed through a request to the Data Sharing Committee[12] for the sharing of such information through secure modes, including clean rooms and other such secure modes specified by NHA. However the document does not mention what clean rooms are in this context.
The Health Data Management Policy 2022 states that Data fiduciaries (data controllers/ processors according to the data protection legislation) can themselves make anonymised or de-identified data in an aggregated form available based in technical processes and anonymisation protocols which may be specified by the NDHM in consultation with the MeitY. The purposes mentioned in this policy included health and clinical research, academic research, archiving, statistical analysis, policy formulation, the development and promotion of diagnostic solutions and such other purposes as may be specified by the NDHMP. The policy states that in order to access the anonymised or de-identified data the entity requesting the data would have to provide relevant information such as name, purpose of use and nodal person of contact details. While the policy does not go into details about the scrutiny of the organisations seeking this data, it does state that the data will be provided based on the term as may be stipulated.
However the issue arises as both the documents published by the NHA do not have a similar process for getting the data, for example the NDHMP requires the data fiduciary to share the data directly, while the PMJay guidelines requires the data to be shared by the Data Sharing Committee, creating duplicate datasets as well as affecting the quality of the data being shared.
Recommendations for India
Need for a data protection legislation:
While the EUHDS is still a draft document and the end result could be different based on the consultations and deliberations, the document has a strong base with respect to the privacy and data protection based on the earlier regulations and the GDPR. The definitions of what counts as health data, and the parameters for managing the data creates a more streamlined process for all stakeholders. More importantly the GDPR and other regulations provide a way of recourse for people. In India the health data related policies and strategy documents have been published and enforced before the data protection legislation is passed. In addition to this India, unlike the EU has just begun looking at a universal health ID and digitisation of the healthcare system, ideally it would be better to take each step at a time, and at first look at the issues that may arise due to the universal health ID. In addition to this, multiple policies, without a strong data protection legislation providing parameters and definitions could mean that the health data management policies only benefit certain people. This also creates uncertainty in terms of where an individual will go in case of harms caused by the processing of their data, and who would be the authority to govern questions around health data. The division of health data management between different documents also creates multiple silos of data management which creates data duplication and issues with data quality.
Secondary use of data
While both the EUHDS and India's Health Data Management Policy look at the sharing of health data with researchers and private organisations in order to foster innovation, the division of sharing of data based on who uses the data is a good way to ensure that only interested parties have access to the data. With respect to the health data policies in India, a number of policies talk about the sharing of anonymised data with researchers, however the documents being scattered could cause the same data to be shared by multiple health data entities, making it possible to identify people. For example, the health data management policy could share anonymised data of health services used by a person, whereas the PMJAY policy could share data about insurance covers, and the researcher could probably match the data and be closer to identifying people. It has also been revealed in multiple studies that anonymisation of data is not permanent and that the anonymisation can be broken. This is more concerning since the polices do not put limits or checks on who the researchers are and what is the end goal of the data sought by them, the policies seem to rely on the anonymisation of the data as the only check for privacy. This data could be used to de-anonymise people, could be used by companies working with the researchers to get large amounts of data to train their systems,
train data that could lead to greater surveillance, increase insurance scrutiny etc. The NHA and Indian health policy makers could look at the restrictions and checks that the EUHDS creates for the secondary use of data and create systems of checks and categories of researchers and organisations seeking data to ensure minimal risks to an individual’s data.
Conclusion
While the EU Health data space has been criticised for facilitating vast amounts of data with private companies and the collecting of data by governments, the codification of the legislation does in some way give some way to regulate the flow of health data. While India does not have to emulate the EU and have a similar document, it could look at the best practices and issues that are being highlighted with the EUHDS. Indian lawmakers have looked at the GDPR for guidance for the draft data protection legislation, similarly it could do so with regard to health data and health data management. One possible way to ensure both the free flow of health data and the safeguards of a regulation could be to re-introduce the DISHA Act which much like the EUHDS could act as a legislation which provides an anchor to the multiple health data policies, including standard definition of health data, grievance redressal bodies, and adjudicating authorities and their functions. In addition a legislation dedicated to the health data would also remove the existing burden on the to be formed data protection authority.
[1] “European Health Data Space”, European Commission, 03 May 2022,https://health.ec.europa.eu/ehealth-digital-health-and-care/european-health-data-space_en
[2]“European Health Data Space”
[3] “National Digital Health Blueprint”, Ministry of Health and Family Welfare Government of India, https://abdm.gov.in:8081/uploads/ndhb_1_56ec695bc8.pdf
[4] “National Digital Health Blueprint”
[5] “Mondaq” “DISHA – India's Probable Response To The Law On Protection Of Digital Health Data” accessed 13 June 2023,https://www.mondaq.com/india/healthcare/1059266/disha-india39s-probable-response-to-the-law-on-protection-of-digital-health-data
[6]“The Digital Personal Data Protection Bill 2022”, accessed 13 June 2023 , https://www.meity.gov.in/writereaddata/files/The%20Digital%20Personal%20Data%20Potection%20Bill%2C%202022_0.pdf
[7]The Digital Personal Data Protection Bill 2022
[8] Regulation (EU) 2016/679 defines health data as “Personal data concerning health should include all data pertaining to the health status of a data subject which reveal information relating to the past, current or future physical or mental health status of the data subject. This includes information about the natural person collected in the course of the registration for, or the provision of, health care services as referred to in Directive 2011/24/EU of the European Parliament and of the Council (1) to that natural person; a number, symbol or particular assigned to a natural person to uniquely identify the natural person for health purposes; information derived from the testing or examination of a body part or bodily substance, including from genetic data and biological samples; and any information on, for example, a disease, disability, disease risk, medical history, clinical treatment or the physiological or biomedical state of the data subject independent of its source, for example from a physician or other health professional, a hospital, a medical device or an in vitro diagnostic test.
[9] For creating an integrated, uniform and interoperable ecosystem in a patient or individual centric manner, all the government healthcare facilities and programs, in a gradual/phased manner, should start assigning the same number for providing any benefit to individuals.
[10] For example a manufacturer of a fitness tracker which is capable of monitoring heart rate could state that the intended purpose of the device was fitness or wellness as opposed to early detection of heart disease thereby not falling under the purview of the regulation.
[11]“Healthcare Executive” “GOQii Launches GOQii Smart Vital 2.0, an ECG-Enabled Smart Watch with Integrated Outcome based Health Insurance & Life Insurance, accessed 13 June 2023
https://www.healthcareexecutive.in/blog/ecg-enabled-smart-watch
[12] The guidelines only state that the Committee will be responsible for ensuring the compliance of the guidelines in relation to the personal data under its control. And does not go into details of defining the Committee.
Deceptive Design in Voice Interfaces: Impact on Inclusivity, Accessibility, and Privacy
The original blog post can be accessed here.
Introduction
Voice Interfaces (VIs) have come a long way in recent years and are easily available as inbuilt technology with smartphones, downloadable applications, or standalone devices. In line with growing mobile and internet connectivity, there is now an increasing interest in India in internet-based multilingual VIs which have the potential to enable people to access services that were earlier restricted by language (primarily English) and interface (text-based systems). This current interest has seen even global voice applications such as Google Home and Amazon’s Alexa being available in Hindi (Singal, 2019) as well as the growth of multilingual voice bots for certain banks, hotels, and hospitals (Mohandas, 2022).
The design of VIs can have a significant impact on the behavior of the people using them. Deceptive design patterns or design practices that trick people into taking actions they might otherwise not take (Tech Policy Design Lab, n.d.), have gradually become pervasive in most digital products and services. Their use in visual interfaces has been widely criticized by researchers (Narayanan, Mathur, Chetty, and Kshirsagar, 2020), along with recent policy interventions (Schroeder and Lützow-Holm Myrstad, 2022) as well. As VIs become more relevant and mainstream, it is critical to anticipate and address the use of deceptive design patterns in them. This article, based on our learnings from the study of VIs in India, examines the various types of deceptive design patterns in VIs and focuses on their implications in terms of linguistic barriers, accessibility, and privacy.
Potential deceptive design patterns in VIs
Our research findings suggest that VIs in India are still a long way off from being inclusive, accessible and privacy-preserving. While there has been some development in multilingual VIs in India, their compatibility has been limited to a few Indian languages (Mohandas, 2022) (Naidu, 2022)., The potential of VIs as a tool for people with vision loss and certain cognitive disabilities such as dyslexia is widely recognized (Pradhan, Mehta, and Findlater, 2018), but our conversations suggest that most developers and designers do not consider accessibility when conceptualizing a voice-based product, which leads to interfaces that do not understand non standard speech patterns, or have only text-based privacy policies (Mohandas, 2022). Inaccessible privacy policies full of legal jargon along with the lack of regulations specific to VIs, also make people vulnerable to privacy risks.
Deceptive design patterns can be used by companies to further these gaps in VIs. As with visual interfaces, the affordances and attributes of VI can determine the way in which they can be used to manipulate behavior. Kentrell Owens, et.al in their recent research lay down six unique properties of VIs that may be used to implement deceptive design patterns (Owens, Gunawan, Choffnes, Emami-Naeini, Kohno, and Roesner, 2022). Expanding upon these properties, and drawing from our research, we look at how they can be exacerbated in India.
Making processes cumbersome
VIs are often limited by their inability to share large amounts of information through voice. They thus operate in combination with a smartphone app or a website. This can be intentionally used by platforms to make processes such as changing privacy settings or accessing the full privacy notice inconvenient for people to carry out. In India, this is experienced while unsubscribing from services such as Amazon Prime (Owens et al., 2022). Amazon Echo Dot presently allows individuals to subscribe to an Amazon Prime membership using a voice command, but directs them to use the website in order to unsubscribe from the membership. This can also manifest in the form of canceling orders and changing privacy settings.
VIs follow a predetermined linear structure that ensures a tightly controlled interaction. People make decisions based on the information they are provided with at various steps. Changing their decision or switching contexts could involve going back several steps. People may accept undesirable actions from the VI in order to avoid this added effort (Owens et al., 2022). The urgency to make decisions on each step can also cause people to make unfavorable choices such as allowing consent to third party apps. The VI may prompt advertisements and push for the company’s preferred services in this controlled conversation structure, which the user cannot side-step. For example, while setting up the Google voice assistant on any device, it nudges people to sign into their Google account. This means the voice assistant gets access to their web and app activity and location history at this step. While the data management of Google accounts can be tweaked through the settings, it may get skipped during a linear set-up structure. Voice assistants can also push people to opt into features such as ads personalisation, default news sources, and location tracking.
Making options difficult to find
Discoverability is another challenge for VIs. This means that people might find it difficult to discover available actions or options using just voice commands. This gap can be misused by companies to trick people into making undesirable choices. For instance, while purchasing items, the VI may suggest products that have been sponsored and not share full information on other cheaper products, forcing people to choose without complete knowledge of their options. Many mobile based voice apps in India use a combination of images or icons with the voice prompts to enable discoverability of options and potential actions, which excludes people with vision loss (Naidu, 2022). These apps comprise a voice layer added to an otherwise touch-based visual platform so that people are able to understand and navigate through all available options using the visual interface, and use voice only for purposes such as searching or narrating. This means that these apps cannot be used through voice alone, making them disadvantageous for people with vision loss.
Discreet integration with third parties
VIs can use the same voice for varying contexts. In the case of Alexa, Skills, which are apps on its platform, have the same voice output and invocation phrases as its own in-built features. End users find it difficult to differentiate between an interaction with Amazon and that with Skills which are third-party applications. This can cause users to share information that they otherwise would not have with third parties (Mozilla Foundation, 2022). There are numerous Amazon Skills inHindi and people might not be aware that the developers of these Skills are not vetted by Amazon. This misunderstanding can create significant privacy or security risks if Skills are linked to contacts, banking, or social media accounts.
Lack of language inclusivity
The lack of local language support, colloquial translations, and accents can lead to individuals not receiving clear and complete information. VI’s failure to understand certain accents can also make people feel isolated (Harwell, 2018). While in India voice assistants and even voice bots are available in few Indic languages, the default initial setup, privacy policies, and terms and conditions are still in English. The translated policies also use literary language which is difficult for people to understand, and miss out on colloquial terms. This could mean that the person might have not fully understood these notices and hence not have given informed consent. Such use of unclear language and unavailability of information in Indic languages can be viewed as a deceptive design pattern.
Making certain choices more apparent
The different dimensions of voice such as volume, pitch, rate, fluency, pronunciation, articulation, and emphasis can be controlled and manipulated to implement deceptive design patterns. VIs may present the more privacy-invasive options more loudly or clearly, and the more privacy-preserving options more softly or quickly. It can use tone modulations to shame people into making a specific choice (Owens et al., 2022). For example, media streaming platforms may ask people to subscribe for a premium account to avoid ads in normal volume and mention the option to keep ads in a lower volume. Companies have also been observed to discreetly integrate product advertisements in voice assistants using tone. SKIN, a neurotargeting advertising strategy business, used a change of tone of the voice assistant to suggest a dry throat to advertise a drink (Chatellier, Delcroix, Hary, and Girard-Chanudet, 2019).
The attribution of gender, race, class, and age through stereotyping can create a persona of the VI for the user. This can extend to personality traits, such as an extroverted or an introverted, docile or aggressive character (Simone, 2020). The default use of female voices with a friendly and polite persona for voice assistants has drawn criticism for perpetuating harmful gender stereotypes (Cambre and Kulkarni, 2019). Although there is an option to change the wake word “Alexa” in Amazon’s devices, certain devices and third party apps do not work with another wake word (Ard, 2021). Further, projection of demographics can also be used to employ deceptive design patterns. For example, a VI persona that is constructed to create a perception of intelligence, reliability, and credibility can have a stronger influence on people’s decisions. Additionally, the effort to make voice assistants as human sounding as possible without letting people know they are human, could create a number of issues (X. Chen and Metz, 2019). First time users might divulge sensitive information thinking that they are interacting with a person. This becomes more ethically challenging when persons with vision loss are not able to know who they are interacting with.
Recording without notification
Owens et al speak about VIs occupying physical domains due to which they have a much wider impact as opposed to a visual interface (Owens et al., 2022). The always-on nature of virtual assistants could result in personal information of a guest being recorded without their knowledge or consent as consent is only given at the setup stage by the owner of the device or smartphone.
Making personalization more convenient through data collection
VIs are trained to adapt to the experience and expertise of the user. Virtual assistants provide personalization and the possibility to download a number of skills, save payment information, and phone contacts. In order to facilitate differentiation between multiple users on the same VI, individuals talking to the device are profiled based on their speech patterns and/or voice biometrics. This also helps in controlling or restricting content for children (Naidu, 2022). There is also tracking of commands to identify and list their intent for future use. The increase of specific and verified data can be used to provide better targeted advertisements, as well possibly be shared with law enforcement agencies in certain cases. Recently, a payment gateway company was made to share customer information to the law enforcement without their customer’s knowledge. This included not just the information about the client but also revealed sensitive personal data of the people who had used the gateway for transactions to the customer. While providing such details are not illegal and companies are meant to comply with requests from law enforcement, if more people knew of the possibility of every conversation of the house being accessible to law enforcement they would make more informed choices of what the VI records.
Reducing friction in actions desired by the platform
One of the fundamental advantages of VIs is that it can reduce several steps to perform an action using a single command. While this is helpful to people interacting with it, the feature can also be used to reduce friction from actions that the platform wants them to take. These actions could include sharing sensitive information, providing consent to further data sharing, and making purchases. An example of this can be seen where children have found it very easy to purchase items using Alexa (BILD, 2019).
Recommendations for Designers and Policymakers
Through these deceptive design patterns, VIs can obstruct and control information according to the preferences of the platform. This can result in a heightened impact on people with less experience with technology. Presently, profitability is a key driving factor for development and design of VI products. There is more importance given to data-based and technical approaches, and interfaces are often conceptualized by people with technical expertise with lack of inputs from designers at the early stages (Naidu, 2022). Designers also focus more on the usability and functionality of the interfaces by enabling personalization, but are often not as sensitive to safeguarding the rights of individuals using them. In order to tackle deceptive design, designers must work towards prioritizing ethical practice, and building in more agency and control for people who use VIs.
Many of the potential deceptive design patterns can be addressed by designing for accessibility and inclusivity in a privacy preserving manner. This includes vetting third-party apps, providing opt-outs, and clearly communicating privacy notices. Privacy implications can also be prompted by the interface at the time of taking actions. There should be clear notice mechanisms such as a prominent visual cue to alert people when a device is on and recording, along with an easy way to turn off the ‘always listening’ mode. The use of different voice outputs for third party apps can also signal to people about who they are interacting with and what information they would like to share in that context.
Training data that covers a diverse population should be built for more inclusivity. A linear and time-efficient architecture is helpful for people with cognitive disabilities. But, this linearity can be offset by adding conversational markers that let the individual know where they are in the conversation (Pearl, 2016). This could address discoverability as well, allowing people to easily switch between different steps. Speech-only interactions can also allow people with vision loss to access the interface with clarity.
A number of policy documents including the 2019 version of India’s Personal Data Protection Bill, emphasize on the need for privacy by design. But, they do not mention how deceptive design practices could be identified and avoided, or prescribe penalties for using these practices (Naidu, Sheshadri, Mohandas, and Bidare, 2020). In the case of VI particularly, there is a need to look at it as biometric data that is being collected and have related regulations in place to prevent harm to users. In terms of accessibility as well, there could be policies that require not just websites but also apps (including voice based apps) to be compliant with international accessibility guidelines , and to conduct regular audits to ensure that the apps are meeting the accessibility threshold.
Detecting Encrypted Client Hello (ECH) Blocking
This blogpost was edited by Torsha Sarkar.
The Transport Layer Security (TLS) protocol, which is widely recognised as the lock sign in a web browser’s URL bar, encrypts the contents of internet connections when an internet user visits a website so that network intermediaries (such as Internet Service Providers, Internet Exchanges, undersea cable operators, etc.) cannot view the private information being exchanged with the website.
TLS, however, suffers from a privacy issue – the protocol transmits a piece of information known as the Server Name Indication (or SNI) which contains the name of the website a user is visiting. While the purpose of TLS is to encrypt private information, the SNI remains unencrypted – leaking the names of the websites internet users visit to network intermediaries, who use this metadata to surveil internet users and censor access to certain websites. In India, two large internet service providers – Reliance Jio and Bharti Airtel – have been previously found using the SNI field to block access to websites.
Encrypted Client Hello (or ECH) is a new internet protocol that has been under development since 2018 at the Internet Engineering Task Force (IETF) and is now being tested for a small percentage of internet users before a wider rollout. It seeks to address this privacy limitation by encrypting the SNI information that leaks the names of visited websites to internet intermediaries. The ECH protocol significantly raises the bar for censors – the SNI is the last bit of unencrypted metadata in internet connections that censors can reliably use to detect which websites an internet user is visiting. After this protocol is deployed, censors will find it harder to block websites by interfering with network connections and will be forced to utilise blocking methods such as website fingerprinting and man-in-the-middle attacks that are either expensive and less accurate, or unfeasible in most cases.
We have been tracking the development of this privacy enhancement. To assist the successful deployment of the ECH protocol, we contributed a new censorship test to the Open Observatory for Network Interference (OONI) late last year. The new test attempts to connect to websites using the ECH protocol and records any interference from censors to the connection. As censors in some countries were found blocking a previous version of the protocol entirely, this test gives important early feedback to the protocol developers on whether censors are able to detect and block the protocol.
We conducted ECH tests during the first week of September 2023 from four popular Indian ISPs, namely Airtel, Atria Convergence Technologies (ACT), Reliance Jio, and Vodafone Idea, which account for around 95% of the Indian internet subscriber base. The results indicated that ECH connections to a popular website were successful and are not currently being blocked. This was the expected result, as the protocol is still under development. We will continue to monitor for interference from censors closer to the time of completion of the protocol to ensure that this privacy enhancing protocol is successfully deployed.
Digital Delivery and Data System for Farmer Income Support
Executive Summary
This study provides an in-depth analysis of two direct cash transfer schemes in India – Krushak Assistance for Livelihood and Income Augmentation (KALIA) and Pradhan Mantri Kisan Samman Nidhi (PM-KISAN) – which aim to provide income support to farmers. The paper examines the role of data systems in the delivery and transfer of funds to the beneficiaries of these schemes, and analyses their technological framework and processes.
We find that the use of digital technologies, such as direct benefit transfer (DBT) systems, can improve the efficiency and ensure timely transfer of funds. However, we observe that the technology-only system is not designed with the last beneficiaries in mind; these people not only have no or minimal digital literacy but are also faced with a lack of technological infrastructure, including internet connectivity and access to the system that is largely digital.
Necessary processes need to be implemented and personnel on the ground enhanced in the existing system, to promptly address the grievances of farmers and other challenges.
This study critically analyses the direct cash transfer scheme and its impact on the beneficiaries. We find that despite the benefits of direct benefit transfer (DBT) systems, there have been many instances of failures, such as the exclusion of several eligible households from the database.
The study also looks at gender as one of the components shaping the impact of digitisation on beneficiaries. We also identify infrastructural and policy constraints, in sync with the technological framework adopted and implemented, that impact the implementation of digital systems for the delivery of welfare. These include a lack of reliable internet connectivity in rural areas and low digital literacy among farmers. We analyse policy frameworks at the central and state levels and find discrepancies between the discourse of these schemes and their implementation on the ground.
We conclude the study by discussing the implications of datafication, which is the process of collecting, analysing, and managing data through the lens of data justice. Datafication can play a crucial role in improving the efficiency and transparency of income support schemes for farmers. However, it is important to ensure that the interests of primary beneficiaries are considered – the system should work as an enabling, not a disabling, factor. This appears to be the case in many instances since the current system does not give primacy to the interests of farmers. We offer recommendations for policymakers and other stakeholders to strengthen these schemes and improve the welfare of farmers and end users.
DoT’s order to trace server IP addresses will lead to unintended censorship
This post was reviewed and edited by Isha Suri and Nishant Shankar.
In December 2023, the Department of Telecommunications (DoT) issued instructions to internet service providers (ISPs) to maintain and share a list of “customer owned” IP addresses that host internet services through Indian ISPs so that they can be immediately traced in case “they are required to be blocked as per orders of [the court], etc”.
For the purposes of the notification, tracing customer-owned IP addresses implies identifying the network location of a subset of web services that possess their own IP addresses, as opposed to renting them from the ISP. These web services purchase IP Transit from Indian ISPs in order to connect their servers to the internet. In such cases, it is not immediately apparent which ISP routes to a particular IP address, requiring some amount of manual tracing to locate the host and immediately cut off access to the service. The order notes that “It has been observed that many times it is time consuming to trace location of such servers specially in case the IP address of servers is customer owned and not allocated by the Licensed Internet Service Provider”.
This indicates that, not only is the DoT blocking access to web services based on their IP addresses, but is doing so often enough for manual tracing of IP addresses to be a time consuming process for them.
While our legal framework allows courts and the government to issue content takedown orders, it is well documented that blocking web services based on their IP addresses is ineffectual and disruptive. An explainer on content blocking by the Internet Society notes, “Generally, IP blocking is a poor filtering technique that is not very effective, is difficult to maintain effectively, has a high level of unintended additional blockage, and is easily evaded by publishers who move content to new servers (with new IP addresses)”. The practice of virtual hosting is very common on the internet, which entails that a single web service can span multiple IP addresses and a single IP address can be shared by hundreds, or even thousands, of web services. Blocking access to a particular IP address can cause unrelated web services to fail in subtle and unpredictable ways, leading to collateral censorship. For example, a 2022 Austrian court order to block 11 IP addresses associated with 14 websites that engaged in copyright infringement rendered thousands of unrelated websites inaccessible.
The unintended effects of IP blocking have also been observed in practice in India. In 2021, US-based OneSignal Inc. approached the Delhi High Court challenging the blockage of one of its IP addresses by ISPs in India. With OneSignal being an online marketing company, there did not appear to be any legitimate reason for it to be blocked. In response to the petition the Government said that they had already issued unblocking orders for the IP address. There have also been numerous reports by internet users of inexplicable blocking of innocuous websites hosted on content delivery networks (which are known to often share IP addresses between customers).
We urge the ISPs, government departments and courts issuing and implementing website blocking orders to refrain from utilising overly broad censorship mechanisms like IP blocking which can lead to failure of unrelated services on the internet.
Information Disorders and their Regulation
In the last few years, ‘fake news’ has garnered interest across the political spectrum, as affiliates of both the ruling party and its opposition have seemingly partaken in its proliferation. The COVID-19 pandemic added to this phenomenon, allowing for xenophobic, communal narratives, and false information about health-protective behaviour to flourish, all with potentially deadly effects. This report maps and analyses the government’s regulatory approach to information disorders in India and makes suggestions for how to respond to the issue.
In this study, we gathered information by scouring general search engines, legal databases, and crime statistics databases to cull out data on a) regulations, notifications, ordinances, judgments, tender documents, and any other legal and quasi-legal materials that have attempted to regulate ‘fake news’ in any format; and b) news reports and accounts of arrests made for allegedly spreading ‘fake news’. Analysing this data allows us to determine the flaws and scope for misuse in the existing system. It also gives us a sense of the challenges associated with regulating this increasingly complicated issue while trying to avoid the pitfalls of the present system.
Click to download the full report here.
Reconfiguring Data Governance: Insights from India and the EU
The workshop aimed to compare and assess lessons from data governance from India and the European Union, and to make recommendations on how to design fit-for-purpose institutions for governing data and AI in the European Union and India.
This policy paper collates key takeaways from the workshop by grounding them across three key themes: how we conceptualise data; how institutional mechanisms as well as community-centric mechanisms can work to empower individuals, and what notions of justice these embody; and finally a case study of enforcement of data governance in India to illustrate and evaluate the claims in the first two sections.
This report was a collaborative effort between researchers Siddharth Peter De Souza, Linnet Taylor, and Anushka Mittal at the Tilburg Institute for Law, Technology and Society (Netherlands), Swati Punia, Sristhti Joshi, and Jhalak M. Kakkar at the Centre for Communication Governance at the National Law University Delhi (India) and Isha Suri, and Arindrajit Basu at the Centre for Internet & Society, India.
Click to download the report
India’s parental control directive and the need to improve stalkerware detection
This post was reviewed and edited by Amrita Sengupta.
Stalkerware is a form of surveillance targeted primarily at partners, employees and children in abusive relationships. These are software tools that enable abusers to spy on a person’s mobile device, allowing them to remotely access all data on the device, including calls, messages, photos, location history, browsing history, app data, and more. Stalkerware apps run hidden in the background without the knowledge or consent of the person being surveilled.[1] Such applications are easily available online and can be installed by anyone with little technical know-how and physical access to the device.
News reports indicate that the Ministry of Electronics and Information Technology (MeitY) is supporting the development of an app called “SafeNet”[2] that allows parents to monitor activity and set content filters on children’s devices. Following a directive from the Prime Minister’s office to “incorporate parental controls in data usage” by July 2024, the Internet Service Providers Association of India (ISPAI) has suggested that the app should come preloaded on mobile phones and personal computers sold in the country. The Department of Telecom is also asking schools to raise awareness about such parental control solutions.[3][4]
The beta version of the app is available for Android devices on the Google Play Store and advertises a range of functionalities including location access, monitoring website and app usage, call and SMS logs, screen time management and content filtering. The content filtering functionality warrants a separate analysis and this post will only focus on the surveillance capabilities of this app.
Applications like Safenet, that do not attempt to hide themselves and claim to operate with the knowledge of the person being surveilled, are sometimes referred to as “watchware”.[5] However, for all practical purposes, these apps are indistinguishable from stalkerware. They possess the same surveillance capabilities and can be deployed in the exact same ways. Such apps sometimes incorporate safeguards to notify users that their device is being monitored. These include persistent notifications on the device’s status bar or a visible app icon on the device’s home screen. However, such safeguards can be circumvented with little effort. The notifications can simply be turned off on some devices and there are third-party Android tools that allow app icons and notifications to be hidden from the device user, allowing watchware to be repurposed as stalkerware and operate secretly on a device. This leaves very little room for distinction between stalkerware and watchware apps.[6] In fact, the developers of stalkerware apps often advertise their tools as watchware, instructing users to only use them for legitimate purposes.
Even in cases where stalkerware applications are used in line with their stated purpose of monitoring minors’ internet usage, the effectiveness of a surveillance-centric approach is suspect. Our previous work on children’s privacy has questioned the treatment of all minors under the age of 18 as a homogenous group, arguing for a distinction between the internet usage of a 5-year-old child and a 17-year-old teenager. We argue that educating and empowering children to identify and report online harms is more effective than attempts to surveil them.[7][8] Most smartphones already come with options to enact parental controls on screen time and application usage[9][10], and the need for third-party applications with surveillance capabilities is not justified.
Studies and news reports show the increasing role of technology in intimate partner violence (IPV).[11][12] Interviews with IPV survivors and support professionals indicate an interplay of socio-technical factors, showing that abusers leverage the intimate nature of such relationships to gain access to accounts and devices to exert control over the victim. They also indicate the prevalence of “dual-use” apps such as child-monitoring and anti-theft apps that are repurposed by abusers to track victims.[13]
There is some data available that indicates the use of stalkerware apps in India. Kaspersky anti-virus’ annual State of Stalkerware reports consistently place India among the top 4 countries with the most number of infections detected by its product, with a few thousand infections reported each year between 2020 and 2023.[14][15][16[17] TechCrunch’s Spyware Lookup Tool, which compiles information from data leaks from more than nine stalkerware apps to notify victims, also identifies India as a hotspot for infections.[18] Avast, another antivirus provider, reported a 20% rise in the use of stalkerware apps during COVID-19 lockdowns.[19] The high rates of incidence of intimate partner violence in India, with the National Family Health Survey reporting that about a third of all married women aged 18–49 years have experienced spousal violence [20], also increases the risk of digitally-mediated abuse.
Survivors of digitally-mediated abuse often require specialised support in handling such cases to avoid alerting abusers and potential escalations. As part of our ongoing work on countering digital surveillance, we conducted an analysis of seven stalkerware applications, including two that are based in India, to understand and improve how survivors and support professionals can detect their presence on devices.
In some cases, where it is safe to operate the device, antivirus solutions can be of use. Antivirus tools can often identify the presence of stalkerware and watchware on a device, categorising them as a type of malware. We measured how effective various commercial antivirus solutions are at detecting stalkerware applications. Our results, which are detailed in the Appendix, indicate a reasonably good coverage, with six out of the seven apps being flagged as malicious by various antivirus solutions. We found that Safenet, the newest app on the list, was not detected by any antivirus. We also compared the detection results with a similar study conducted in 2019 [21] and found that some newer versions of previously known apps saw lower rates of detection. This indicates that antivirus solutions need to analyse new apps and newer versions of apps more frequently to improve coverage and understand how they are able to evade detection.
In cases where the device cannot be operated safely, support workers use specialised forensic tools such as the Mobile Verification Toolkit [22] and Tinycheck [23], which can be used to analyse devices without modifying them. We conducted malware analysis on the stalkerware apps to document the traces they leave on devices and submitted them to an online repository of indicators of compromise (IOCs).[24] These indicators are incorporated in detection tools used by experts to detect stalkerware infections.
Despite efforts to support survivors and stop the spread of stalkerware applications, the use of technology in abusive relationships continues to grow.[25] Making a surveillance tool like Safenet available for free, publicising it for widespread use, and potentially preloading it on mobile devices and personal computers sold in the country, is an ill-conceived way to enact parental controls and will lead to an increase in digitally-mediated abuse. The government should immediately take this application out of the public domain and work on developing alternate child protection policies that are not rooted in distrust and surveillance.
If you are affected by stalkerware there are some resources available here:
https://stopstalkerware.org/information-for-survivors/
https://stopstalkerware.org/resources/
Appendix
Our analysis covered two apps based in India, SafeNet and OneMonitar, and five other apps, Hoverwatch, TheTruthSpy, Cerberus, mSpy and FlexiSPY. All samples were directly obtained from the developer’s websites. The details of the samples are as follows:
Name |
File name |
Version |
Date sample was obtained |
SHA-1 Hash |
SafeNet |
Safenet_Child.apk |
0.15 |
16th March, 2024 |
d97a19dc2212112353ebd84299d49ccfe8869454 |
OneMonitar |
ss-kids.apk |
5.1.9 |
19th March, 2024 |
519e68ab75cd77ffb95d905c2fe0447af0c05bb2 |
Hoverwatch |
setup-p9a8.apk |
7.4.360 |
5th March, 2024 |
50bae562553d990ce3c364dc1ecf44b44f6af633 |
TheTruthSpy |
TheTruthSpy.apk |
23.24 |
5th March, 2024 |
8867ac8e2bce3223323f38bd889e468be7740eab |
Cerberus |
Cerberus_disguised.apk |
3.7.9 |
4th March, 2024 |
75ff89327503374358f8ea146cfa9054db09b7cb |
mSpy |
bt.apk |
7.6.0.1 |
21st March, 2024 |
f01f8964242f328e0bb507508015a379dba84c07 |
FlexiSPY |
5009_5.2.2_1361.apk |
5.2.2 |
26th March, 2024 |
5092ece94efdc2f76857101fe9f47ac855fb7a34 |
We analysed the network activity of these apps to check what web servers they send their data to. With increasing popularity of Content Delivery Networks (CDNs) and cloud infrastructure, these results may not always give us an accurate idea about where these apps originate, but can sometimes offer useful information:
Name | Domain | IP Address[26] | Country | ASN Name and Number |
SafeNet | safenet.family | 103.10.24.124 | India | Amrita Vishwa Vidyapeetham, AS58703 |
OneMonitar | onemonitar.com | 3.15.113.141 | United States | Amazon.com, Inc., AS16509 |
OneMonitar | api.cp.onemonitar.com | 3.23.25.254 | United States | Amazon.com, Inc., AS16509 |
Hoverwatch | hoverwatch.com | 104.236.73.120 | United States | DigitalOcean, LLC, AS14061 |
Hoverwatch | a.syncvch.com | 158.69.24.236 | Canada | OVH SAS, AS16276 |
TheTruthSpy | thetruthspy.com | 172.67.174.162 | United States | Cloudflare, Inc., AS13335 |
TheTruthSpy | protocol-a946.thetruthspy.com | 176.123.5.22 | Moldova | ALEXHOST SRL, AS200019 |
Cerberus | cerberusapp.com | 104.26.9.137 | United States | Cloudflare, Inc., AS13335 |
mSpy | mspy.com | 104.22.76.136 | United States | Cloudflare, Inc., AS13335 |
mSpy | mobile-gw.thd.cc | 104.26.4.141 | United States | Cloudflare, Inc., AS13335 |
FlexiSPY | flexispy.com | 104.26.9.173 | United States | Cloudflare, Inc., AS13335 |
FlexiSPY | djp.bz | 119.8.35.235 | Hong Kong | HUAWEI CLOUDS, AS136907 |
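As a rough illustration of how a table like the one above can be assembled, the hedged sketch below resolves a domain to an IP address and queries ipinfo.io (the service cited in endnote 26) for country and network details. It requires the third-party requests package, and answers for CDN-fronted domains may vary between lookups.

```python
# Sketch only: resolve domains and look up country/ASN details via ipinfo.io.
# Results for CDN-fronted domains can change between runs.
import socket
import requests

DOMAINS = ["safenet.family", "hoverwatch.com", "thetruthspy.com"]

for domain in DOMAINS:
    ip = socket.gethostbyname(domain)  # first A record only
    info = requests.get(f"https://ipinfo.io/{ip}/json", timeout=10).json()
    print(f"{domain} | {ip} | {info.get('country')} | {info.get('org')}")
```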
To understand whether commercial antivirus solutions are able to categorise stalkerware apps as malicious, we used a tool called VirusTotal, which aggregates checks from over 70 antivirus scanners.[27] We uploaded hashes (i.e. unique signatures) of each sample to VirusTotal and recorded the total number of detections by various antivirus solutions. We compared our results with a 2019 Citizen Lab study [28] that examined a similar set of apps, to identify changes in detection rates over time.
Product | VirusTotal Detections (March 2024) | VirusTotal Detections (January 2019) (By Citizen Lab)
SafeNet [29] | 0/67 (0%) | N/A
OneMonitar [30] | 17/65 (26.1%) | N/A
Hoverwatch | 24/58 (41.4%) | 22/59 (37.3%)
TheTruthSpy | 38/66 (57.6%) | 0
Cerberus | 8/62 (12.9%) | 6/63 (9.5%)
mSpy | 8/63 (12.7%) | 20/63 (31.7%)
Flexispy [31] | 18/66 (27.3%) | 34/63 (54.0%)
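A hedged sketch of how such hash lookups can be reproduced against VirusTotal's v3 API is below. The API key is a placeholder, and the denominator is approximated by summing all engine verdicts in the response, so it may differ slightly from the ratios in the table above.

```python
# Sketch: query VirusTotal's v3 'files' endpoint for one sample hash and print
# a rough detection ratio. VT_API_KEY is a placeholder; requires 'requests'.
import requests

VT_API_KEY = "YOUR-API-KEY"
SAMPLE_SHA1 = "50bae562553d990ce3c364dc1ecf44b44f6af633"  # Hoverwatch sample from the appendix

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{SAMPLE_SHA1}",
    headers={"x-apikey": VT_API_KEY},
    timeout=30,
)
resp.raise_for_status()
stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
engines = sum(stats.values())  # approximate count of engines that returned a verdict
print(f"{stats.get('malicious', 0)}/{engines} engines flagged the sample")
```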
We also checked whether these apps are detected by Google’s Play Protect service [32], a malware detection tool built into Android devices that use Google’s Play Store. These results were also compared with similar checks performed by Citizen Lab in 2019.
Product | Detected by Play Protect (March 2024) | Detected by Play Protect (January 2019) (By Citizen Lab)
SafeNet | no | N/A
OneMonitar | yes | N/A
Hoverwatch | yes | yes
TheTruthSpy | yes | yes
Cerberus | yes | no
mSpy | yes | yes
Flexispy | yes | yes
Endnotes
1. Definition adapted from Coalition Against Stalkerware, https://stopstalkerware.org/
2. https://web.archive.org/web/20240316060649/https://safenet.family/
5. https://github.com/AssoEchap/stalkerware-indicators/blob/master/README.md
6. https://cybernews.com/privacy/difference-between-parenting-apps-and-stalkerware/
7. https://timesofindia.indiatimes.com/blogs/voices/shepherding-children-in-the-digital-age/
8. https://blog.avast.com/stalkerware-and-children-avast
9. https://safety.google/families/parental-supervision/
10. https://support.apple.com/en-in/105121
11. R. Chatterjee et al., "The Spyware Used in Intimate Partner Violence," 2018 IEEE Symposium on Security and Privacy (SP), 2018, pp. 441-458.
13. D. Freed et al., "Digital technologies and intimate partner violence: A qualitative analysis with multiple stakeholders", PACM: Human-Computer Interaction: Computer-Supported Cooperative Work and Social Computing (CSCW), vol. 1, no. 2, 2017.
18. https://techcrunch.com/pages/thetruthspy-investigation/
19. https://www.thenewsminute.com/atom/avast-finds-20-rise-use-spying-and-stalkerware-apps-india-during-lockdown-129155
20. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10071919/
21. https://citizenlab.ca/docs/stalkerware-holistic.pdf
22. https://docs.mvt.re/en/latest/
23. https://tiny-check.com/
24. https://github.com/AssoEchap/stalkerware-indicators/pull/125
25. https://stopstalkerware.org/2023/05/15/report-shows-stalkerware-is-not-declining/
26. IP information provided by https://ipinfo.io/
27. https://docs.virustotal.com/docs/how-it-works
28. https://citizenlab.ca/docs/stalkerware-holistic.pdf
29. The sample was not known to VirusTotal; it was uploaded at the time of analysis.
30. The sample was not known to VirusTotal; it was uploaded at the time of analysis.
31. The sample was not known to VirusTotal; it was uploaded at the time of analysis.
Consultation on Gendered Information Disorder in India
The event was convened by Amrita Sengupta (Research and Programme Lead, CIS), Yesha Tshering Paul (Researcher, CIS), Bishakha Datta (Programme Lead, POV) and Prarthana Mitra (Project Anchor, POV).* Download the event report here.
The event brought together experts, researchers and grassroots activists from Maharashtra and across the country to discuss their experiences with information disorder, and the multifaceted challenges posed by misinformation, disinformation and malinformation targeting gender and sexual identities.
Understanding Information Disorders: The consultation commenced with a look at the wide spectrum of information disorder by Yesha Tshering Paul and Amrita Sengupta. Misinformation[1] was highlighted as false information disseminated unintentionally, such as inaccurate COVID cures that spread rapidly during the pandemic. In contrast, disinformation involves the intentional spread of false information to cause harm, exemplified by instances like deepfake pornography. A less recognised form, malinformation, involves the deliberate misuse of accurate information to cause harm, as seen in the misleading representation of regret rates among trans individuals who have undertaken gender-affirming procedures. Yesha highlighted that the definitions of these concepts often vary, underscoring the importance of moving beyond definitions to centre user experiences of the phenomenon.
The central theme of this discussion was the concept of “gendered” information disorder, referring to the targeted dissemination of false or harmful online content based on gender and sexual identity. This form of digital misogyny intersects with other societal marginalisations, disproportionately affecting marginalised genders and sexualities. The session also emphasised the critical link between information disorders and gendered violence (both online and in real life). Such disorders perpetuate stereotypes and gender-based violence, silence victims, and foster an environment that empowers perpetrators and undermines victims' experiences.
Feminist Digital Infrastructure: Digital infrastructures shape our online spaces. Sneha PP (Senior Researcher, CIS) introduced the concept of feminist infrastructures as a potential solution that helps mediate discourse around gender, sexuality, and feminism in the digital realm. Participant discussions emphasised the need for accessible, inclusive, and design-conscious digital infrastructures that consider the intersectionality and systemic inequalities impacting content creation and dissemination. Strategies were discussed to address online gender-based violence and misinformation, focusing on survivor-centric approaches and leveraging technology for storytelling.
Gendered Financial Mis-/Dis-information: Garima Agrawal (Researcher, CIS) with inputs by Debarati Das (Co-Lead, Capacity Building at PoV) and Chhaya Rajput (Helpline Facilitator, Tech Sakhi) led the session by highlighting gender disparities in digital and financial literacy and access to digital devices and financial services in India, despite women constituting a higher percentage of new internet users. This makes marginalised users more vulnerable to financial scams. Drawing from the ongoing financial harms project at CIS, Garima spoke about the diverse manifestations of financial information disorders arising from misleading information that results in financial harm, ranging from financial influencers (and in some cases deepfakes of celebrities) endorsing platforms they do not use, to fake or unregulated loan and investment services deceiving users. Breakout groups of participants then analysed several case studies of real-life financial frauds that targeted women and the queer community to identify instances of misinformation, disinformation and malinformation. Emotional manipulation and the exploitation of trust were identified as key tactics used to deceive victims, with repercussions extending beyond monetary loss to emotional, verbal, and even sexual violence against these individuals.
Fact-Checking Fake News and Stories: The pervasive issue of fake news in India was discussed in depth, especially in the era of widespread social media usage. Only 41% of Indians trust the veracity of the information encountered online. Aishwarya Varma, who works at Webqoof (The Quint’s fact checking initiative) as a Fact Check Correspondent, led an informative session detailing the various accessible tools that can be used to fact-check and debunk false information. Participants engaged in hands-on activities by using their smartphones for reverse image searches, emphasising the importance of verifying images and their sources. Archiving was identified as another crucial aspect to preserve accurate information and debunk misinformation.
Gendered Health Mis-/Dis-information: This participant-led discussion highlighted structural gender biases in healthcare and limited knowledge about mental health and menstrual health as significant concerns, along with the discrimination and social stigma faced by the LGBTQ+ community in healthcare facilities. One participant brought up their difficulty accessing sensitive and non-judgmental healthcare, and the insensitivity and mockery faced by them and other trans individuals in healthcare facilities. Participants suggested the increased need for government-funded campaigns on sexual and reproductive health rights and menstrual health, and the importance of involving marginalised communities in healthcare related decision-making to bring about meaningful change.
Mis-/Dis-information around Sex, Sexuality, and Sexual Orientation: Paromita Vohra, Founder and Creative Director of Agents of Ishq—a multi-media project about sex, love and desire that uses various artistic mediums to create informational material and an inclusive, positive space for different expressions of sex and sexuality—led this session. She started with an examination of the term “disorder” and its historical implications, and highlighted how religion, law, medicine, and psychiatry had previously led to the classification of homosexuality as a “disorder”. The session delved into the misconceptions surrounding sex and sexuality in India, advocating for a broader understanding that goes beyond colonial knowledge systems and standardised sex education. She brought up the role of media in altering perspectives on factual events, and the need for more initiatives like Agents of Ishq to address the need for culturally sensitive and inclusive sexuality language and education that considers diverse experiences, emotions, and identities.
Artificial Intelligence and Mis-/Dis-information: Padmini Ray Murray, Founder of Design Beku—a collective that emerged from a desire to explore how technology and design can be decolonial, local, and ethical— talked about the role of AI in amplifying information disorder and its ethical considerations, stemming from its biases in language representation and content generation. Hindi and regional Indian languages remain significantly under-represented in comparison to English content, leading to skewed AI-generated content. Search results reflect the gendered biases in AI and further perpetuate existing stereotypes and reinforce societal biases. She highlighted the real-world impacts of AI on critical decision-making processes such as loan approvals, and the influence of AI on public opinion via media and social platforms. Participants expressed concerns about the ethical considerations of AI, and emphasised the need for responsible AI development, clear policies, and collaborative efforts between tech experts, policymakers, and the public.
* The Centre for Internet and Society undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. Point of View focuses on sexuality, disability and technology to empower women and other marginalised genders to shape and inhabit digital spaces.
[1] Claire Wardle, Understanding Information Disorder (2020). https://firstdraftnews.org/long-form-article/understanding-information-disorder/.
Comments to the Draft Digital Competition Bill, 2024
We would like to thank the Ministry of Corporate Affairs for soliciting public comments on this important legislation and are grateful for this opportunity.
At the outset, CIS affirms the Committee’s approach of transitioning from a predominantly ex-post to an ex-ante approach for regulating competition in digital markets. The Committee’s assessment that the ex-post regime is too time-consuming for the digital domain has been substantiated by frequent and expensive delays in antitrust disputes, a fact that has also recently drawn the attention of the Ministry of Corporate Affairs. Nor is this limited to India: the ex-post regime has been found to be too time-consuming in other jurisdictions as well, as a consequence of which many other countries are also moving towards an ex-ante regime for digital markets. This also allows India to be in harmony with both developing and developed countries, which makes regulating global competition more consistent and efficient. In fact, “international cooperation between competition authorities” and “greater coherence between regulatory frameworks” are key to facilitating global investigations and lowering the cost of doing business.
Moreover, by adopting a principles-based approach to designing the law’s obligations, the draft Bill also addresses the concern that ex-ante regulations, due to their prescriptive nature, tend to be sector-agnostic. The fact that these principles are based on the findings of the Parliamentary Standing Committee’s (PSC) Report on ‘Anti-Competitive Practices by Big Tech Companies’ only lends them more credence. The draft DCB empowers the Commission to clarify the Obligations for different services, and also provides the CCI with the flexibility to undertake independent consultations to accommodate varying contexts and the needs of different core digital services. We do, however, have specific comments regarding the implementation of some of these provisions, which are elaborated in the accompanying document.
We would also like to emphasise that adequate enforcement of an ex-ante approach requires bolstering and strengthening regulatory capacity. Therefore, to minimise risks relating to underenforcement as well as overenforcement, CCI, its Digital Markets and Data Unit (DMDU), and the Director General’s (DG) office will have to substantially increase their technical capacity. A comparison of CCI’s current strength with its global counterparts that have adopted or are in the process of adopting an ex-ante approach to competition regulation reveals a stark picture. For example, the European Union (EU) had over 870 people in its DG COMP unit in 2022, and its DG CONNECT unit is expected to hire another 100 people in 2024 alone. Similarly, the United Kingdom’s Competition and Markets Authority (CMA) has a permanent staff of 800+, the Japan Fair Trade Commission (JTFC) has about 400 officials just for regulating anti-competitive conduct, and South Korea’s KFTC has about 600 employees. In contrast, CCI and DG, combined, have a sanctioned strength of only 195 posts, out of which 71 remain vacant. Bridging this capacity gap through frequent and high-quality recruitment is, therefore, the need of the hour. Most importantly, there is a need to create a culture of interdisciplinary coordination among legal, technical, and economic domains.
Moreover, as we come to rely on an increasingly digitised economy, most technology companies will work with critical technology components, ranging from key infrastructure, algorithms, and Artificial Intelligence to business models based on data collection and processing practices. Consequently, there will be a need to bolster CCI’s capacity in the technical domain by hiring and integrating new roles, including technologists, software and hardware engineers, product managers, UX designers, data scientists, investigative researchers, and subject matter experts dealing with new and emerging areas of technology. Therefore, we recommend that CCI ensure that the proposed DMDU has the requisite diversity of skills to effectively use existing tools for enforcement and is also able to keep pace with new and emerging technological developments.
Along with this overall observation of CCI's capacity, we have also submitted detailed comments on specific clauses of the draft DCB. These submissions are structured across the following six categories: i) Classification of Core Digital Services; ii) Designation of a Systemically Significant Digital Enterprise (SSDE) and Associate Digital Enterprise (ADE); iii) Obligations on SSDEs and ADEs; iv) Powers of the Commission to Conduct an Inquiry; v) Penalties and Appeals; and vi) Powers of the Central Government. In addition to these suggestions, the detailed comments and their summarised version focus on three important gaps in the draft DCB – limited representation from workers’ groups and MSMEs, exclusion of merger and acquisition (M&A) from the discussions, and lack of a formalised framework for interregulatory coordination.
For our full comments, click here
For a detailed summary of our comments, click here
A Guide to Navigating Your Digital Rights
The Digital Rights Guide gives practical guidance on the laws and procedures that affect internet freedoms. It covers the following topics:
- Internet Shutdowns
- Content Takedown
- Surveillance
- Device Seizure
The Digital Rights Guide can be viewed here.
Legal Advocacy Manual
Click to download the manual.
Draft Circular on Digital Lending – Transparency in Aggregation of Loan Products from Multiple Lenders
Edited and reviewed by Amrita Sengupta
The Centre for Internet and Society (CIS) is a non-profit organisation that undertakes interdisciplinary research on the internet and digital technologies from policy and academic perspectives. Through its diverse initiatives, CIS explores, intervenes in, and advances contemporary discourse and practices around the internet, technology and society in India, and elsewhere.
CIS is grateful for the opportunity to submit comments on the “Draft Circular on Digital Lending: Transparency in Aggregation of Loan Products from Multiple Lenders” to the Reserve Bank of India. Over the last twelve years, CIS has worked extensively on research around privacy, online safety, cross border flows of data, security, and innovation. We welcome the opportunity provided to comment on the guidelines, and we hope that the final guidelines will consider the interests of all the stakeholders to ensure that it protects the privacy and digital rights of all consumers, including marginalised and vulnerable users, while encouraging innovation and improved service delivery in the fintech ecosystem.
Introduction
The draft circular on ‘Transparency in Aggregation of Loan Products from Multiple Lenders’ is a much needed and timely document that builds on the Guidelines on Digital Lending. Both documents have maintained the principles of customer centricity and transparency at their core. Reducing information asymmetry and deceptive patterns in the digital lending ecosystem is of utmost importance, given the adverse effects experienced by borrowers. Digital lending is one of the fastest-growing fintech segments in India,[1] having grown exponentially from nine billion U.S. dollars in 2012 to nearly 150 billion dollars by 2020, and is estimated to reach 515 billion USD by 2030.[2] At the same time, accessing digital credit through digital lending applications has been found to be associated with a high risk to financial and psychological health due to a host of practices that lead to overindebtedness.[3] These include post contract exploitation through hidden transaction fees, abusive debt collection practices, privacy violations and fluctuations in interest rates. Both illegal/fraudulent and licensed lending service providers have been employing aggressive marketing and debt collection tactics[4] that exacerbate the risks of all the above harms.[5] With additional safeguards in place, the guidelines can provide a suitable framework to ensure borrowers have the opportunity and information needed to make an informed decision while accessing intermediated credit, and reduce harmful financial and health related consequences.
In this submission, we seek to provide some comments on the broader issues the guidelines address. Our comments recommend additional safeguards, keeping in mind the gamut of services provided by lending service providers (LSPs). We will frame our comment around two main concerns addressed by the draft guidelines: 1) reducing information asymmetry 2) market fairness. In addition to this we will share comments around a third concern that requires additional scrutiny, i.e. 3) data privacy and security.
Reducing Information Asymmetry
The guidelines aim to define responsibilities of LSPs in maintaining transparency to ensure borrowers are aware of the identity of the regulated entity (RE) providing the loan, and make informed decisions based on consistent information to weigh their options.
Comments: Guideline iii suggests that the digital view should include information that helps the borrower compare various loan offers. This includes “the name(s) of the regulated entity (RE) extending the loan offer, amount and tenor of loan, the Annual Percentage Rate (APR) and other key terms and conditions”, alongside a link to the key facts statement (KFS). The earlier ‘Guidelines on Digital Lending’ specify that the APR should be an all-inclusive cost covering margin, credit costs, operating costs, verification charges, processing fees, etc., excluding only penalties and late payment charges.
Recommendations: Not all users of digital lending services may be aware that the APR is inclusive of all non-contingent charges. Requiring digital loan aggregators to provide messages/notifications that boost consumer awareness of regulations and their rights can help reduce violations. We also recommend that this information be made available in multiple languages so that a wide range of users can access it. Further, we recommend that LSPs be made accountable for an inclusive platform design that allows easy access to this information.
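To illustrate why an all-inclusive APR matters to borrowers, the sketch below uses entirely hypothetical figures (a Rs 10,000 loan at a quoted 24% per annum over 12 months, with a Rs 500 processing fee deducted upfront) and a simple internal-rate-of-return search. It is not the RBI's prescribed computation; it only shows how upfront charges raise the effective cost of credit above the quoted interest rate.

```python
# Hypothetical figures only: show how an upfront processing fee raises the
# all-inclusive cost of a loan above its quoted interest rate.
principal = 10_000.0
processing_fee = 500.0       # deducted upfront, so the borrower receives less
nominal_annual_rate = 0.24   # quoted rate
months = 12

# Equated monthly instalment (EMI) on the full principal at the quoted rate.
r = nominal_annual_rate / 12
emi = principal * r * (1 + r) ** months / ((1 + r) ** months - 1)

# Solve (by bisection) for the monthly rate that equates the net disbursal
# with the stream of EMIs, i.e. the borrower's internal rate of return.
net_disbursal = principal - processing_fee

def present_value(monthly_rate: float) -> float:
    return sum(emi / (1 + monthly_rate) ** t for t in range(1, months + 1))

lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if present_value(mid) > net_disbursal:
        lo = mid   # payments worth more than the disbursal: rate still too low
    else:
        hi = mid

effective_annual = ((1 + lo) ** 12 - 1) * 100
print(f"EMI: {emi:.2f}; quoted rate: 24.0%; all-inclusive annualised cost ~ {effective_annual:.1f}%")
```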
Market Fairness
Guidelines ii-iv also serve to outline practices to curb anti-competitive placement of digital loan products through regulating use of dark patterns and increasing transparency.
Comments: Section ii mandates that LSPs must disclose the approach utilised to determine the willingness of lenders to offer a loan. Whether this estimation includes factors associated with the customer profile like age, income and occupation etc. should be clearly disclosed as well.
Recommendations: To improve transparency, loan aggregators may also be asked to share an overall rate of rejection or approval within the digital view, alongside the predictive estimate of the lender’s willingness.
While the ‘Guidelines on Digital Lending’[6] clearly state that LSPs must collect any fees from the REs and not from the borrowers, further clarification should be provided on whether LSPs can charge fees for the loan aggregation service itself, i.e. for providing information on available loan products.
Privacy and Data Security
The earlier ‘Guidelines on Digital Lending’[7] require LSPs to store only minimal contact data about the customer and to give consumers the ability to have their data removed, i.e. the right to be forgotten by the provider, once they are no longer using its services. Personal financial information is not to be stored by LSPs. It is the responsibility of REs to ensure that LSPs do not store extraneous customer data, and to stipulate clear policy guidelines regarding the storage and use of customer data.
Comments: It is important to ascertain the nature of anonymised and personally identifiable customer data that may be currently utilised by LSPs or processed on their platforms, in the course of providing a range of services within the digital credit ecosystem to borrowers and lenders.
Certain functions that loan aggregators perform may expand their role beyond a simple intermediary. LSPs also provide services assessing borrower’s creditworthiness, payment services, and agent-led debt collection services for lenders. Some LSPs may be involved in more than one stage of the loan process which may make them privy to additional personal information about a borrower. There may be cases in which a consumer registers on an LSP’s platform without going ahead with any loan applications. It is unclear who is responsible for maintaining data security and privacy or providing grievance redressal at these times.
Section ii allows LSPs to provide borrowers with estimates of lenders’ willingness. Some LSPs connecting REs with borrowers may also provide services that use alternative and even non-financial data to assess the creditworthiness of thin-file credit seekers. Whether there are any restrictions on the use of AI tools in these processes, and on the handling of customer data, should also be clarified or limited. The right to be forgotten may be difficult to enforce with the use of certain machine learning and other artificial intelligence models. As innovation in credit scoring mechanisms continues, it is also important to bring such financial service providers under the ambit of guidelines for digital lending platforms.
Recommendations: The burden of maintaining privacy and data security should fall on aggregators of loan products as well as on regulated entities. Guidelines should be included that limit the use of PII (and personal financial information, if applicable) for purposes other than connecting borrowers to a loan provider without consumer consent. Informed and explicit consumer consent should be sought for any additional purposes such as marketing, market research, product development, cross-selling, and the delivery of other financial and commercial services, including providing access to other loan products in the future.
Often consumers are required to register on a platform by providing contact details and other personal information. An initial digital view of loan products available could be displayed for all users without registering to help borrowers determine whether they would like to register for the LSP’s services. This can help reduce the amount of consumer contact information and other personally identifiable information (PII) that is collected by LSPs.
Emerging Risks
Emerging consumer risks within the digital lending ecosystem expose borrowers to additional harms such as over-indebtedness, as well as risks arising from fraud, data misuse, lack of transparency and inadequate redress mechanisms.[8] These draft guidelines clearly lay out mechanisms to reduce risks arising from a lack of transparency. Similar efforts need to go into reducing data misuse, for instance by delimiting the period for which customer data can be retained, and into reducing the risk of over-indebtedness.
One of the biggest sources of consumer risk has been at the debt recovery stage. Aggressive debt collection practices have had deleterious effects on consumers’ mental health and social standing, and have even led some to consider suicide. Extant guidelines assume that a recovery agent will be contacting the consumer.[9] However, LSPs may also set up automated payments and use digital communication such as app notifications, messages and automated calls in the debt recovery process. The impact of repeated notifications and automated debt payments also needs to be considered in future iterations of guidelines addressing risk in the digital lending ecosystem.
[1] “Funding distribution of FinTech companies in India in second quarter of 2023, by segment”, Statista, accessed 30 May 2024, https://www.statista.com/statistics/1241994/india-fintech-companies-share-by-segment/
[2] Anushka Sengupta, “India’s digital lending market likely to grow $515 bn by 2030: Report”, Economic Times, 17 June 2023, https://bfsi.economictimes.indiatimes.com/news/fintech/indias-digital-lending-market-likely-to-grow-515-bn-by-2030-report/101057337
[3] “Mobile Instant Credit: Impacts, Challenges, and Lessons for Consumer Protection”, Center for Effective Global Action, September 2023, https://cega.berkeley.edu/wp-content/uploads/2023/09/FSP_Digital_Credit_Research_test.pdf
[4] Jinit Parmar, “Ruthless Recovery Agents, Aggressive Loan Outreach Put the Spotlight on Bajaj Finance”, Moneycontrol, 18 April 2023, https://www.moneycontrol.com/news/business/ruthless-recovery-agents-aggressive-loan-outreach-put-spotlight-on-bajaj-finance-10423961.html
[5] Prudhviraj Rupavath, “Suicide Deaths Mount after Unregulated Lending Apps Resort to Exploitative Recovery Practices”, Newsclick, 26 December 2020 https://www.newsclick.in/Suicide-Deaths-Mount-Unregulated-Lending-Apps-Resort-Exploitative-Recovery-Practices
Priti Gupta and Ben Morris, “India's loan scams leave victims scared for their lives”, BBC, 7 June 2022, https://www.bbc.com/news/business-61564038
[6] Section 4.1, Guidelines on Digital Lending, 2022.
[7] Section 11, Guidelines on Digital Lending, 2022.
[8] “The Evolution of the Nature and Scale of DFS Consumer Risks: A Review of Evidence”, CGAP, February 2022, https://www.cgap.org/sites/default/files/publications/slidedeck/2022_02_Slide_Deck_DFS_Consumer_Risks.pdf
[9] Section 2, Outsourcing of Financial Services - Responsibilities of regulated entities employing Recovery Agents, 2022.
Online Censorship: Perspectives From Content Creators and Comparative Law on Section 69A of the Information Technology Act
This paper was reviewed by Krishnesh Bapat and Torsha Sarkar.
Abstract: The Government of India has increasingly engaged in online censorship using powers in the Information Technology Act. The law lays out a procedure for online censorship that relies solely on the discretion of the executive. Using a constitutional and comparative legal analysis, we contend that the law has little to no oversight and lacks adequate due process for targets of censorship. Through semi-structured interviews with individuals whose content has been taken down by such orders, we shed light on experiences of content owners with government-authorised online censorship. We show that legal concerns about the lack of due process are confirmed empirically, and content owners are rarely afforded an opportunity for a hearing before they are censored. The law enabling online censorship (and its implementation) may be considered unconstitutional in how it inhibits avenues of remedy for targets of censorship or for the general public. We also show that online content blocking has far-reaching, chilling effects on the freedom of expression.
The paper is available on SSRN, and can also be downloaded here.
AI for Healthcare: Understanding Data Supply Chain and Auditability in India
Read our full report here.
The use of artificial intelligence (AI) technologies constitutes a significant development in the Indian healthcare sector, with industry and government actors showing keen interest in designing and deploying these technologies. Even as key stakeholders explore ways to incorporate AI systems into their products and workflows, a growing debate on the accessibility, success, and potential harms of these technologies continues, along with several concerns over their large-scale adoption. A recurring question in India and the world over is whether these technologies serve a wider interest in public health. For example, the discourse on ethical and responsible AI in the context of emerging technologies and their impact on marginalised populations, climate change, and labour practices has been especially contentious.
For the purposes of this study, we define AI in healthcare as the use of artificial intelligence and related technologies to support healthcare research and delivery. The use cases include assisted imaging and diagnosis, disease prediction, robotic surgery, automated patient monitoring, medical chatbots, hospital management, drug discovery, and epidemiology. The emergence of AI auditing mechanisms is an essential development in this context, with several stakeholders ranging from big-tech to smaller startups adopting various checks and balances while developing and deploying their products. While auditing as a practice is neither uniform nor widespread within healthcare or other sectors in India, it is one of the few available mechanisms that can act as guardrails in using AI systems.
Our primary research questions are as follows:
- What is the current data supply chain infrastructure for organisations operating in the healthcare ecosystem in India?
- What auditing practices, if any, are being followed by technology companies and healthcare institutions?
- What best practices can organisations based in India adopt to improve AI auditability?
This was a mixed methods study, comprising a review of available literature in the field, followed by quantitative and qualitative data collection through surveys and in-depth interviews. The findings from the study offer essential insights into the current use of AI in the healthcare sector, the operationalisation of the data supply chain, and policies and practices related to health data sourcing, collection, management, and use. It also discusses ethical and practical challenges related to privacy, data protection and informed consent, and the emerging role of auditing and other related practices in the field. Some of the key learnings related to the data supply chain and auditing include:
- Technology companies, medical institutions, and medical practitioners rely on an equal mix of proprietary and open sources of health data, and there is significant reliance on datasets from the Global North.
- Data quality checks are extant, but they are seen as an additional burden, with the removal of personally identifiable information being a priority during processing.
- Collaboration between medical practitioners and AI developers remains limited, and feedback between users and developers of these technologies is limited.
- There is a heavy reliance on external vendors to develop AI models, with many models replicated from existing systems in the Global North.
- Healthcare professionals are hesitant to integrate AI systems into their workflows, with a significant gap stemming from a lack of training and infrastructure to integrate these systems successfully.
- The understanding and application of audits are not uniform across the sector, with many stakeholders prioritising more mainstream and intersectional concepts such as data privacy and security in their scope.
Based on these findings, this report offers a set of recommendations addressed to different stakeholders such as healthcare professionals and institutions, AI developers, technology companies, startups, academia, and civil society groups working in health and social welfare. These include:
- Improve data management across the AI data supply chain
  - Adopt standardised data-sharing policies. This would entail building a standardised policy that adopts an intersectional approach to include all stakeholders and areas where data is collected to ensure their participation in the process. This would also require robust feedback loops and better collaboration between the users, developers, and implementers of the policy (medical professionals and institutions), and technologists working in AI and healthcare.
  - Emphasise not just data quantity but also data quality. Given that the limited quantity and quality of Indian healthcare datasets present significant challenges, institutions engaged in data collection must consider their interoperability to make them available to diverse stakeholders and ensure their security. This would include recruiting additional support staff for digitisation to ensure accuracy and safety and maintain data quality.
- Streamline AI auditing as a form of governance
  - Standardise the practice of AI auditing. A certain level of standardisation in AI auditing would contribute to the growth and contextualisation of these practices in the Indian healthcare sector. Similarly, it would also aid in decision-making among implementing institutions.
  - Build organisational knowledge and inter-stakeholder collaboration. It is imperative to build knowledge and capacity among technical experts, healthcare professionals, and auditors on the technical details of the underlying architecture and socioeconomic realities of public health. Hence, collaboration and feedback are essential to enhance model development and AI auditing.
  - Prioritise transparency and public accountability in auditing standards. Given that most healthcare institutions procure externally developed AI systems, some form of internal or external AI audit would contribute to better public accountability and transparency of these technologies.
- Centre public good in India’s AI industrial policy
  - Adopt focused and transparent approaches to investing in and financing AI projects. An equitable distribution of AI spending and associated benefits is essential to guarantee that these investments and their applications extend beyond private healthcare, and that implementation approaches prioritise the public good. This would involve investing in entire AI life cycles instead of merely focusing on development and promoting transparent public–private partnerships.
  - Strengthen regulatory checks and balances for AI governance. While an overarching law to regulate AI technologies may still be under debate, existing regulations may be amended to bring AI within their ambit. Furthermore, all regulations must be informed by stakeholder consultations to guarantee that the process is transparent, addresses the rights and concerns of all the parties involved, and prioritises the public good.
Technology-facilitated Gender-based Violence and Women’s Political Participation in India: A Position Paper
Read the full paper here.
Political participation of women is fundamental to democratic processes and promotes the building of more equitable and just futures. The rapid adoption of technology has created avenues for women to access the virtual public sphere, where they may have traditionally struggled to access physical public spaces due to patriarchal norms and violence. While technology has provided tools for political participation, information seeking, and mobilisation, it has also created unsafe online spaces for women, often limiting their ability to actively engage online.
This essay examines the emotional and technological underpinnings of gender-based violence faced by women in politics. It further explores how gender-based violence is weaponised to diminish the political participation and influence of women in the public eye. Through real-life examples of gendered disinformation and sexist hate speech targeting women in politics in India, we identify affective patterns in the strategies deployed to adversely impact public opinion and democratic processes. We highlight the emotional triggers that play a role in exacerbating online gendered harms, particularly for women in public life. We also examine the critical role of technology and online platforms in this ecosystem – both in perpetuating and amplifying this violence as well as attempting to combat it.
We argue that it is critical to investigate and understand the affective structures in place, and the operation of patriarchal hegemony that continues to create unsafe access to public spheres, both online and offline, for women. We also advocate for understanding technology design and identifying tools that can actually aid in combating TFGBV. Further, we point to the continued need for greater accountability from platforms, to mainstream gender-related harms and combat them through diversified approaches.
Privacy Policy Framework for Indian Mental Health Apps
The report’s findings indicate a significant gap in the structure and content of privacy policies in Indian mental health apps. This highlights the need to develop a framework that can guide organisations in developing their privacy policies. Therefore, this report proposes a holistic framework to guide the development of privacy policies for mental health apps in India. It focuses on three key segments that are an essential part of the privacy policy of any mental health app. First, it must include factors considered essential by the Digital Personal Data Protection Act 2023 (DPDPA), such as consent mechanisms, rights of the data principal, and the provision to withdraw consent. Second, the privacy policy must state how the data that users provide to these apps will be used. Finally, developers must include key elements such as provisions for third-party integrations and data retention policies.
Click to download the full research paper here
Digital Rights and ISP Accountability in India: An Analysis of Policies and Practices
Read the full report here.
India's four largest Internet Service Providers (ISPs), Reliance Jio, Bharti Airtel, Vodafone-Idea (Vi), and BSNL, collectively serve 98% of India's internet subscribers, with Jio and Airtel commanding a dominant market share of 80.87%. The assessment comes at a critical juncture in India's digital landscape, marked by a 279.34% increase in internet subscribers from 2014 to 2024, alongside issues such as the proliferation of internet shutdowns.
Adapting the methodology of Ranking Digital Rights' (RDR) 2022 Telco Giants Scorecard, our analysis reveals significant disparities in governance structures and commitment to digital rights across these providers. Bharti Airtel emerges as the leader in governance framework implementation, maintaining dedicated human rights policies and board-level oversight. In contrast, Vi and Jio demonstrate mixed results with limited explicit human rights commitments, while BSNL exhibits the weakest governance structure with minimal human rights considerations. Notably, all ISPs lack comprehensive human rights impact assessments for their advertising and algorithmic systems.
The evaluation of freedom of expression commitments reveals systematic inadequacies across all providers. Terms and conditions are frequently fragmented and difficult to access, while providers maintain broad discretionary powers for account suspension or termination without clear appeal processes. There is limited transparency regarding content moderation practices and government takedown requests, coupled with insufficient disclosure about algorithmic decision-making systems that affect user experiences.
Privacy practices among these ISPs show minimal evolution since previous assessments, with persistent concerns about policy accessibility and comprehension. The investigation reveals limited transparency regarding algorithmic processing of personal data, widespread sharing of user data with third parties and government agencies, and inadequate user control over personal information. None of the evaluated ISPs maintain clear data breach notification policies, raising significant concerns about user data protection.
The concentrated market power of Jio and Airtel, combined with weak digital rights commitments across the sector, raises substantial concerns about the state of user privacy and freedom of expression in India's digital landscape. The lack of transparency in website blocking and censorship, inconsistent implementation of blocking orders, limited accountability in handling government requests, insufficient protection of user rights, and inadequate grievance redressal mechanisms emerge as critical areas requiring immediate attention.
As India continues its rapid digital transformation, our findings underscore the urgent need for both regulatory intervention and voluntary industry reforms. The development of standardised transparency reporting, strengthened user rights protections, and robust accountability mechanisms will be crucial in ensuring that India's digital growth aligns with fundamental rights and democratic values.
Do We Need a Separate Health Data Law in India?
Chapter 1.Background
Digitisation has become a cornerstone of India’s governance ecosystem since the National e-Governance Plan (NeGP) of 2006. This trend can also be seen in healthcare, especially during the COVID-19 pandemic, with initiatives like the Ayushman Bharat Digital Mission (ABDM). However, the digitisation of healthcare has been largely conducted without legislative backing or judicial oversight. This has resulted in inadequate grievance redressal mechanisms, potential data breaches, and threats to patient privacy.
Unauthorised access to or disclosure of health data can result in stigmatisation, mental and physical harassment, and discrimination against patients. Moreover, because of the digital divide, overdependence on digital health tools to deliver health services can lead to the exclusion of the most marginalised and vulnerable sections of society, thereby undermining the equitable availability and accessibility of health services. Health data in digitised form is also vulnerable to cyberattacks and breaches. This was evidenced in the recent ransomware attack on the All India Institute of Medical Sciences, which, apart from violating the right to privacy of patients, also brought patient care to a grinding halt.
In this context, and with the rise in health data collection and uptick in the use of AI in healthcare, there is a need to look at whether India needs a standalone legislation to regulate the digital health sphere. It is also necessary to evaluate whether the existing policies and regulations are sufficient, and if amendments to these regulations would suffice.
This report discusses the current definitions of health data, including international efforts; it then shares some key themes that were discussed at three roundtables we conducted in May, August, and October 2024. Participants included experts from diverse stakeholder groups, including civil society organisations, lawyers, medical professionals, and academicians. In this report, we collate the various responses to two main aspects, which were the focus of the roundtables:
- In which areas are the current health data policies and laws lacking in India?
- Do we need a separate health data law for India? What are the challenges associated with this? What are other ways in which health data can be regulated?
Chapter 2. How is health data defined?
There are multiple definitions of health data globally. These include definitions incorporated into the text of data protection legislations or laid down under separate health data laws. In the European Union (EU), the General Data Protection Regulation classifies “data concerning health” as personal data that falls under special category data, i.e. data that requires stringent and special protection due to its sensitive nature. Data concerning health is defined under Article 4(15) as “personal data related to the physical or mental health of a natural person, including the provision of healthcare services, which reveal information about his or her health status”. The United States has the Health Insurance Portability and Accountability Act (HIPAA), which was created to make sure that the personally identifiable information (PII) gathered by healthcare and insurance companies is protected against fraud and theft and cannot be disclosed without consent. As per the World Health Organisation (WHO), ‘digital health’ refers to “a broad umbrella term encompassing eHealth, as well as emerging areas, such as the use of advanced computing sciences in ‘big data’, genomics and artificial intelligence”.
2.1. Current legal framework for regulating the digital healthcare ecosystem in India
In India, digital health data was defined under the draft Digital Information Security in Healthcare Act (DISHA), 2017, as an electronic record of health-related information about an individual, which includes the following: (i) information concerning the physical or mental health of the individual; (ii) information concerning any health service provided to the individual; (iii) information concerning the donation by the individual of any body part or any bodily substance; (iv) information derived from the testing or examination of a body part or bodily substance of the individual; (v) information that is collected in the course of providing health services to the individual; or (vi) information relating to the details of the clinical establishment accessed by the individual.
However, DISHA was subsumed into the draft Personal Data Protection Bill, 2019, which defined health data and demarcated sensitive personal data from personal data. Both these definitions are absent from the Digital Personal Data Protection Act (DPDPA), 2023. This makes it uncertain what qualifies as health data in India. It is also important to note that the health data management policies released during the pandemic relied on the definition of health data under the then draft of the data protection legislation.
(i) Drugs and Cosmetics Act, and Rules
At present, there is no specific law that regulates the digital health ecosystem in India. The ecosystem is currently regulated by a mix of laws regulating the offline/legacy healthcare system and policies notified by the government from time to time. The primary law governing the healthcare system in India is the Drugs and Cosmetics Act (DCA), 1940, read with the Drugs and Cosmetics Rules, 1945. These regulations govern the manufacture, sale, import, and distribution of drugs in India. The central and state governments are responsible for enforcing the DCA. In 2018, the central government published the Draft Rules to amend the Drugs and Cosmetics Rules in order to incorporate provisions relating to the sale of drugs by online pharmacies (Draft Rules). However, the final rules are yet to be notified. The Draft Rules prohibit online pharmacies from disclosing the prescriptions of patients to any third person. However, they also mandate the disclosure of such information to the central and state governments, as and when required for public health purposes.
(ii) Clinical Establishments (Registration and Regulation) Act, and Rules
The Clinical Establishments Rules, 2012, which are issued under the Clinical Establishments (Registration and Regulation) Act, 2010, require clinical establishments to maintain electronic health records (EHRs) in accordance with the standards determined by the central government. The Electronic Health Record (EHR) Standards, 2016, were formulated to create a uniform standards-based system for EHRs in India. They provide guidelines for clinical establishments to maintain health data records as well as data and security measures. Additionally, they also lay down that ownership of the data is vested with the individual, and the healthcare provider holds such medical data in trust for the individual.
(iii) Health digitisation policies under the National Health Authority
In 2017, the central government formulated the National Health Policy (NHP). A core component of the NHP is deploying technology to deliver healthcare services. The NHP recommends creating a National Digital Health Authority (NDHA) to regulate, develop, and deploy digital health across the continuum of care. In 2019, the Niti Aayog, proposed the National Digital Health Blueprint (Blueprint). The Blueprint recommended the creation of the National Digital Health Mission. The Blueprint made this proposition stating that “the Ministry of Health and Family Welfare has prioritised the utilisation of digital health to ensure effective service delivery and citizen empowerment so as to bring significant improvements in public health delivery”. It also stated that an institution such as the National Digital Health Mission (NDHM), which is undertaking significant reforms in health, should have legal backing.
(iv) Telemedicine Practice Guidelines
On 25 March 2020, the Telemedicine Practice Guidelines under the Indian Medical Council Act were notified. The Guidelines provide a framework for registered medical practitioners to follow for teleconsultations.
2.2. Digital Personal Data Protection Act, 2023
There has been much hope that India’s data protection legislation would cover definitions of health data, keeping in mind the removal of DISHA and the uptick in health digitisation in both the public and private health sectors. The privacy/data protection law, the DPDPA, was notified on 12 August 2023. However, its provisions have still not come into force. So, currently, health data and patient medical history are regulated by the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules (SPDI Rules), 2011. The SPDI Rules will be replaced by the DPDPA as and when its different provisions are enforced. On 3 January 2025, the Ministry of Electronics and Information Technology released the Draft Digital Personal Data Protection Rules, 2025, for public consultation. The last date for submitting comments is 18 February 2025.
Health data is regarded as sensitive personal data under the SPDI Rules. Earlier drafts of the data protection legislation had demarcated data into personal data and sensitive personal data, and health data was regarded as sensitive personal data. However, the DPDPA has removed the distinction between personal data and sensitive personal data. Instead, all data is regarded as personal data. Therefore, the extra protection that was previously afforded to health data has been removed. The Draft Rules also do not mention health data or provide any additional safeguards for protecting health data. However, they exempt healthcare professionals from the obligations placed on data fiduciaries when it comes to processing children’s data; such processing has to be restricted to the extent necessary to protect the health of the child.
As seen so far, while there are multiple healthcare-related regulations that govern stakeholders – from medical device manufacturers to medical professionals – there is still a vacuum in terms of the definition of health data. The DPDPA does not clarify this definition. Further, there are no clear guidelines for how these regulations work with one another, especially in the case of newer technologies like AI, which have already started disrupting the Indian health ecosystem.
Chapter 3. Key takeaways from the health data roundtables
The three health data roundtables covered various important topics related to health data governance in India. The first roundtable highlighted the major concerns and examined the granular details of considering a separate law for digital healthcare. The second roundtable featured a detailed discussion on whether a separate law is needed, or whether the existing laws can be modified to address extant concerns. There was also a conversation on whether the absence of a classification absolves organisations of the responsibility to protect or secure health data. Participants stated that, due to the sensitivity of health data, data fiduciaries processing such data could qualify as significant data fiduciaries under the proposed DPDPA Rules (which, at the time the roundtables were held, were yet to be published). The final roundtable concluded with an in-depth discussion on the need for a health data law. However, no consensus emerged among the different stakeholders.
The roundtables highlighted that the different stakeholders – medical professionals, civil society workers, academics, lawyers, and people working in startups – were indeed thinking about how to regulate health data. But there was no single approach that all agreed on.
3.1. Health data concerns
Here, we summarise the key points that emerged during the three roundtables. These findings shed light on concerns regarding the collection, sharing, and regulation of health data.
(i) Removal of sensitive personal data classification
In the second roundtable, there was a discussion on the removal of the definition of health data from the final version of the DPDPA, which also removed the provision for sensitive personal data; health data previously came under this category. One participant stated that differentiating between sensitive personal data and personal data was important, as sensitive personal data such as health data warrants more security. They further stated that without such a clear distinction, data such as health status and sexual history could be easily accessed. Participants also pointed out that, given the current infrastructure of digital data, the security of personal data is not up to the mark. Hence, a clear classification of sensitive and personal data would ensure that data fiduciaries collecting and processing sensitive personal data have greater responsibility and accountability.
(ii) Definition of informed consent
The term ‘informed consent’ came up several times during the roundtable discussions, but there was no clarity on what it means. A medical professional stated that in their practice, informed consent applies only to treatment. However, if the patient’s data is being used for research, it goes through the necessary internal review board and ethics board for clearance. One participant mentioned that Section 2(i) of the Mental Healthcare Act (MHA), 2017, defines informed consent as
consent given for a specific intervention, without any force, undue influence, fraud, threat, mistake or misrepresentation, and obtained after disclosing to a person adequate information including risks and benefits of, and alternatives to, the specific intervention in a language and manner understood by the person; a nominee to make a decision and consent on behalf of another person.
Neither the DPDPA nor the Draft DPDPA Rules define informed consent. However, the Draft Rules state that the notice given by the data fiduciary to the data principal must use simple, plain language to provide a full and transparent account of the information necessary for the data principal to give informed consent to the processing of their personal data.
A stakeholder pointed out that consent is often taken without much nuance or any real option of choice. Indeed, consent is often presented in non-negotiable terms, creating power imbalances and undermining patient autonomy. Suggested solutions include instituting granular and revocable consent mechanisms. This point also emerged during the third roundtable, where it was highlighted that consenting to a medical procedure is different from consenting to one’s data being used to train AI. When a consent form that a patient or caregiver is asked to sign provides the relevant information but no real choice but to sign, it creates a severe power imbalance. Participants also emphasised the need to assess whether consent is being used as a tool to enable more data-sharing, or as a mechanism to give citizens other rights, such as the reasonable expectation that their medical information will not be used for commercial interests, especially to their own detriment, just because they signed a form. One suggested way to tackle this is greater demarcation of the aspects a person can consent to, which would give people more control over the various ways in which their data is used.
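To make the idea of granular and revocable consent more concrete, the sketch below shows one possible shape of a per-purpose consent record. It is a minimal illustration only: the purpose categories, the ConsentRecord structure, and its methods are our assumptions, not anything prescribed by the DPDPA, the Draft Rules, or the roundtable participants.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical purposes a patient might consent to separately.
PURPOSES = {"treatment", "research", "ai_training", "commercial_analytics"}

@dataclass
class ConsentRecord:
    """One patient's consent, recorded per purpose so each use can be
    granted or withdrawn independently."""
    patient_id: str
    granted: dict = field(default_factory=dict)   # purpose -> time of grant
    revoked: dict = field(default_factory=dict)   # purpose -> time of revocation

    def grant(self, purpose: str) -> None:
        if purpose not in PURPOSES:
            raise ValueError(f"unknown purpose: {purpose}")
        self.granted[purpose] = datetime.utcnow()
        self.revoked.pop(purpose, None)

    def revoke(self, purpose: str) -> None:
        # Revocation affects only future processing for this purpose.
        self.revoked[purpose] = datetime.utcnow()

    def is_allowed(self, purpose: str) -> bool:
        return purpose in self.granted and purpose not in self.revoked


record = ConsentRecord(patient_id="anon-001")
record.grant("treatment")                 # consent given for treatment only
print(record.is_allowed("ai_training"))   # False: AI training was never consented to
```

In such a design, withdrawing consent for one purpose (say, AI training) would not disturb consent already given for treatment, which is the kind of demarcation participants asked for.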
(iii) Data sharing with third parties
Discussions also focused on concerns about sharing health data with third parties, especially if the data is transferred outside India. Data is, or can be, shared with tech companies and research organisations, so the discussions highlighted the regulations and norms governing how such data sharing occurs despite the fragmented regulatory landscape. For instance:
- The Indian Council of Medical Research (ICMR) Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare mandate strict protocols for sharing health data, but these are not binding. They state that the sharing of health data by medical institutions with tech companies and collaborators must go through the ICMR and the Health Ministry’s Screening Committee. This committee has strict guidelines on how much data can be shared and how it needs to be shared. The process also requires that all PII be removed and that only 10 percent of the total data be shared with any collaborator outside Indian jurisdiction (a simple sketch of such a pre-sharing step follows this list).
- Companies working internationally have to comply with global standards like the GDPR and HIPAA, highlighting the gaps in India’s domestic framework, which leave companies uncertain about which regulations to comply with. There is a need to balance the interests of startups, which require more data and better longitudinal health records, against the need for strong data protection, data minimisation, and storage limitation.
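As referenced in the first bullet above, the sketch below illustrates the kind of pre-sharing step the ICMR guidelines describe: assumed PII fields are dropped and no more than 10 percent of the records are released to an external collaborator. The dataset shape, field names, and sampling method are illustrative assumptions; the guidelines themselves do not prescribe code or a particular de-identification technique.

```python
import random

# Fields assumed (for illustration only) to be personally identifiable.
PII_FIELDS = {"name", "phone", "address", "aadhaar_number"}

def prepare_for_external_sharing(records: list[dict], share_fraction: float = 0.10) -> list[dict]:
    """Illustrative pre-sharing step: drop assumed PII fields, then release
    at most `share_fraction` of the records (10% by default, mirroring the
    cap described for collaborators outside Indian jurisdiction)."""
    de_identified = [
        {k: v for k, v in rec.items() if k not in PII_FIELDS}
        for rec in records
    ]
    sample_size = int(len(de_identified) * share_fraction)
    return random.sample(de_identified, sample_size)

# Toy dataset of 100 records; only 10 de-identified records are released.
dataset = [{"name": f"patient {i}", "aadhaar_number": "XXXX",
            "diagnosis": "d", "age": 40 + i} for i in range(100)]
shared = prepare_for_external_sharing(dataset)
print(len(shared))  # 10 records, none containing the assumed PII fields
```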
(iv) Inadequate healthcare infrastructure
With respect to the implementation challenges associated with health data laws, participants noted that, currently, the Indian healthcare infrastructure is not up to the mark. Moreover, smaller and rural hospitals are not yet on board with health digitisation and may not be able to comply with additional rules and responsibilities. In terms of capacity as well, smaller healthcare facilities lack the resources to implement and comply with complex regulations.
3.2. Regulatory challenges
Significant time was spent on discussing the regulatory challenges and deficiencies in India’s healthcare infrastructure. The discussion primarily revolved around the following points:
(i) State vs. central jurisdiction
Under the Constitutional Scheme, legislative responsibilities for various subjects are demarcated between the centre and the states, and are sometimes shared between them. The topics of public health and sanitation, hospitals, and dispensaries fall under the state list set out in the Seventh Schedule of the Constitution. This means that state governments have the primary responsibility of framing and implementing laws on these subjects. Under this, local governance institutions, namely local bodies, also play an important role in discharging public health responsibilities.
(ii) Do we bring back DISHA?
During the conversation about the need for health data regulation, participants brought up that there had been an earlier push for a health data law in the form of DISHA, 2017, which was later abandoned. DISHA aimed to set up digital health authorities at the national and state levels to implement privacy and security measures for digital health data and to create a mechanism for the exchange of electronic health data. Another concern with a central health data legislation was that, as health is a state subject, there could be confusion around having a separate, centralised regulatory body to oversee how data is handled. This might come with a lack of clarity on who would address what, or which ministry (in the state or central government) would handle the redressal mechanism.
3.3. Are the existing guidelines enough?
Participants highlighted that enacting a separate law to regulate digital health would be challenging, considering that the DPDPA took seven years to be enacted, the rules are yet to be drafted, and the Data Protection Board has not been established. Hence, any new legislation would take significant resources, including manpower and time.
In this context, there were discussions acknowledging that although the DPDPA does not currently regulate health data, other regulations and policies are prescribed for specific types of interventions involving health data; for example, the Telemedicine Practice Guidelines, 2020, and the Medical Council of India Rules. These are binding on medical practitioners, with penalties for non-compliance such as the revocation of medical licenses. Similarly, the ICMR guidelines on the use of data in biomedical research include specific transparency measures and impose obligations on health data collectors that would apply irrespective of the lack of distinction between sensitive personal data and personal data under the DPDPA.
However, another participant rightly pointed out that the ICMR guidelines and the policies of the Ministry of Health and Family Welfare are not binding. Similarly, regulations like the Telemedicine Practice Guidelines and the Indian Medical Council Act apply only to medical practitioners. There are now a number of companies that collect and process large volumes of health data; they are not covered by these regulations. Although there are multiple regulations on healthcare and pharma, none of them cover or govern technology. The only relevant one is the Telemedicine Practice Guidelines, which say that AI cannot advise any patient; it can only provide support.
Chapter 4. Recommendations
Several key points were raised and highlighted during the three roundtables. There were also a few suggestions for how to regulate the digital health sphere. These recommendations and points can be classified into short-term measures and long-term measures.
4.1. Short-term measures
We propose two short-term measures, as follows:
(i) Make amendments to the DPDPA: Introduce sector-specific provisions for health data within the existing framework. The provisions should include guidelines for informed consent, data security, and grievance redressal.
(ii) Capacity-building: Provide training for healthcare providers and data fiduciaries on data security and compliance.
4.2. Long-term measures
We offer six long-term measures, as follows:
(i) Standalone legislation: Enact a dedicated health data law that
- Defines health data and its scope;
- Establishes a regulatory authority for oversight; and
- Includes provisions for data sharing, security, and patient rights.
(ii) National Digital Health Authority
Establish a central authority, similar to the EU’s Health Data Space, to regulate and monitor digital health initiatives.
(iii) Cross-sectoral coordination
Develop mechanisms to align central and state policies and ensure seamless implementation.
(v) Technological safeguards
Encourage the development of AI-specific policies and guidelines to address the ethics of using health data.
(vi) Stringent measures to address data breaches
Increase people’s trust by addressing data breaches and fostering proactive dialogue between patients, the medical community, the government, and civil society. Reduce the exemptions for data processing, such as those granted to the state for healthcare.
Conclusion
The roundtable discussions highlighted the fragmented nature of the digital health sphere and the issues that emanate from such a fractured policy landscape. Considering the variations in healthcare infrastructure and budget allocation across different states, the feasibility of enacting a central digital health law requires more in-depth research. The existing laws governing the offline/legacy health space also need careful examination to understand whether amendments to these laws are sufficient to regulate the digital health space.
The Centre for Internet and Society’s comments and recommendations to the: Report on AI Governance Guidelines Development
With research assistance by Anuj Singh
I. Background
On 6 January 2025, a Subcommittee on ‘AI Governance and Guidelines Development’ under the Advisory Group put out the Report on AI Governance Guidelines Development, which advocated for a whole-of-government approach to AI governance. This sub-committee was constituted by the Ministry of Electronics and Information Technology (MeitY) on November 9, 2023, to analyse gaps and offer recommendations for developing a comprehensive framework for governance of Artificial Intelligence (AI). As various AI governance conversations take centre stage, this is a welcome step, and we hope that there are more opportunities through public comments and consultations to improve on this important AI document.
CIS’ comments are in line with the submission guidelines; we have provided both comments and suggestions based on the headings and text provided in the report.
II. Governance of AI
The subcommittee report has explained its reasons for staying away from a definition. However, it would be helpful to set out the scope of AI at the outset of the report, given that different AI systems have different roles and functionalities. A clearer framing at the beginning would help readers better understand the scope of the conversation in the report. This section also states that AI can now “perform complex tasks without active human control or supervision”; while there are instances where AI is used without active human control, there is a need to emphasise the importance of keeping humans in the loop. This has also been highlighted in the OECD AI principles, from which this report draws inspiration.
A. AI Governance Principles
A proposed list of AI Governance principles (with their explanations) is given below.
While referring to the OECD AI principles is a good first step towards understanding global best practices, we suggest undertaking an exercise to map all global AI principles documents published by international and multinational organisations and civil society, in order to determine the principles that are most important for India. The OECD AI principles also come from regions with better internet penetration and higher literacy rates than India; for those regions, the principle of “digital by design governance” may be achievable, but in India a digital-first approach, especially in governance, could lead to large-scale exclusion.
B. Considerations to operationalise the principles
1. Examining AI systems using a lifecycle approach
The subcommittee has taken a novel approach to defining the AI lifecycle. The terms “Development, Deployment and Diffusion” do not appear in any of the major publications on the AI lifecycle. Academics (e.g. Chen et al. (2023), De Silva and Alahakoon (2022)) have described the AI lifecycle as consisting of design, development, and deployment, while others (Ng et al. (2022)) have defined it as “data creation, data acquisition, model development, model evaluation and model deployment”. Even NASSCOM’s Responsible AI Playbook follows “conception, designing, development and deployment” as some of the key stages in the AI lifecycle. Similarly, the OECD recognises “i) ‘design, data and models’; ii) ‘verification and validation’; iii) ‘deployment’; and iv) ‘operation and monitoring’” as the phases of the AI lifecycle. The subcommittee could therefore provide citations, as well as a justification for using this novel approach to the AI lifecycle, and state its reasons for moving away from the recognised stages. Steering away from an understood approach could cause confusion among stakeholders who may not be well versed in AI terminology and the AI lifecycle to begin with.
2. Taking an ecosystem-view of AI actors
While the report rightly states that multiple actors are involved across the AI lifecycle, it is also important to note that the same actor can be involved in multiple stages of the AI lifecycle. For example, take the case of an AI app used for disease diagnosis: a medical professional can be the data principal (using their own data), the data provider (providing data by using the app), and the end user (using the app for diagnosis). Similarly, a government body can be the data provider, the developer (if the system is built in-house or outsourced through tenders), the deployer, and the end user. Hence, for each AI application there may be multiple actors who play different roles, and those roles may not be static.
When looking at governance approaches, the approach should ideally not be limited to responsibilities and liabilities, especially when the “data principal” and individual end users are highlighted as actors; it should also include rights and means of redressal in order to constitute a rights-based, people-centric approach to AI governance.
3. Leveraging technology for governance
While the use of a techno-legal approach to governance is picking up speed, there is a need to look at existing central and state capacity to undertake it, and at the ways this could affect people who still do not have access to the internet. One example of a techno-legal approach that has seen some success is the Bhumi programme in Andhra Pradesh, which used blockchain for land records; however, it also weakened local institutions and led to the exclusion of marginalised people (Kshetri, 2021). It was also stated that there is a need to strengthen existing institutions before turning to technological measures.
Secondly, while the subcommittee has emphasised the improvements in the quality of generative AI tools, there is a need to assess how these tools work for Indian use cases. It was reported last year that ChatGPT could not answer all the questions relating to the Indian civil services exam and failed to correctly answer questions on geography, even though it was able to crack tough exams in the USA. In addition, a month ago the Finance Ministry advised government officials to refrain from using generative AI tools on official devices for fear of leaking confidential information.
Thirdly, the subcommittee needs to assess India’s data preparedness for a techno-legal approach at this scale. In our study on healthcare and AI in India, in which we surveyed medical professionals, hospitals, and technology companies, a common understanding was that data quality in Indian datasets is an issue, and that there is some reliance on data from the global north. This could be similar in other sectors as well; hence, when such data is used to train systems, it could lead to harms and biases.
III. GAP ANALYSIS
A. The need to enable effective compliance and enforcement of existing laws.
The sub-committee has highlighted the importance of ensuring that the growth of AI does not lead to unfair trade practices and market dominance. It is hence important to analyse whether the existing laws on antitrust and competition, and the regulatory capacity of the Competition Commission of India, are robust enough to deal with AI and the change in technology and technology developers.
There is also an urgent need to assess the issues that might come under the ambit of competition law throughout the AI lifecycle, including in the areas of chip manufacturing, compute, data, models, and IP. While the players may keep changing in this evolving area of technology, there is a need to strengthen the existing regulatory system before looking at techno-legal measures.
We suggest that, before a techno-legal approach is adopted in all forms of governance, there is an urgent need to map the existing regulations, both central and state, assess how they apply to regulating AI, and assess the capacity of existing regulatory bodies to regulate AI-related issues. In healthcare, for example, there are multiple laws, policies, and guidelines, as well as regulatory bodies, that apply to various stages of healthcare and various actors; at times these regulations do not refer to each other, or duplicate one another, which could lead to a lack of clarity.
Below, we add our comments and suggestions on certain subsections of this section on the need to enable effective compliance and enforcement of existing laws.
1. Intellectual property rights
a. Training models on copyrighted data and liability in case of infringement
While Section 14 of the Indian Copyright Act, 1957 provides copyright holders with exclusive rights to copy and store works, considering that training AI models involves making non-expressive uses of works, a straightforward conclusion cannot be drawn easily. Hence, the presumption that training models on copyrighted data constitutes infringement is premature and unfounded.
This report states: “The Indian law permits a very closed list of activities in using copyrighted data without permission that do not constitute an infringement. Accordingly, it is clear that the scope of the exception under Section 52(1)(a)(i) of the Copyright Act, 1957 is extremely narrow. Commercial research is not exempted; not-for-profit institutional research is not exempted. Not-for-profit research for personal or private use, not with the intention of gaining profit and which does not compete with the existing copyrighted work is exempted.”
Indian copyright law follows a ‘hybrid’ model of limitations and exceptions under s.52(1). S. 52(1)(a), which is the ‘fair dealing’ provision, is more open-ended than the rest of the clauses in the section. Specifically, the Indian fair dealing provision permits fair dealing with any work (not being a computer programme) for the purposes of private or personal use, including research.
If India is keen on indigenous AI development, specifically as it relates to foundation models, it should work towards developing frameworks for suitable exceptions, as may be appropriate. Lawmakers could distinguish between the different types of copyrighted works and public-interest purposes while considering the issue of infringement and liability.
b. Copyrightability of work generated by using foundation models
We suggest that a public consultation would certainly be a useful exercise in ensuring opinions and issues of all stakeholders including copyright holders, authors, and users are taken into account.
C. The need for a whole-of-government approach.
While information existing in silos is a significant issue and roadblock, if the many existing guidelines and principles have taught us anything, it is that without specificity and direct applicability it is difficult for implementers to translate principles into their development, deployment, and governance mechanisms. The committee assumes a sectoral understanding within the government of the various players in highly regulated sectors such as healthcare and financial services. However, as our recent study on AI in healthcare indicates, there are significant information gaps when it comes to a shared understanding of what data is being used for AI development, where AI models are being developed, and what kinds of partnerships are being entered into for the development and deployment of AI systems. While the report also highlights concerns about the siloed regulatory framework, it is important to consider how sector-specific challenges lend themselves to cross-sectoral discussion. Consider, for example, an AI credit-scoring system in financial services that leads to exclusion errors.
Additionally, consider an AI system deployed for disease diagnosis. While both use predictive AI, the nature of the risks and harms is different. While there can be common and broad frameworks to test the efficacy of both AI models, the exact parameters for testing them would have to be unique. Therefore, it will be important to consider where bringing together cross-sectoral stakeholders will be useful and where deeper work may be needed at the sector level.
IV. Recommendations
1. To implement a whole-of-government approach to AI Governance, MeitY and the Principal Scientific Adviser should establish an empowered mechanism to coordinate AI Governance.
We would like to reiterate the earlier section and highlight the importance of considering how sector-specific challenges lend themselves to cross-sectoral discussion. While the whole-of-government approach is good, as it will help build a common understanding between different government institutions, it might not be sufficient for AI governance, because it rests on the implicit assumption that internal coordination among various government bodies is enough to manage AI-related risks.
2. To develop a systems-level understanding of India’s AI ecosystem, MeitY should establish, and administratively house, a Technical Secretariat to serve as a technical advisory body and coordination focal point for the Committee/Group.
The subcommittee report states that, at this stage, it is not recommended to establish a Committee/Group or its Secretariat as statutory authorities, as such a decision requires significant analysis of gaps, requirements, and possible unintended outcomes. While these are valid considerations, it is necessary that adequate checks and balances are in place. If the Secretariat is placed within MeitY, then safeguards must be in place to ensure that officials have autonomy in decision-making. The subcommittee suggests that MeitY can bring in officials on deputation from other departments. Similarly, the committee proposes bringing in experts from industry; while this is important for informed policymaking, it also carries a risk of regulatory capture. Setting a cap on the percentage of industry representatives and requiring full disclosure of the affiliations of the experts involved are some of the safeguards that could be considered. We also suggest that members of civil society be considered for this Secretariat.
3. To build evidence on actual risks and to inform harm mitigation, the Technical Secretariat should establish, house, and operate an AI incident database as a repository of problems experienced in the real world that should guide responses to mitigate or avoid repeated bad outcomes.
The report suggests that the Technical Secretariat will document the actual incidence of AI-related risks in India. In most instances, an AI incident database assumes that an AI-related unfavourable incident has already taken place, which implies that it is no longer a potential risk but an actual harm. This recommendation takes a post-facto approach to assessing AI systems, as opposed to conducting risk assessments prior to the actual deployment of an AI system. Further, it also lays emphasis on receiving reports from public sector organisations deploying AI systems. Given that public sector organisations would, in many cases, be the deployers of AI systems rather than the developers, they may have limited know-how about the functionality of the tools and, therefore, about the risks and harms.
It is important to clarify and define what will be considered an AI risk, as this could also depend on the stakeholder; for example, a company losing clients due to an AI system is a risk, as is an individual being denied health insurance because of AI bias. With this understanding, while there is a need for active assessment of risks and the emergence of new risks, the Technical Secretariat could also undertake a mapping of the existing risks highlighted by academia, civil society, and international organisations, and begin the risk database with that. In addition, the “AI incident database” should also be open to research institutions and civil society organisations, similar to the OECD AI Incidents Monitor.
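As an illustration of what a minimal, publicly accessible entry in such an incident database might capture, the sketch below defines a hypothetical record type. The fields are our assumptions drawn from the discussion above (sector, harm, affected group, who reported it); the subcommittee report does not prescribe any schema.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative only: neither the DPDPA nor the subcommittee report prescribes a schema.
@dataclass
class AIIncident:
    reported_on: date
    sector: str                 # e.g. "health", "finance", "law enforcement"
    system_description: str     # what the AI system does
    harm_description: str       # the observed, real-world harm
    affected_group: str         # who was harmed (an individual, a company, a community)
    reported_by: str            # deployer, developer, civil society, researcher, individual
    publicly_accessible: bool   # whether researchers and civil society can view the entry

incident = AIIncident(
    reported_on=date(2025, 1, 15),
    sector="health",
    system_description="insurance eligibility scoring model",
    harm_description="applicant denied cover due to a biased risk score",
    affected_group="individual applicant",
    reported_by="civil society organisation",
    publicly_accessible=True,
)
print(incident.sector, "-", incident.harm_description)
```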
4. To enhance transparency and governance across the AI ecosystem, the Technical Secretariat should engage the industry to drive voluntary commitments on transparency across the overall AI ecosystem and on baseline commitments for high capability/widely deployed systems.
It is commendable that the subcommittee in this report extends the transparency requirement to the government, with the example of law enforcement. This would create more trust in the systems and also place responsibility on the companies providing these services to be compliant with existing laws and regulations.
While the transparency measures listed will ensure a better understanding of the processes of AI developers and deployers, there is also a need to bring in responsibility along with transparency. While the report mentions ‘peer review by third parties’, we would also suggest auditing as a mechanism to ensure transparency and responsibility. Our study on the AI data supply chain, auditability, and healthcare in India (which surveyed 150 medical professionals, 175 respondents from healthcare institutions, and 175 respondents from technology companies) revealed that 77 percent of the healthcare institutions and 64 percent of the technology companies surveyed conducted audits or evaluations of their privacy and security measures for data.
5. Form a sub-group to work with MEITY to suggest specific measures that may be considered under the proposed legislation like Digital India Act (DIA) to strengthen and harmonise the legal framework, regulatory and technical capacity and the adjudicatory set-up for the digital industries to ensure effective grievance redressal and ease of doing business.
It would be necessary to provide some clarity on where the Digital India Act process currently stands. While there were public consultations in 2023, we have not heard about further progress in the development of the Act. The most recent discussion on the Act was in January 2025, when S Krishnan, Secretary, Ministry of Electronics and IT (MeitY), stated that the government was in no hurry to carry forward the draft Digital India Act and the regulatory framework around AI. He also stated that the existing legal frameworks were currently sufficient to handle AI intermediaries.
We would also like to highlight that, during the consultations on the DIA, it was proposed that it would replace the Information Technology Act, 2000. It is necessary that the subcommittee provide clarity on this, since if the DIA is enacted, this report’s Section III on gap analysis, especially around the IT Act and cybersecurity, will need to be revisited.
The Centre for Internet and Society’s comments and feedback to the: Digital Personal Data Protection Rules 2025
Rule 3 - Notice given by data fiduciary to data principal - Under Section 5(2) of the DPDP Act, when the personal data of the data principal has been processed before the commencement of the Act, the data fiduciary is required to give notice to the data principal as soon as reasonably practicable. However, the Rules fail to specify what is meant by “reasonably practicable”; the timeline for a notice in such circumstances is unclear.
- In addition, under Rule 3(a) the phrase “be presented and be understandable independently” is ambiguous. It is not clear whether the consent notice has to be presented independently of any other information or whether it only needs to be independently understandable and can be presented along with other information.
- In addition, we suggest that the “privacy by design” requirement mentioned in earlier drafts be brought back, with a focus on preventing deceptive design practices (dark patterns) from being used while collecting data.
Rule 4 - Registration and obligations of Consent Manager - The concept of independent consent managers, similar to account aggregators in the financial sector and consent manager platforms in the EU, is a positive step. However, the Act and the Rules need to flesh out the interplay between the data fiduciary and the consent manager in more detail: for example, how does the data fiduciary know whether a data principal is using a consent manager; under what circumstances can the data fiduciary bypass the consent manager; and what is the penalty or consequence for doing so?
Rule 6 - Reasonable security safeguards - While we appreciate the guidance provided in terms of security measures such as “encryption, obfuscation or masking or the use of virtual tokens”, it would also be good to refer to the SPDI Rules and include the example of the international standard IS/ISO/IEC 27001 on Information Technology - Security Techniques - Information Security Management System as an illustration to guide data fiduciaries.
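For illustration, the sketch below shows simplified versions of three of the safeguards Rule 6 names: masking, virtual tokens, and obfuscation via a salted hash. The field formats, token scheme, and vault structure are assumptions made for the example and are not drawn from the Rules, the SPDI Rules, or IS/ISO/IEC 27001.

```python
import hashlib
import secrets

def mask_phone(phone: str) -> str:
    """Masking: keep only the last two digits visible."""
    return "*" * (len(phone) - 2) + phone[-2:]

# token -> original value; in practice such a vault would be held separately
# and protected, so the working dataset never carries the original value.
_token_vault: dict[str, str] = {}

def tokenize(value: str) -> str:
    """Virtual token: replace a sensitive value with a random token."""
    token = "tok_" + secrets.token_hex(8)
    _token_vault[token] = value
    return token

def pseudonymous_id(value: str, salt: str) -> str:
    """Obfuscation: a salted hash as a stable but non-reversible identifier."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

print(mask_phone("9876543210"))                   # ********10
print(tokenize("ABCDE1234F"))                     # e.g. tok_3f9a...
print(pseudonymous_id("patient-42", salt="s1"))   # stable 16-character identifier
```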
Rule 7 - Intimation of personal data breach - As per the Rules, the data fiduciary, on becoming aware of any personal data breach, is required to notify the data principal and the Data Protection Board without delay; a plain reading of this Rule suggests that the data fiduciary has to report the breach almost immediately, which could be a practical challenge. Further, the absence of any threshold (materiality, gravity of the breach, etc.) for notifying the data principal means that the data fiduciary will have to inform the data principal about even an isolated data breach that may not have any impact on the data principal. In this context, we recommend the Rule be amended to state that the data fiduciary should be required to inform the Data Protection Board about every data breach; however, the data principal should be informed depending on the gravity and materiality of the breach, and only when it is likely to result in a high risk to the data principal (a minimal sketch of this threshold-based approach follows our comments on this Rule).
- While the Rules have provisions for intimation of a data breach, there is no specific provision requiring the Data Fiduciary to take the measures necessary to mitigate the risks arising out of the breach. Although there is an obligation to report any such measures to the Data Principal (Rule 7(1)(c)) as well as to the DPBI (Rule 7(2)(b)(iii)), no positive obligation is imposed on the Data Fiduciary to actually implement such mitigation measures. The Rules and the Act merely presume that the Data Fiduciary will take them, which is perhaps why there are notification requirements for such breaches. This could lead to a situation where a Data Fiduciary takes no measures to mitigate the risks arising out of a data breach and yet remains in compliance with its legal obligations by merely notifying the Data Principal and the DPBI that no measures have been taken. In addition, the SPDI Rules state that in the event of a breach, the body corporate is required to demonstrate that it had implemented reasonable security standards. This provision could be incorporated in this Rule to emphasise the need to implement robust security standards, which is one of the ways to prevent data breaches, and to ensure that there is a protocol to mitigate a breach when it occurs.
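A minimal sketch of the threshold-based notification we recommend for Rule 7 is given below: the Data Protection Board is informed of every breach, while the data principal is informed only when the breach is likely to result in a high risk to them. The specific risk factors and the “high risk” test are illustrative assumptions, not text from the Act or the Rules.

```python
def who_to_notify(records_affected: int,
                  data_was_encrypted: bool,
                  includes_financial_or_health_data: bool) -> dict:
    """Decide whom to notify after a breach under the recommended approach.

    The Board is always notified; the data principal only when the breach is
    assessed as likely to result in a high risk to them. The high-risk test
    here is a placeholder assumption for illustration.
    """
    high_risk = (
        includes_financial_or_health_data
        or (records_affected > 1 and not data_was_encrypted)
    )
    return {
        "data_protection_board": True,   # every breach is reported to the Board
        "data_principal": high_risk,     # individuals informed only on high risk
    }

# An isolated breach of encrypted, non-sensitive data: Board only.
print(who_to_notify(records_affected=1, data_was_encrypted=True,
                    includes_financial_or_health_data=False))
```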
Rule 10 - Verifiable consent for processing of personal data of a child or of a person with disability who has a lawful guardian - The two mechanisms provided under the Rules to verify the age and identity of parents presuppose a high degree of digital literacy on the part of the parents. Parents may either give or refuse consent without thinking much about the consequences of doing so. As there is always a risk of individuals not providing correct information about their age or their relationship with the child, platforms may have to verify every user’s age, thereby preventing users from accessing the platform anonymously. Further, there is also a risk of data maximisation rather than data minimisation; i.e., parents may be required to provide far more information than is needed to prove their identity. One recommendation we propose is to remove the processing of children’s personal data from the ambit of this law and instead create a separate, standalone legislation dealing with children’s digital rights. Another important issue to highlight here is the importance of the Data Protection Board and its capacity to levy fines and impose strictures on platforms. We have seen from examples in other countries that platforms are forced to redesign and provide better privacy and data protection mechanisms when the regulator steps in and imposes high penalties.
Rule 12 - Additional obligations of Significant Data Fiduciary - The Rules do not clarify which entities will be considered Significant Data Fiduciaries, leaving that to government notifications. This creates uncertainty for data fiduciaries, especially smaller organisations that might not be able to set up the mechanisms and staff needed for conducting data protection impact assessments and audits. The Rule provides that SDFs will have to conduct an annual Data Protection Impact Assessment (DPIA). While this is a step in the right direction, the Rules are currently silent on the granularity of the DPIA. Similarly, for the “audit”, the Rules do not clarify what type of audit is needed and what its parameters are. It is therefore imperative that the government notify the level of detail that the DPIA and the audit need to go into, to ensure that SDFs actually address areas where their data governance practices are lacking and do not use the DPIA as a whitewashing tactic. There is also a need to reduce some of the ambiguity around parameters and responsibilities, in order to make it easier for startups and smaller players to comply with the regulations. In addition, while there is a need to protect data and increase the responsibility of organisations collecting sensitive data or large volumes of data, there is a need to look beyond compliance and at ways of preserving the rights of the data principal. Hence, Significant Data Fiduciaries should also be given the added responsibility of collecting explicit consent from the data principal, and should provide easier access to correction of data, grievance redressal, and withdrawal of consent.
Rule 14 - Processing of personal data outside India - As per Section 16 of the Act, the government may, by notification, restrict the transfer of data to specified countries. This system of a negative list envisaged under the Act appears to have been diluted somewhat by the use of the phrase “any foreign State” in the Rules. This ambiguity should be addressed and the language in the Rules altered to bring it in line with the Act. Further, the Rules also appear to be ultra vires the Act: as per the DPDP Act, personal data may be transferred outside India except to countries on the negative list; however, the dilution of the provision through the Rules appears to have created a white list, i.e. a permissible list of countries to which data can be transferred.
Rule 15 - Exemption from Act for research, archiving or statistical purposes - While creating an exception for research and statistical purposes is an understandable objective, the current wording of the provision is vague and open to mischief. The objective behind the provision is to ensure that research activities are not hindered by the requirements of taking consent, etc., as required under the Act. However, as the provision is currently drafted, it could be argued that a research lab or research centre established by a large company, e.g. Google or Meta, could also seek exemptions from the provisions of this Act for conducting “research”. The research conducted may not be shared with the public in general and may be used by the companies that funded or established the research centre. Therefore, further conditions should be attached to this provision to keep such research centres outside the purview of the exemption; conditions such as making the results of the research publicly available, a public-interest requirement, etc. could be considered for this purpose.
Rule 22 - Calling for information from data fiduciary or intermediary - This Rule, read with the Seventh Schedule, appears to dilute the data minimisation and purpose limitation provisions provided for in the Act. The wide ambit of powers appears to contravene the Supreme Court judgement in the Puttaswamy case, which places certain restrictions on the government while collecting personal data. This “omnibus” provision flouts guardrails like necessity and proportionality that are important to safeguard the fundamental right to privacy.
It should be clarified whether this Rule is merely an enabling provision to facilitate the sharing of information, and whether only designated competent authorities as per law can avail of this provision.

Need for Confidentiality
Additionally, the Rule mandates that the government may “require the Data Fiduciary or intermediary to not disclose” any request for information made under the Act. There is no requirement of confidentiality indicated in the governing section, i.e. Section 36, from which Rule 22 derives its authority. On the avoidance of secrecy in government business, the Supreme Court in State of U.P. v. Raj Narain, (1975) 4 SCC 428 held that
“In a government of responsibility like ours, where all the agents of the public must be responsible for their conduct, there can but few secrets. The people of this country have a right to know every public act, everything, that is done in a public way, by their public functionaries. They are entitled to know the particulars of every public transaction in all its bearing. The right to know, which is derived from the concept of freedom of speech, though not absolute, is a factor which should make one wary, when secrecy is claimed for transactions which can, at any rate, have no repercussions on public security (2). To cover with [a] veil [of] secrecy the common routine business, is not in the interest of the public. Such secrecy can seldom be legitimately desired. It is generally desired for the purpose of parties and politics or personal self-interest or bureaucratic routine. The responsibility of officials to explain and to justify their acts is the chief safeguard against oppression and corruption.”
In order to ensure that state interests are also protected, there may be an enabling provision whereby in certain instances confidentiality may be maintained, but there has to be a supervisory mechanism whereby such action may be judged on the anvil of legal propriety.
Education, Epistemologies and AI: Understanding the role of Generative AI in Education
Emotional Contagion: Theorising the Role of Affect in COVID-19 Information Disorder
By incorporating theoretical frameworks from psychology, sociology, and communication studies, we reveal the complex foundations of both the creation and consumption of misinformation. From this research, fear emerged as the predominant emotional driver in both the creation and consumption of misinformation, demonstrating how negative affective responses frequently override rational analysis during crises. Our findings suggest that effective interventions must address these affective dimensions through tailored digital literacy programs, diversified information sources on online platforms, and expanded multimodal misinformation research opportunities in India.
The Cost of Free Basics in India: Does Facebook's 'walled garden' reduce or reinforce digital inequalities?
In 2015, Facebook introduced internet.org in India, and it faced a lot of criticism. The programme was relaunched as Free Basics, ostensibly to provide free access to the Internet to economically deprived sections of society. The content, i.e. the websites, was pre-selected by Facebook and provided by third-party providers. Later, the Telecom Regulatory Authority of India (TRAI) ruled in favour of net neutrality, banning the programme in India. A crucial conversation in this debate was also about whether the Free Basics programme would actually be helpful for those it set out to support.
This paper examines Facebook’s Free Basics programme and its perceived role in bridging digital divides in the context of India, where it was widely debated, criticised, and finally banned in a ruling from the Telecom Regulatory Authority of India (TRAI). While the debate on the Free Basics programme has largely revolved around the principles of network neutrality, this paper tries to examine it from an ICT4D perspective, embedding the discussion in key development paradigms.
The essay begins by introducing the Free Basics programme in India and the associated proceedings, following which existing literature is reviewed to explore the concept of development and the perceived role of ICT in development, thus laying out the scope of the discussion. The essay then examines whether the Free Basics programme reduces or reinforces digital inequality by looking at three development paradigms: (1) the construction of knowledge, power structures, and virtual colonisation in the Free Basics programme; (2) a sub-internet of the marginalised: second-level digital divides; and (3) the capabilities approach and the premise of connectivity as a source of equality and freedom.
The essay concludes with the view that digital access should be seen as a subset of overall contextual development, rather than pursued through standalone programmes and purely techno-solutionist approaches. There is a requirement for effective needs identification as part of ICT4D research, so that users are located at the centre, not the periphery, of the discussion. Lastly, policymakers should look into addressing more basic concerns, such as access and connectivity, and not just solutions that can be claimed as “quick wins” in policy implementation.
Mapping the Legal and Regulatory Frameworks of the Ad-Tech Ecosystem in India
In this paper, we try to map the legal and regulatory framework dealing with Advertising Technology (Adtech) in India as well as a few other leading jurisdictions. Our analysis is divided into three main parts, the first being general consumer regulations, which apply to all advertising irrespective of the media – to ensure that advertisements are not false or misleading and do not violate any laws of the country. This part also covers the consumer laws which are specific to malpractices in the technology sector such as Dark Patterns, Influencer based advertising, etc.
The second part of the paper covers data protection laws in India and how they are relevant for the Adtech industry. The Adtech industry requires and is based on the collection and processing of large amounts of data from the users. It is therefore important to discuss the data protection and consent requirements that have been laid out in the spate of recent data protection regulations, which have the potential to severely impact the Adtech industry.
The last part of the paper covers the competition angle of the Adtech industry. As with social media intermediaries, the Adtech industry worldwide is dominated by two or three players, and such a scenario lends itself easily to anti-competitive practices. It is therefore imperative to examine the competition law framework to see whether the laws as they exist are robust enough to deal with any anti-competitive practices that may be prevalent in the Adtech sector.
The research was reviewed by Pallavi Bedi; it can be accessed here.