Blog
Information Disorders and their Regulation
In the last few years, ‘fake news’ has garnered interest across the political spectrum, as affiliates of both the ruling party and its opposition have seemingly partaken in its proliferation. The COVID-19 pandemic intensified this phenomenon, allowing xenophobic and communal narratives, as well as false information about health-protective behaviour, to flourish, all with potentially deadly effects. This report maps and analyses the government’s regulatory approach to information disorders in India and makes suggestions for how to respond to the issue.
In this study, we gathered information from general search engines, legal databases, and crime statistics databases to collect data on a) regulations, notifications, ordinances, judgments, tender documents, and any other legal and quasi-legal materials that have attempted to regulate ‘fake news’ in any format; and b) news reports and accounts of arrests made for allegedly spreading ‘fake news’. Analysing this data allows us to identify the flaws in, and scope for misuse of, the existing system. It also gives us a sense of the challenges associated with regulating this increasingly complicated issue while avoiding the pitfalls of the present system.
Click to download the full report here.
Reconfiguring Data Governance: Insights from India and the EU

The workshop aimed to compare and assess lessons in data governance from India and the European Union, and to make recommendations on how to design fit-for-purpose institutions for governing data and AI in the European Union and India.
This policy paper collates key takeaways from the workshop by grounding them across three key themes: how we conceptualise data; how institutional mechanisms as well as community-centric mechanisms can work to empower individuals, and what notions of justice these embody; and finally a case study of enforcement of data governance in India to illustrate and evaluate the claims in the first two sections.
This report was a collaborative effort between researchers Siddharth Peter De Souza, Linnet Taylor, and Anushka Mittal at the Tilburg Institute for Law, Technology and Society (Netherlands); Swati Punia, Sristhti Joshi, and Jhalak M. Kakkar at the Centre for Communication Governance at the National Law University Delhi (India); and Isha Suri and Arindrajit Basu at the Centre for Internet & Society, India.
Click to download the report
India’s parental control directive and the need to improve stalkerware detection
This post was reviewed and edited by Amrita Sengupta.
Stalkerware is a form of surveillance targeted primarily at partners, employees and children in abusive relationships. These are software tools that enable abusers to spy on a person’s mobile device, allowing them to remotely access all data on the device, including calls, messages, photos, location history, browsing history, app data, and more. Stalkerware apps run hidden in the background without the knowledge or consent of the person being surveilled.[1] Such applications are easily available online and can be installed by anyone with little technical know-how and physical access to the device.
News reports indicate that the Ministry of Electronics and Information Technology (MeitY) is supporting the development of an app called “SafeNet”[2] that allows parents to monitor activity and set content filters on children’s devices. Following a directive from the Prime Minister’s office to “incorporate parental controls in data usage” by July 2024, the Internet Service Providers Association of India (ISPAI) has suggested that the app should come preloaded on mobile phones and personal computers sold in the country. The Department of Telecom is also asking schools to raise awareness about such parental control solutions.[3][4]
The beta version of the app is available for Android devices on the Google Play Store and advertises a range of functionalities, including location access, monitoring of website and app usage, call and SMS logs, screen time management, and content filtering. The content filtering functionality warrants a separate analysis; this post focuses only on the app’s surveillance capabilities.
Applications like SafeNet, which do not attempt to hide themselves and claim to operate with the knowledge of the person being surveilled, are sometimes referred to as “watchware”.[5] However, for all practical purposes, these apps are indistinguishable from stalkerware: they possess the same surveillance capabilities and can be deployed in exactly the same ways. Such apps sometimes incorporate safeguards to notify users that their device is being monitored, such as persistent notifications in the device’s status bar or a visible app icon on the home screen. However, these safeguards can be circumvented with little effort. The notifications can simply be turned off on some devices, and third-party Android tools allow app icons and notifications to be hidden from the device user, letting watchware be repurposed as stalkerware and operate secretly on a device. This leaves very little room for distinction between stalkerware and watchware apps.[6] In fact, the developers of stalkerware apps often advertise their tools as watchware, instructing users to only use them for legitimate purposes.
Even in cases where stalkerware applications are used in line with their stated purpose of monitoring minors’ internet usage, the effectiveness of a surveillance-centric approach is suspect. Our previous work on children’s privacy has questioned the treatment of all minors under the age of 18 as a homogenous group, arguing for a distinction between the internet usage of a 5-year-old child and a 17-year-old teenager. We argue that educating and empowering children to identify and report online harms is more effective than attempts to surveil them.[7][8] Most smartphones already come with options to enact parental controls on screen time and application usage[9][10], and the need for third-party applications with surveillance capabilities is not justified.
Studies and news reports show the increasing role of technology in intimate partner violence (IPV).[11][12] Interviews with IPV survivors and support professionals indicate an interplay of socio-technical factors, showing that abusers leverage the intimate nature of such relationships to gain access to accounts and devices to exert control over the victim. They also indicate the prevalence of “dual-use” apps such as child-monitoring and anti-theft apps that are repurposed by abusers to track victims.[13]
There is some data available that indicates the use of stalkerware apps in India. Kaspersky anti-virus’ annual State of Stalkerware reports consistently place India among the top four countries by number of infections detected by its product, with a few thousand infections reported each year between 2020 and 2023.[14][15][16][17] TechCrunch’s Spyware Lookup Tool, which compiles information from data leaks from more than nine stalkerware apps to notify victims, also identifies India as a hotspot for infections.[18] Avast, another antivirus provider, reported a 20% rise in the use of stalkerware apps during COVID-19 lockdowns.[19] The high incidence of intimate partner violence in India, with the National Family Health Survey reporting that about a third of all married women aged 18–49 years have experienced spousal violence,[20] also increases the risk of digitally-mediated abuse.
Survivors of digitally-mediated abuse often require specialised support in handling such cases to avoid alerting abusers and potential escalations. As part of our ongoing work on countering digital surveillance, we conducted an analysis of seven stalkerware applications, including two that are based in India, to understand and improve how survivors and support professionals can detect their presence on devices.
In some cases, where it is safe to operate the device, antivirus solutions can be of use. Antivirus tools can often identify the presence of stalkerware and watchware on a device, categorising them as a type of malware. We measured how effective various commercial antivirus solutions are at detecting stalkerware applications. Our results, detailed in the Appendix, indicate reasonably good coverage, with six out of the seven apps being flagged as malicious by various antivirus solutions. We found that SafeNet, the newest app on the list, was not detected by any antivirus. We also compared the detection results with a similar study conducted in 2019 [21] and found that some newer versions of previously known apps saw lower rates of detection. This indicates that antivirus solutions need to analyse new apps and newer versions of apps more frequently to improve coverage and understand how they are able to evade detection.
In cases where the device cannot be operated safely, support workers use specialised forensic tools such as the Mobile Verification Toolkit [22] and Tinycheck [23], which can be used to analyse devices without modifying them. We conducted malware analysis on the stalkerware apps to document the traces they leave on devices and submitted them to an online repository of indicators of compromise (IOCs).[24] These indicators are incorporated in detection tools used by experts to detect stalkerware infections.
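To illustrate how package-name IOCs of this kind can be used in practice, here is a minimal sketch (not part of our published tooling) that compares the apps installed on an Android device against a local list of known stalkerware package names. It assumes adb is available and USB debugging is enabled, and `iocs.txt` is a hypothetical one-package-name-per-line export from an IOC repository.

```python
# Minimal sketch: flag installed packages that appear in an IOC list.
# Assumes adb is on PATH and "iocs.txt" is a hypothetical local file
# with one known stalkerware package name per line.
import subprocess

def installed_packages() -> set[str]:
    # "pm list packages" prints lines like "package:com.example.app"
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {
        line.removeprefix("package:").strip()
        for line in out.splitlines()
        if line.startswith("package:")
    }

def load_iocs(path: str = "iocs.txt") -> set[str]:
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

if __name__ == "__main__":
    for pkg in sorted(installed_packages() & load_iocs()):
        print(f"Possible stalkerware package: {pkg}")
```

Real detection tools go further than this, matching file hashes, signing certificates, and network indicators rather than package names alone, since stalkerware developers frequently change the latter.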
Despite efforts to support survivors and stop the spread of stalkerware applications, the use of technology in abusive relationships continues to grow.[25] Making a surveillance tool like SafeNet available for free, publicising it for widespread use, and potentially preloading it on mobile devices and personal computers sold in the country is an ill-conceived way to enact parental controls and will lead to an increase in digitally-mediated abuse. The government should immediately take this application out of the public domain and work on developing alternative child protection policies that are not rooted in distrust and surveillance.
If you are affected by stalkerware, some resources are available here:
https://stopstalkerware.org/information-for-survivors/
https://stopstalkerware.org/resources/
Appendix
Our analysis covered two apps based in India, SafeNet and OneMonitar, and five other apps: Hoverwatch, TheTruthSpy, Cerberus, mSpy and FlexiSPY. All samples were obtained directly from the developers’ websites. The details of the samples are as follows:
| Name | File name | Version | Date sample was obtained | SHA-1 Hash |
| SafeNet | Safenet_Child.apk | 0.15 | 16th March, 2024 | d97a19dc2212112353ebd84299d49ccfe8869454 |
| OneMonitar | ss-kids.apk | 5.1.9 | 19th March, 2024 | 519e68ab75cd77ffb95d905c2fe0447af0c05bb2 |
| Hoverwatch | setup-p9a8.apk | 7.4.360 | 5th March, 2024 | 50bae562553d990ce3c364dc1ecf44b44f6af633 |
| TheTruthSpy | TheTruthSpy.apk | 23.24 | 5th March, 2024 | 8867ac8e2bce3223323f38bd889e468be7740eab |
| Cerberus | Cerberus_disguised.apk | 3.7.9 | 4th March, 2024 | 75ff89327503374358f8ea146cfa9054db09b7cb |
| mSpy | bt.apk | 7.6.0.1 | 21st March, 2024 | f01f8964242f328e0bb507508015a379dba84c07 |
| FlexiSPY | 5009_5.2.2_1361.apk | 5.2.2 | 26th March, 2024 | 5092ece94efdc2f76857101fe9f47ac855fb7a34 |
We analysed the network activity of these apps to check what web servers they send their data to. With increasing popularity of Content Delivery Networks (CDNs) and cloud infrastructure, these results may not always give us an accurate idea about where these apps originate, but can sometimes offer useful information:
| Name | Domain | IP Address[26] | Country | ASN Name and Number |
| SafeNet | safenet.family | 103.10.24.124 | India | Amrita Vishwa Vidyapeetham, AS58703 |
| OneMonitar | onemonitar.com | 3.15.113.141 | United States | Amazon.com, Inc., AS16509 |
| OneMonitar | api.cp.onemonitar.com | 3.23.25.254 | United States | Amazon.com, Inc., AS16509 |
| Hoverwatch | hoverwatch.com | 104.236.73.120 | United States | DigitalOcean, LLC, AS14061 |
| Hoverwatch | a.syncvch.com | 158.69.24.236 | Canada | OVH SAS, AS16276 |
| TheTruthSpy | thetruthspy.com | 172.67.174.162 | United States | Cloudflare, Inc., AS13335 |
| TheTruthSpy | protocol-a946.thetruthspy.com | 176.123.5.22 | Moldova | ALEXHOST SRL, AS200019 |
| Cerberus | cerberusapp.com | 104.26.9.137 | United States | Cloudflare, Inc., AS13335 |
| mSpy | mspy.com | 104.22.76.136 | United States | Cloudflare, Inc., AS13335 |
| mSpy | mobile-gw.thd.cc | 104.26.4.141 | United States | Cloudflare, Inc., AS13335 |
| FlexiSPY | flexispy.com | 104.26.9.173 | United States | Cloudflare, Inc., AS13335 |
| FlexiSPY | djp.bz | 119.8.35.235 | Hong Kong | HUAWEI CLOUDS, AS136907 |
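As a rough sketch of how the lookups in the table above can be reproduced, the snippet below resolves a domain and queries the free ipinfo.io JSON endpoint (the same service cited in endnote 26) for the country and organisation behind the resulting IP address. Results will vary over time as hosting changes, and the domain list here is just an illustrative subset.

```python
# Sketch: resolve domains contacted by the apps and look up who hosts them.
# Uses ipinfo.io's JSON endpoint; its "org" field combines ASN and network name.
import json
import socket
from urllib.request import urlopen

DOMAINS = ["safenet.family", "onemonitar.com", "hoverwatch.com"]

for domain in DOMAINS:
    ip = socket.gethostbyname(domain)  # first A record only
    with urlopen(f"https://ipinfo.io/{ip}/json") as resp:
        info = json.load(resp)
    print(domain, ip, info.get("country"), info.get("org"))
```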
To understand whether commercial antivirus solutions are able to categorise stalkerware apps as malicious, we used a tool called VirusTotal, which aggregates checks from over 70 antivirus scanners.[27] We uploaded hashes (i.e. unique signatures) of each sample to VirusTotal and recorded the total number of detections by various antivirus solutions. We compared our results to a similar study by Citizen Lab in 2019 [28] that looked at a similar set of apps to identify changes in detection rates over time.
| Product | VirusTotal Detections (March 2024) | VirusTotal Detections (January 2019) (By Citizen Lab) |
| SafeNet [29] | 0/67 (0%) | N/A |
| OneMonitar [30] | 17/65 (26.1%) | N/A |
| Hoverwatch | 24/58 (41.4%) | 22/59 (37.3%) |
| TheTruthSpy | 38/66 (57.6%) | 0 |
| Cerberus | 8/62 (12.9%) | 6/63 (9.5%) |
| mSpy | 8/63 (12.7%) | 20/63 (31.7%) |
| Flexispy [31] | 18/66 (27.3%) | 34/63 (54.0%) |
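A minimal sketch of the hash lookup described above, using VirusTotal’s v3 REST API, is shown below. An API key is required (the value below is a placeholder), and the counters in `last_analysis_stats` can be grouped in more than one way, so a script like this may not reproduce our ratios exactly.

```python
# Sketch: ask VirusTotal how many engines flag a sample, given its hash.
import json
from urllib.request import Request, urlopen

VT_API_KEY = "YOUR_API_KEY"  # placeholder; obtain a key from virustotal.com

def detection_ratio(sha1: str) -> tuple[int, int]:
    req = Request(
        f"https://www.virustotal.com/api/v3/files/{sha1}",
        headers={"x-apikey": VT_API_KEY},
    )
    with urlopen(req) as resp:
        stats = json.load(resp)["data"]["attributes"]["last_analysis_stats"]
    # Count engines that classified the sample as malicious or suspicious
    flagged = stats.get("malicious", 0) + stats.get("suspicious", 0)
    return flagged, sum(stats.values())

# SHA-1 of the Hoverwatch sample from the table of samples above
flagged, total = detection_ratio("50bae562553d990ce3c364dc1ecf44b44f6af633")
print(f"{flagged}/{total} engines flagged the sample")
```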
We also checked whether these apps are detected by Google’s Play Protect service [32], a malware detection tool built into Android devices that use Google’s Play Store. These results were also compared with similar checks performed by Citizen Lab in 2019.
| Product | Detected by Play Protect (March 2024) | Detected by Play Protect (January 2019) (By Citizen Lab) |
| SafeNet | no | N/A |
| OneMonitar | yes | N/A |
| Hoverwatch | yes | yes |
| TheTruthSpy | yes | yes |
| Cerberus | yes | no |
| mSpy | yes | yes |
| Flexispy | yes | yes |
Endnotes
1. Definition adapted from Coalition Against Stalkerware, https://stopstalkerware.org/
2. https://web.archive.org/web/20240316060649/https://safenet.family/
5. https://github.com/AssoEchap/stalkerware-indicators/blob/master/README.md
6. https://cybernews.com/privacy/difference-between-parenting-apps-and-stalkerware/
7. https://timesofindia.indiatimes.com/blogs/voices/shepherding-children-in-the-digital-age/
8. https://blog.avast.com/stalkerware-and-children-avast
9. https://safety.google/families/parental-supervision/
10. https://support.apple.com/en-in/105121
11. R. Chatterjee et al., "The Spyware Used in Intimate Partner Violence," 2018 IEEE Symposium on Security and Privacy (SP), 2018, pp. 441-458.
13. D. Freed et al., "Digital technologies and intimate partner violence: A qualitative analysis with multiple stakeholders", PACM: Human-Computer Interaction: Computer-Supported Cooperative Work and Social Computing (CSCW), vol. 1, no. 2, 2017.
18. https://techcrunch.com/pages/thetruthspy-investigation/
19. https://www.thenewsminute.com/atom/avast-finds-20-rise-use-spying-and-stalkerware-apps-india-during-lockdown-129155
20. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10071919/
21. https://citizenlab.ca/docs/stalkerware-holistic.pdf
22. https://docs.mvt.re/en/latest/
23. https://tiny-check.com/
24. https://github.com/AssoEchap/stalkerware-indicators/pull/125
25. https://stopstalkerware.org/2023/05/15/report-shows-stalkerware-is-not-declining/
26. IP information provided by https://ipinfo.io/
27. https://docs.virustotal.com/docs/how-it-works
28. https://citizenlab.ca/docs/stalkerware-holistic.pdf
29. Sample was not known to VirusTotal, it was uploaded at the time of analysis
30. Sample was not known to VirusTotal, it was uploaded at the time of analysis
31. Sample was not known to VirusTotal, it was uploaded at the time of analysis
Consultation on Gendered Information Disorder in India
The event was convened by Amrita Sengupta (Research and Programme Lead, CIS), Yesha Tshering Paul (Researcher, CIS), Bishakha Datta (Programme Lead, POV) and Prarthana Mitra (Project Anchor, POV).* Download the event report here.
The event brought together experts, researchers and grassroots activists from Maharashtra and across the country to discuss their experiences with information disorder, and the multifaceted challenges posed by misinformation, disinformation and malinformation targeting gender and sexual identities.
Understanding Information Disorders: The consultation commenced with a look at the wide spectrum of information disorder by Yesha Tshering Paul and Amrita Sengupta. Misinformation[1] was highlighted as false information disseminated unintentionally, such as inaccurate COVID cures that spread rapidly during the pandemic. In contrast, disinformation involves the intentional spread of false information to cause harm, exemplified by instances like deepfake pornography. A less recognised form, malinformation, involves the deliberate misuse of accurate information to cause harm, as seen in the misleading representation of regret rates among trans individuals who have undergone gender-affirming procedures. Yesha highlighted that definitions of these concepts vary widely, and stressed the importance of moving beyond definitions to centre user experiences of this phenomenon.
The central theme of this discussion was the concept of “gendered” information disorder, referring to the targeted dissemination of false or harmful online content based on gender and sexual identity. This form of digital misogyny intersects with other societal marginalisations, disproportionately affecting marginalised genders and sexualities. The session also emphasised the critical link between information disorders and gendered violence (both online and in real life). Such disorders perpetuate stereotypes and gender-based violence, and silence victims, fostering an environment that empowers perpetrators and undermines victims’ experiences.
Feminist Digital Infrastructure: Digital infrastructures shape our online spaces. Sneha PP (Senior Researcher, CIS) introduced the concept of feminist infrastructures as a potential solution that helps mediate discourse around gender, sexuality, and feminism in the digital realm. Participant discussions emphasised the need for accessible, inclusive, and design-conscious digital infrastructures that consider the intersectionality and systemic inequalities impacting content creation and dissemination. Strategies were discussed to address online gender-based violence and misinformation, focusing on survivor-centric approaches and leveraging technology for storytelling.
Gendered Financial Mis-/Dis-information: Garima Agrawal (Researcher, CIS) with inputs by Debarati Das (Co-Lead, Capacity Building at PoV) and Chhaya Rajput (Helpline Facilitator, Tech Sakhi) led the session by highlighting gender disparities in digital and financial literacy and access to digital devices and financial services in India, despite women constituting a higher percentage of new internet users. This makes marginalised users more vulnerable to financial scams. Drawing from the ongoing financial harms project at CIS, Garima spoke about the diverse manifestations of financial information disorders arising from misleading information that results in financial harm, ranging from financial influencers (and in some cases deepfakes of celebrities) endorsing platforms they do not use, to fake or unregulated loan and investment services deceiving users. Breakout groups of participants then analysed several case studies of real-life financial frauds that targeted women and the queer community to identify instances of misinformation, disinformation and malinformation. Emotional manipulation and the exploitation of trust were identified as key tactics used to deceive victims, with repercussions extending beyond monetary loss to emotional, verbal, and even sexual violence against these individuals.
Fact-Checking Fake News and Stories: The pervasive issue of fake news in India was discussed in depth, especially in the era of widespread social media usage; only 41% of Indians trust the veracity of the information they encounter online. Aishwarya Varma, who works at Webqoof (The Quint’s fact-checking initiative) as a Fact Check Correspondent, led an informative session detailing the various accessible tools that can be used to fact-check and debunk false information. Participants engaged in hands-on activities by using their smartphones for reverse image searches, emphasising the importance of verifying images and their sources. Archiving was identified as another crucial aspect of preserving accurate information and debunking misinformation.
Gendered Health Mis-/Dis-information: This participant-led discussion highlighted structural gender biases in healthcare and limited knowledge about mental health and menstrual health as significant concerns, along with the discrimination and social stigma faced by the LGBTQ+ community in healthcare facilities. One participant brought up their difficulty accessing sensitive and non-judgmental healthcare, and the insensitivity and mockery faced by them and other trans individuals in healthcare facilities. Participants suggested the increased need for government-funded campaigns on sexual and reproductive health rights and menstrual health, and the importance of involving marginalised communities in healthcare related decision-making to bring about meaningful change.
Mis-/Dis-information around Sex, Sexuality, and Sexual Orientation: Paromita Vohra, Founder and Creative Director of Agents of Ishq—a multi-media project about sex, love and desire that uses various artistic mediums to create informational material and an inclusive, positive space for different expressions of sex and sexuality—led this session. She started with an examination of the term “disorder” and its historical implications, and highlighted how religion, law, medicine, and psychiatry had previously led to the classification of homosexuality as a “disorder”. The session delved into the misconceptions surrounding sex and sexuality in India, advocating for a broader understanding that goes beyond colonial knowledge systems and standardised sex education. She brought up the role of media in altering perspectives on factual events, and the need for more initiatives like Agents of Ishq to address the need for culturally sensitive and inclusive sexuality language and education that considers diverse experiences, emotions, and identities.
Artificial Intelligence and Mis-/Dis-information: Padmini Ray Murray, Founder of Design Beku—a collective that emerged from a desire to explore how technology and design can be decolonial, local, and ethical—talked about the role of AI in amplifying information disorder, and the ethical concerns stemming from its biases in language representation and content generation. Hindi and regional Indian languages remain significantly under-represented in comparison to English content, leading to skewed AI-generated content. Search results reflect the gendered biases in AI, perpetuating existing stereotypes and reinforcing societal biases. She highlighted the real-world impacts of AI on critical decision-making processes such as loan approvals, and the influence of AI on public opinion via media and social platforms. Participants expressed concerns about the ethical implications of AI, and emphasised the need for responsible AI development, clear policies, and collaborative efforts between tech experts, policymakers, and the public.
* The Centre for Internet and Society undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. Point of View focuses on sexuality, disability and technology to empower women and other marginalised genders to shape and inhabit digital spaces.
[1] Claire Wardle, Understanding Information Disorder (2020). https://firstdraftnews.org/long-form-article/understanding-information-disorder/.
Comments to the Draft Digital Competition Bill, 2024
We would like to thank the Ministry of Corporate Affairs for soliciting public comments on this important legislation and are grateful for this opportunity.
At the outset, CIS affirms the Committee’s approach of transitioning from a predominantly ex-post to an ex-ante approach for regulating competition in digital markets. The Committee’s assessment that the ex-post regime is too time-consuming for the digital domain is substantiated by frequent and expensive delays in antitrust disputes, a fact that has also recently drawn the attention of the Ministry of Corporate Affairs. This is not unique to India: the ex-post regime has been found too time-consuming in other jurisdictions as well, as a consequence of which many other countries are moving towards an ex-ante regime for digital markets. This also brings India into harmony with both developing and developed countries, making the regulation of global competition more consistent and efficient. In fact, “international cooperation between competition authorities” and “greater coherence between regulatory frameworks” are key in facilitating global investigations and lowering the cost of doing business.
Moreover, by adopting a principles-based approach to designing the law’s obligations, the draft Bill also addresses the concern that ex-ante regulations, due to their prescriptive nature, tend to be sector-agnostic. The fact that these principles are based on the findings of the Parliamentary Standing Committee’s (PSC) Report on ‘Anti-Competitive Practices by Big Tech Companies’ lends them further evidentiary support. The draft DCB empowers the Commission to clarify the Obligations for different services, and also provides CCI with the flexibility to undertake independent consultations to accommodate varying contexts and the needs of different core digital services. We do, however, have specific comments regarding the implementation of some of these provisions, which are elaborated in the accompanying document.
We would also like to emphasise that adequate enforcement of an ex-ante approach requires bolstering and strengthening regulatory capacity. Therefore, to minimise risks relating to under-enforcement as well as over-enforcement, CCI, its Digital Markets and Data Unit (DMDU), and the Director General’s (DG) office will have to substantially increase their technical capacity. A comparison of CCI’s current strength with its global counterparts that have adopted or are in the process of adopting an ex-ante approach to competition regulation reveals a stark picture. For example, the European Union (EU) had over 870 people in its DG COMP unit in 2022, and its DG CONNECT unit is expected to hire another 100 people in 2024 alone. Similarly, the United Kingdom’s Competition and Markets Authority (CMA) has a permanent staff of 800+, the Japan Fair Trade Commission (JFTC) has about 400 officials just for regulating anti-competitive conduct, and South Korea’s KFTC has about 600 employees. In contrast, CCI and the DG, combined, have a sanctioned strength of only 195 posts, of which 71 remain vacant. Bridging this capacity gap through frequent and high-quality recruitment is, therefore, the need of the hour. Most importantly, there is a need to create a culture of interdisciplinary coordination among the legal, technical, and economic domains.
Moreover, as we come to rely on an increasingly digitised economy, most technology companies will work with critical technology components, ranging from key infrastructure, algorithms, and Artificial Intelligence to business models based on data collection and processing practices. Consequently, there will be a need to bolster CCI’s capacity in the technical domain by hiring and integrating new roles, including technologists, software and hardware engineers, product managers, UX designers, data scientists, investigative researchers, and subject matter experts dealing with new and emerging areas of technology. Therefore, we recommend that CCI ensure that the proposed DMDU has the requisite diversity of skills to effectively use existing tools for enforcement and is also able to keep pace with new and emerging technological developments.
Along with this overall observation of CCI's capacity, we have also submitted detailed comments on specific clauses of the draft DCB. These submissions are structured across the following six categories: i) Classification of Core Digital Services; ii) Designation of a Systemically Significant Digital Enterprise (SSDE) and Associate Digital Enterprise (ADE); iii) Obligations on SSDEs and ADEs; iv) Powers of the Commission to Conduct an Inquiry; v) Penalties and Appeals; and vi) Powers of the Central Government. In addition to these suggestions, the detailed comments and their summarised version focus on three important gaps in the draft DCB – limited representation from workers’ groups and MSMEs, exclusion of merger and acquisition (M&A) from the discussions, and lack of a formalised framework for interregulatory coordination.
For our full comments, click here
For a detailed summary of our comments, click here
A Guide to Navigating Your Digital Rights
The Digital Rights Guide gives practical guidance on the laws and procedures that affect internet freedoms. It covers the following topics:
- Internet Shutdowns
- Content Takedown
- Surveillance
- Device Seizure
The Digital Rights Guide can be viewed here.
Legal Advocacy Manual
Click to download the manual.
Draft Circular on Digital Lending – Transparency in Aggregation of Loan Products from Multiple Lenders
Edited and reviewed by Amrita Sengupta
The Centre for Internet and Society (CIS) is a non-profit organisation that undertakes interdisciplinary research on the internet and digital technologies from policy and academic perspectives. Through its diverse initiatives, CIS explores, intervenes in, and advances contemporary discourse and practices around the internet, technology and society in India, and elsewhere.
CIS is grateful for the opportunity to submit comments on the “Draft Circular on Digital Lending: Transparency in Aggregation of Loan Products from Multiple Lenders” to the Reserve Bank of India. Over the last twelve years, CIS has worked extensively on research around privacy, online safety, cross border flows of data, security, and innovation. We welcome the opportunity provided to comment on the guidelines, and we hope that the final guidelines will consider the interests of all the stakeholders to ensure that it protects the privacy and digital rights of all consumers, including marginalised and vulnerable users, while encouraging innovation and improved service delivery in the fintech ecosystem.
Introduction
The draft circular on ‘Transparency in Aggregation of Loan Products from Multiple Lenders’ is a much needed and timely document that builds on the Guidelines on Digital Lending. Both documents have maintained the principles of customer centricity and transparency at their core. Reducing information asymmetry and deceptive patterns in the digital lending ecosystem is of utmost importance, given the adverse effects experienced by borrowers. Digital lending is one of the fastest-growing fintech segments in India,[1] having grown exponentially from USD 9 billion in 2012 to nearly USD 150 billion by 2020, and is estimated to reach USD 515 billion by 2030.[2] At the same time, accessing digital credit through digital lending applications has been found to be associated with a high risk to financial and psychological health due to a host of practices that lead to over-indebtedness.[3] These include post-contract exploitation through hidden transaction fees, abusive debt collection practices, privacy violations, and fluctuations in interest rates. Both illegal/fraudulent and licensed lending service providers have been employing aggressive marketing and debt collection tactics[4] that exacerbate the risks of all the above harms.[5] With additional safeguards in place, the guidelines can provide a suitable framework to ensure borrowers have the opportunity and information needed to make an informed decision while accessing intermediated credit, and to reduce harmful financial and health-related consequences.
In this submission, we seek to provide comments on the broader issues the guidelines address. Our comments recommend additional safeguards, keeping in mind the gamut of services provided by lending service providers (LSPs). We frame our comments around two main concerns addressed by the draft guidelines: 1) reducing information asymmetry and 2) market fairness. In addition, we share comments on a third concern that requires additional scrutiny: 3) data privacy and security.
Reducing Information Asymmetry
The guidelines aim to define responsibilities of LSPs in maintaining transparency to ensure borrowers are aware of the identity of the regulated entity (RE) providing the loan, and make informed decisions based on consistent information to weigh their options.
Comments: Guideline iii suggests that the digital view should include information that helps the borrower compare various loan offers. This includes “the name(s) of the regulated entity (RE) extending the loan offer, amount and tenor of loan, the Annual Percentage Rate (APR) and other key terms and conditions” alongside a link to the key facts statement (KFS). The earlier ‘Guidelines on Digital Lending’ specify that the APR should be an all-inclusive cost including margin, credit costs, operating costs, verification charges, processing fees, etc., excluding only penalties and late payment charges.
Recommendations: All users of digital lending services may not be aware that APR is inclusive of all non-contingent charges. Requiring digital loan aggregators to provide messages/notifications boosting consumer awareness of regulations and their rights can help reduce violations. We also recommend that this information is made available in various languages such that a wide range of users are able to access this information. Further we recommend that accountability be laid on the LSPs to adhere to an inclusive platform design that allows for easy access to this information.
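To make the stakes concrete, here is a minimal worked sketch, using hypothetical loan figures not drawn from the guidelines, of how an upfront processing fee pushes the effective, IRR-based APR above the quoted interest rate; the exact computation method is the one prescribed in the RBI’s Key Facts Statement annexure.

```python
# Hypothetical example: a Rs 1,00,000 loan at a quoted 15% p.a. over 12
# months, with a Rs 2,000 processing fee deducted upfront. The borrower
# receives only Rs 98,000 but repays EMIs computed on the full amount,
# so the all-inclusive APR exceeds the quoted rate.

def emi(principal: float, annual_rate: float, months: int) -> float:
    # Standard equated monthly instalment formula
    r = annual_rate / 12
    return principal * r * (1 + r) ** months / ((1 + r) ** months - 1)

def apr(disbursed: float, instalment: float, months: int) -> float:
    # Bisection for the monthly IRR of the borrower's actual cash flows
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        pv = sum(instalment / (1 + mid) ** t for t in range(1, months + 1))
        lo, hi = (mid, hi) if pv > disbursed else (lo, mid)
    return 12 * (lo + hi) / 2  # simple annualisation of the monthly rate

loan, fee, months = 100_000, 2_000, 12
e = emi(loan, 0.15, months)
print(f"EMI: {e:.0f}; quoted rate: 15.0%; APR with fee: {apr(loan - fee, e, months):.1%}")
```

On these numbers, the fee alone lifts the effective APR from 15% to roughly 18.8%, which is exactly the gap a borrower cannot see if only the nominal rate is displayed.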
Market Fairness
Guidelines ii-iv also outline practices to curb anti-competitive placement of digital loan products by regulating the use of dark patterns and increasing transparency.
Comments: Section ii mandates that LSPs must disclose the approach utilised to determine the willingness of lenders to offer a loan. Whether this estimation includes factors associated with the customer profile, such as age, income, and occupation, should be clearly disclosed as well.
Recommendations: Alongside the predictive estimate of a lender’s willingness, loan aggregators may be asked to share an overall rate of rejection or approval within the digital view to improve transparency.
While the ‘Guidelines on Digital Lending’[6] clearly state that LSPs must collect any fees from the REs and not from borrowers, further clarification should be provided on whether LSPs can charge fees for the loan aggregation service itself, i.e. for providing information on available loan products.
Privacy and Data Security
The earlier ‘Guidelines on Digital Lending’[7] require LSPs to store only minimal contact data regarding the customer and to provide consumers the ability to have their data removed (i.e. the right to be forgotten) once they are no longer using the provider’s services. Personal financial information is not to be stored by LSPs. It is the responsibility of REs to ensure that LSPs do not store extraneous customer data, and to stipulate clear policy guidelines regarding the storage and use of customer data.
Comments: It is important to ascertain the nature of anonymised and personally identifiable customer data that may be currently utilised by LSPs or processed on their platforms, in the course of providing a range of services within the digital credit ecosystem to borrowers and lenders.
Certain functions that loan aggregators perform may expand their role beyond that of a simple intermediary. LSPs also provide services assessing borrowers’ creditworthiness, payment services, and agent-led debt collection services for lenders. Some LSPs may be involved in more than one stage of the loan process, which may make them privy to additional personal information about a borrower. There may be cases in which a consumer registers on an LSP’s platform without going ahead with any loan application. It is unclear who is responsible for maintaining data security and privacy or providing grievance redressal in such cases.
Section ii allows LSPs to provide borrowers with estimates of lenders’ willingness. Some LSPs connecting REs with borrowers may also provide services using alternative and even non-financial data to assess the creditworthiness of thin-file credit seekers. Whether there are any restrictions on the use of AI tools in these processes, and on the handling of customer data, should also be clarified or limited. The right to be forgotten may be difficult to enforce with the use of certain machine learning and other artificial intelligence models. As innovation in credit scoring mechanisms continues, it is also important to bring such financial service providers under the ambit of guidelines for digital lending platforms.
Recommendations: The burden of maintaining privacy and data security should fall on aggregators of loan products in addition to regulated entities. Guidelines should limit the use of PII (and PFI, if applicable) for purposes other than connecting borrowers to a loan provider without consumer consent. Informed and explicit consumer consent should be sought for any additional purposes like marketing, market research, product development, cross-selling, or the delivery of other financial and commercial services, including providing access to other loan products in the future.
Often, consumers are required to register on a platform by providing contact details and other personal information. An initial digital view of available loan products could be displayed to all users without requiring registration, to help borrowers determine whether they would like to register for the LSP’s services. This can help reduce the amount of consumer contact information and other personally identifiable information (PII) collected by LSPs.
Emerging Risks
Emerging consumer risks within the digital lending ecosystem expose borrowers to additional harms such as over-indebtedness, fraud, data misuse, lack of transparency, and inadequate redress mechanisms.[8] These draft guidelines clearly lay out mechanisms to reduce risks arising from a lack of transparency. Similar efforts need to go into reducing data misuse, for instance by delimiting data retention periods, and into addressing the risk of over-indebtedness.
One of the biggest sources of consumer risk has been at the debt recovery stage. Aggressive debt collection practices have had deleterious effects on consumers’ mental health and social standing, and have even led some to consider suicide. Extant guidelines assume that a recovery agent will be contacting the consumer.[9] However, LSPs may also set up automated payments and use digital communication like app notifications, messages, and automated calls in the debt recovery process. The impact of repeated notifications and automated debt payments also needs to be considered in future iterations of guidelines addressing risk in the digital lending ecosystem.
[1] “Funding distribution of FinTech companies in India in second quarter of 2023, by segment”, Statista, accessed 30 May 2024, https://www.statista.com/statistics/1241994/india-fintech-companies-share-by-segment/
[2] Anushka Sengupta, “India’s digital lending market likely to grow $515 bn by 2030: Report”, Economic Times, 17 June 2023, https://bfsi.economictimes.indiatimes.com/news/fintech/indias-digital-lending-market-likely-to-grow-515-bn-by-2030-report/101057337
[3] “Mobile Instant Credit: Impacts, Challenges, and Lessons for Consumer Protection”, Center for Effective Global Action, September 2023, https://cega.berkeley.edu/wp-content/uploads/2023/09/FSP_Digital_Credit_Research_test.pdf
[4] Jinit Parmar, “Ruthless Recovery Agents, Aggressive Loan Outreach Put the Spotlight on Bajaj Finance”, Moneycontrol, 18 April 2023, https://www.moneycontrol.com/news/business/ruthless-recovery-agents-aggressive-loan-outreach-put-spotlight-on-bajaj-finance-10423961.html
[5] Prudhviraj Rupavath, “Suicide Deaths Mount after Unregulated Lending Apps Resort to Exploitative Recovery Practices”, Newsclick, 26 December 2020 https://www.newsclick.in/Suicide-Deaths-Mount-Unregulated-Lending-Apps-Resort-Exploitative-Recovery-Practices
Priti Gupta and Ben Morris, “India's loan scams leave victims scared for their lives”, BBC, 7 June 2022, https://www.bbc.com/news/business-61564038
[6] Section 4.1, Guidelines on Digital Lending, 2022.
[7] Section 11, Guidelines on Digital Lending, 2022.
[8] “The Evolution of the Nature and Scale of DFS Consumer Risks: A Review of Evidence”, CGAP, February 2022, https://www.cgap.org/sites/default/files/publications/slidedeck/2022_02_Slide_Deck_DFS_Consumer_Risks.pdf
[9] Section 2, Outsourcing of Financial Services - Responsibilities of regulated entities employing Recovery Agents, 2022.
Online Censorship: Perspectives From Content Creators and Comparative Law on Section 69A of the Information Technology Act
This paper was reviewed by Krishnesh Bapat and Torsha Sarkar.
Abstract: The Government of India has increasingly engaged in online censorship using powers under the Information Technology Act. The law lays out a procedure for online censorship that relies solely on the discretion of the executive. Using a constitutional and comparative legal analysis, we contend that the law has little to no oversight and lacks adequate due process for targets of censorship. Through semi-structured interviews with individuals whose content has been taken down by such orders, we shed light on the experiences of content owners with government-authorised online censorship. We show that legal concerns about the lack of due process are confirmed empirically, and that content owners are rarely afforded an opportunity for a hearing before they are censored. The law enabling online censorship (and its implementation) may be considered unconstitutional in how it inhibits avenues of remedy for targets of censorship or for the general public. We also show that online content blocking has far-reaching, chilling effects on the freedom of expression.
The paper is available on SSRN, and can also be downloaded here.
AI for Healthcare: Understanding Data Supply Chain and Auditability in India
Read our full report here.
The use of artificial intelligence (AI) technologies constitutes a significant development in the Indian healthcare sector, with industry and government actors showing keen interest in designing and deploying these technologies. Even as key stakeholders explore ways to incorporate AI systems into their products and workflows, a growing debate on the accessibility, success, and potential harms of these technologies continues, along with several concerns over their large-scale adoption. A recurring question in India and the world over is whether these technologies serve a wider interest in public health. For example, the discourse on ethical and responsible AI in the context of emerging technologies and their impact on marginalised populations, climate change, and labour practices has been especially contentious.
For the purposes of this study, we define AI in healthcare as the use of artificial intelligence and related technologies to support healthcare research and delivery. The use cases include assisted imaging and diagnosis, disease prediction, robotic surgery, automated patient monitoring, medical chatbots, hospital management, drug discovery, and epidemiology. The emergence of AI auditing mechanisms is an essential development in this context, with several stakeholders ranging from big-tech to smaller startups adopting various checks and balances while developing and deploying their products. While auditing as a practice is neither uniform nor widespread within healthcare or other sectors in India, it is one of the few available mechanisms that can act as guardrails in using AI systems.
Our primary research questions are as follows:
- What is the current data supply chain infrastructure for organisations operating in the healthcare ecosystem in India?
- What auditing practices, if any, are being followed by technology companies and healthcare institutions?
- What best practices can organisations based in India adopt to improve AI auditability?
This was a mixed methods study, comprising a review of available literature in the field, followed by quantitative and qualitative data collection through surveys and in-depth interviews. The findings from the study offer essential insights into the current use of AI in the healthcare sector, the operationalisation of the data supply chain, and policies and practices related to health data sourcing, collection, management, and use. It also discusses ethical and practical challenges related to privacy, data protection and informed consent, and the emerging role of auditing and other related practices in the field. Some of the key learnings related to the data supply chain and auditing include:
- Technology companies, medical institutions, and medical practitioners rely on an equal mix of proprietary and open sources of health data, with significant reliance on datasets from the Global North.
- Data quality checks exist but are seen as an additional burden, with the removal of personally identifiable information being a priority during processing.
- Collaboration between medical practitioners and AI developers remains limited, as does feedback between the users and developers of these technologies.
- There is a heavy reliance on external vendors to develop AI models, with many models replicated from existing systems in the Global North.
- Healthcare professionals are hesitant to integrate AI systems into their workflows, with a significant gap stemming from a lack of training and infrastructure to integrate these systems successfully.
- The understanding and application of audits are not uniform across the sector, with many stakeholders prioritising more mainstream and intersectional concepts such as data privacy and security in their scope.
Based on these findings, this report offers a set of recommendations addressed to different stakeholders such as healthcare professionals and institutions, AI developers, technology companies, startups, academia, and civil society groups working in health and social welfare. These include:
- Improve data management across the AI data supply chain
Adopt standardised data-sharing policies. This would entail building a standardised policy that adopts an intersectional approach to include all stakeholders and areas where data is collected to ensure their participation in the process. This would also require robust feedback loops and better collaboration between the users, developers, and implementers of the policy (medical professionals and institutions), and technologists working in AI and healthcare.
Emphasise not just data quantity but also data quality. Given that the limited quantity and quality of Indian healthcare datasets present significant challenges, institutions engaged in data collection must consider their interoperability to make them available to diverse stakeholders and ensure their security. This would include recruiting additional support staff for digitisation to ensure accuracy and safety and maintain data quality.
- Streamline AI auditing as a form of governance
Standardise the practice of AI auditing. A certain level of standardisation in AI auditing would contribute to the growth and contextualisation of these practices in the Indian healthcare sector. Similarly, it would also aid in decision-making among implementing institutions.
Build organisational knowledge and inter-stakeholder collaboration. It is imperative to build knowledge and capacity among technical experts, healthcare professionals, and auditors on the technical details of the underlying architecture and socioeconomic realities of public health. Hence, collaboration and feedback are essential to enhance model development and AI auditing.
Prioritise transparency and public accountability in auditing standards. Given that most healthcare institutions procure externally developed AI systems, some form of internal or external AI audit would contribute to better public accountability and transparency of these technologies.
- Centre public good in India’s AI industrial policy
Adopt focused and transparent approaches to investing in and financing AI projects. An equitable distribution of AI spending and associated benefits is essential to guarantee that these investments and their applications extend beyond private healthcare, and that implementation approaches prioritise the public good. This would involve investing in entire AI life cycles instead of merely focusing on development and promoting transparent public–private partnerships.
Strengthen regulatory checks and balances for AI governance. While an overarching law to regulate AI technologies may still be under debate, existing regulations may be amended to bring AI within their ambit. Furthermore, all regulations must be informed by stakeholder consultations to guarantee that the process is transparent, addresses the rights and concerns of all the parties involved, and prioritises the public good.
Technology-facilitated Gender-based Violence and Women’s Political Participation in India: A Position Paper
Read the full paper here.
Political participation of women is fundamental to democratic processes and promotes the building of more equitable and just futures. The rapid adoption of technology has created avenues for women to access the virtual public sphere, where they may have traditionally struggled to access physical public spaces due to patriarchal norms and violence. While technology has provided tools for political participation, information seeking, and mobilisation, it has also created unsafe online spaces for women, often limiting their ability to actively engage online.
This essay examines the emotional and technological underpinnings of gender-based violence faced by women in politics. It further explores how gender-based violence is weaponised to diminish the political participation and influence of women in the public eye. Through real-life examples of gendered disinformation and sexist hate speech targeting women in politics in India, we identify affective patterns in the strategies deployed to adversely impact public opinion and democratic processes. We highlight the emotional triggers that play a role in exacerbating online gendered harms, particularly for women in public life. We also examine the critical role of technology and online platforms in this ecosystem – both in perpetuating and amplifying this violence as well as attempting to combat it.
We argue that it is critical to investigate and understand the affective structures in place, and the operation of the patriarchal hegemony that continues to make access to public spheres, both online and offline, unsafe for women. We also advocate for understanding technology design and identifying tools that can actually aid in combating TFGBV. Further, we point to the continued need for greater accountability from platforms, to mainstream gender-related harms and combat them through diversified approaches.
Privacy Policy Framework for Indian Mental Health Apps
The report’s findings indicate a significant gap in the structure and content of privacy policies in Indian mental health apps. This highlights the need to develop a framework that can guide organisations in developing their privacy policies. Therefore, this report proposes a holistic framework to guide the development of privacy policies for mental health apps in India. It focuses on three key segments that are an essential part of the privacy policy of any mental health app. First, the policy must include factors considered essential by the Digital Personal Data Protection Act 2023 (DPDPA), such as consent mechanisms, rights of the data principal, and provisions to withdraw consent. Second, it must state how the data users provide to these apps will be used. Finally, developers must include key elements such as provisions for third-party integrations and data retention policies.
Click to download the full research paper here
Digital Rights and ISP Accountability in India: An Analysis of Policies and Practices
Read the full report here.
India's four largest Internet Service Providers (ISPs)—Reliance Jio, Bharti Airtel, Vodafone-Idea (Vi), and BSNL—collectively serve 98% of India's internet subscribers, with Jio and Airtel commanding a dominant market share of 80.87%. The assessment comes at a critical juncture in India's digital landscape, marked by a 279.34% increase in internet subscribers from 2014 to 2024, alongside issues such as the proliferation of internet shutdowns.
Adapting the Ranking Digital Rights (RDR) methodology from its 2022 Telco Giants Scorecard, our analysis reveals significant disparities in governance structures and commitment to digital rights across these providers. Bharti Airtel emerges as the leader in governance framework implementation, maintaining dedicated human rights policies and board-level oversight. In contrast, Vi and Jio demonstrate mixed results with limited explicit human rights commitments, while BSNL exhibits the weakest governance structure with minimal human rights considerations. Notably, all ISPs lack comprehensive human rights impact assessments for their advertising and algorithmic systems.
The evaluation of freedom of expression commitments reveals systematic inadequacies across all providers. Terms and conditions are frequently fragmented and difficult to access, while providers maintain broad discretionary powers for account suspension or termination without clear appeal processes. There is limited transparency regarding content moderation practices and government takedown requests, coupled with insufficient disclosure about algorithmic decision-making systems that affect user experiences.
Privacy practices among these ISPs show minimal evolution since previous assessments, with persistent concerns about policy accessibility and comprehension. The investigation reveals limited transparency regarding algorithmic processing of personal data, widespread sharing of user data with third parties and government agencies, and inadequate user control over personal information. None of the evaluated ISPs maintain clear data breach notification policies, raising significant concerns about user data protection.
The concentrated market power of Jio and Airtel, combined with weak digital rights commitments across the sector, raises substantial concerns about the state of user privacy and freedom of expression in India's digital landscape. The lack of transparency in website blocking and censorship, inconsistent implementation of blocking orders, limited accountability in handling government requests, insufficient protection of user rights, and inadequate grievance redressal mechanisms emerge as critical areas requiring immediate attention.
As India continues its rapid digital transformation, our findings underscore the urgent need for both regulatory intervention and voluntary industry reforms. The development of standardised transparency reporting, strengthened user rights protections, and robust accountability mechanisms will be crucial in ensuring that India's digital growth aligns with fundamental rights and democratic values.
Do We Need a Separate Health Data Law in India?
Chapter 1. Background
Digitisation has become a cornerstone of India’s governance ecosystem since the National e-Governance Plan (NeGP) of 2006. This trend can also be seen in healthcare, especially during the COVID-19 pandemic, with initiatives like the Ayushman Bharat Digital Mission (ABDM). However, the digitisation of healthcare has been largely conducted without legislative backing or judicial oversight. This has resulted in inadequate grievance redressal mechanisms, potential data breaches, and threats to patient privacy.
Unauthorised access to or disclosure of health data can result in stigmatisation, mental and physical harassment, and discrimination against patients. Moreover, because of the digital divide, overdependence on digital health tools to deliver health services can lead to the exclusion of the most marginalised and vulnerable sections of society, thereby undermining the equitable availability and accessibility of health services. Health data in digitised form is also vulnerable to cyberattacks and breaches. This was evidenced in the recent ransomware attack on the All India Institute of Medical Sciences (AIIMS), which, apart from violating patients' right to privacy, also brought patient care to a grinding halt.
In this context, and with the rise in health data collection and the uptick in the use of AI in healthcare, there is a need to examine whether India needs standalone legislation to regulate the digital health sphere. It is also necessary to evaluate whether the existing policies and regulations are sufficient, and whether amendments to them would suffice.
This report discusses the current definitions of health data, including international efforts; it then shares some key themes that were discussed at three roundtables we conducted in May, August, and October 2024. Participants included experts from diverse stakeholder groups, including civil society organisations, lawyers, medical professionals, and academicians. In this report, we collate the various responses to the two main questions that were the focus of the roundtables:
- In which areas are the current health data policies and laws lacking in India?
- Do we need a separate health data law for India? What are the challenges associated with this? What are other ways in which health data can be regulated?
Chapter 2. How is health data defined?
There are multiple definitions of health data globally. These include definitions incorporated into the text of data protection legislation or set out under separate health data laws. In the European Union (EU), the General Data Protection Regulation treats “data concerning health” as personal data falling under special category data, i.e. data that requires stringent and special protection due to its sensitive nature. Data concerning health is defined under Article 4(15) as “personal data related to the physical or mental health of a natural person, including the provision of healthcare services, which reveal information about his or her health status”. The United States has the Health Insurance Portability and Accountability Act (HIPAA), which was created to ensure that the personally identifiable information (PII) gathered by healthcare and insurance companies is protected against fraud and theft and cannot be disclosed without consent. As per the World Health Organisation (WHO), ‘digital health’ refers to “a broad umbrella term encompassing eHealth, as well as emerging areas, such as the use of advanced computing sciences in ‘big data’, genomics and artificial intelligence”.
2.1. Current legal framework for regulating the digital healthcare ecosystem in India
In India, digital health data was defined under the draft Digital Information Security in Healthcare Act (DISHA), 2017, as an electronic record of health-related information about an individual, which includes the following: (i) information concerning the physical or mental health of the individual; (ii) information concerning any health service provided to the individual; (iii) information concerning the donation by the individual of any body part or any bodily substance; (iv) information derived from the testing or examination of a body part or bodily substance of the individual; (v) information that is collected in the course of providing health services to the individual; or (vi) information relating to the details of the clinical establishment accessed by the individual.
However, DISHA was subsumed into the Personal Data Protection Bill, 2019, which contained a definition of health data and a demarcation between sensitive personal data and personal data. Both these definitions are absent from the Digital Personal Data Protection Act (DPDPA), 2023, leaving it uncertain what counts as health data in India. It is also important to note that the health data management policies released during the pandemic relied on the definition of health data under the then draft of the data protection legislation.
(i) Drugs and Cosmetics Act and Rules
At present, there is no specific law that regulates the digital health ecosystem in India. The ecosystem is currently regulated by a mix of laws governing the offline/legacy healthcare system and policies notified by the government from time to time. The primary law governing the healthcare system in India is the Drugs and Cosmetics Act (DCA), 1940, read with the Drugs and Cosmetics Rules, 1945. These regulations govern the manufacture, sale, import, and distribution of drugs in India. The central and state governments are responsible for enforcing the DCA. In 2018, the central government published draft rules to amend the Drugs and Cosmetics Rules in order to incorporate provisions relating to the sale of drugs by online pharmacies (Draft Rules). However, the final rules are yet to be notified. The Draft Rules prohibit online pharmacies from disclosing the prescriptions of patients to any third person, but they also mandate the disclosure of such information to the central and state governments, as and when required for public health purposes.
(ii) Clinical Establishments (Registration and Regulation) Act, and Rules
The Clinical Establishments Rules, 2012, issued under the Clinical Establishments (Registration and Regulation) Act, 2010, require clinical establishments to maintain electronic health records (EHRs) in accordance with the standards determined by the central government. The Electronic Health Record (EHR) Standards, 2016, were formulated to create a uniform, standards-based system for EHRs in India. They provide guidelines for clinical establishments on maintaining health data records, as well as on data security measures. Additionally, they lay down that ownership of the data vests with the individual, and that the healthcare provider holds such medical data in trust for the individual.
(iii) Health digitisation policies under the National Health Authority
In 2017, the central government formulated the National Health Policy (NHP). A core component of the NHP is deploying technology to deliver healthcare services. The NHP recommends creating a National Digital Health Authority (NDHA) to regulate, develop, and deploy digital health across the continuum of care. In 2019, the NITI Aayog proposed the National Digital Health Blueprint (Blueprint), which recommended the creation of the National Digital Health Mission (NDHM). The Blueprint made this proposition stating that “the Ministry of Health and Family Welfare has prioritised the utilisation of digital health to ensure effective service delivery and citizen empowerment so as to bring significant improvements in public health delivery”. It also stated that an institution such as the NDHM, which is undertaking significant reforms in health, should have legal backing.
(iv) Telemedicine Practice Guidelines
On 25 March 2020, the Telemedicine Practice Guidelines under the Indian Medical Council Act were notified. The Guidelines provide a framework for registered medical practitioners to follow for teleconsultations.
2.2. Digital Personal Data Protection Act, 2023
There has been much hope that India's data protection legislation would cover definitions of health data, keeping in mind the removal of DISHA and the uptick in health digitisation in both the public and private health sectors. The DPDPA, India's privacy and data protection law, was notified on 12 August 2023; however, its provisions have still not come into force. Currently, therefore, health data and patient medical history are regulated by the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules (SPDI Rules), 2011. The SPDI Rules will be replaced by the DPDPA as and when its different provisions are enforced. On 3 January 2025, the Ministry of Electronics and Information Technology released the Draft Digital Personal Data Protection Rules, 2025, for public consultation. The last date for submitting comments is 18 February 2025.
Health data is regarded as sensitive personal data under the SPDI Rules. Earlier drafts of the data protection legislation had demarcated data into personal data and sensitive personal data, with health data falling in the latter category. However, the DPDPA has removed this distinction: all data is regarded simply as personal data, and the extra protection previously afforded to health data has therefore been removed. The Draft Rules likewise do not mention health data or provide any additional safeguards for protecting it. They do, however, exempt healthcare professionals from some of the obligations placed on data fiduciaries when processing children's data; such processing must be restricted to the extent necessary to protect the health of the child.
As seen so far, while there are multiple healthcare-related regulations that govern stakeholders – from medical device manufacturers to medical professionals – there is still a vacuum in terms of the definition of health data. The DPDPA does not clarify this definition. Further, there are no clear guidelines for how these regulations work with one another, especially in the case of newer technologies like AI, which have already started disrupting the Indian health ecosystem.
Chapter 3. Key takeaways from the health data roundtables
The three health data roundtables covered various important topics related to health data governance in India. The first roundtable highlighted the major concerns and examined the granular details of considering a separate law for digital healthcare. The second roundtable featured a detailed discussion on whether a separate law is needed, or whether the existing laws can be modified to address extant concerns. There was also a conversation on whether the absence of a classification absolves organisations of the responsibility to protect or secure health data. Participants stated that, given the sensitivity of health data, data fiduciaries processing such data could qualify as significant data fiduciaries under the proposed DPDPA Rules (which, at the time of the roundtables, were yet to be published). The final roundtable concluded with an in-depth discussion on the need for a health data law; however, no consensus emerged among the different stakeholders.
The roundtables highlighted that the different stakeholders – medical professionals, civil society workers, academics, lawyers, and people working in startups – were indeed thinking about how to regulate health data. But there was no single approach that all agreed on.
3.1. Health data concerns
Here, we summarise the key points that emerged during the three roundtables. These findings shed light on concerns regarding the collection, sharing, and regulation of health data.
(i) Removal of sensitive personal data classification
In the second roundtable, there was a discussion on the removal of the definition of health data from the final version of the DPDPA, which also removed the provision for sensitive personal data, the category under which health data previously fell. One participant stated that differentiating between sensitive personal data and other personal data was important, as sensitive personal data such as health data warrants greater security. They further stated that without such a clear distinction, data such as health status and sexual history could be easily accessed. Participants also pointed out that, given the current digital data infrastructure, the security of personal data is not up to the mark. Hence, a clear classification of sensitive and personal data would ensure that data fiduciaries collecting and processing sensitive personal data carry greater responsibility and accountability.
(ii) Definition of informed consent
The term ‘informed consent’ came up several times during the roundtable discussions, but there was no clarity on what it means. A medical professional stated that in their practice, informed consent applies only to treatment; if a patient's data is to be used for research, it goes through the necessary internal review board and ethics board for clearance. One participant mentioned that Section 2(i) of the Mental Healthcare Act (MHA), 2017 defines informed consent as
consent given for a specific intervention, without any force, undue influence, fraud, threat, mistake or misrepresentation, and obtained after disclosing to a person adequate information including risks and benefits of, and alternatives to, the specific intervention in a language and manner understood by the person; a nominee to make a decision and consent on behalf of another person.
Neither the DPDPA nor the Draft DPDPA Rules define informed consent. However, the Draft DPDPA Rules state that the notice given by the data fiduciary to the data principal must use simple, plain language to provide a full and transparent account of the information necessary for the data principal to give informed consent to the processing of their personal data.
A stakeholder pointed out that consent is often taken without much nuance or any real option for choice. Indeed, consent is often presented in non-negotiable terms, creating power imbalances and undermining patient autonomy. Suggested solutions include instituting granular and revocable consent mechanisms. This point also emerged during the third roundtable, where it was highlighted that consenting to a medical procedure is different from consenting to one's data being used to train AI. When a consent form that a patient or caregiver is asked to sign provides the relevant information but no choice except to sign, it creates a severe power imbalance. Participants also emphasised the need to assess whether consent is being used as a tool to enable more data-sharing, or as a mechanism that gives citizens other rights, such as the reasonable expectation that their medical information will not be used for commercial interests, especially to their own detriment, just because they signed a form. One suggested way to tackle this is to demarcate more clearly the aspects a person can consent to, giving people more control over the various ways in which their data is used.
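To make the granular-consent suggestion concrete, below is a minimal sketch of what a per-purpose, revocable consent record could look like. The purpose labels, class names, and in-memory storage are hypothetical illustrations, not a description of any existing system or of the DPDPA's requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical purpose labels a patient could consent to separately.
PURPOSES = {"treatment", "academic_research", "ai_training", "commercial_use"}

@dataclass
class ConsentRecord:
    """Per-purpose, revocable consent for one data principal (illustrative only)."""
    principal_id: str
    grants: dict = field(default_factory=dict)  # purpose -> grant/revoke timestamps

    def grant(self, purpose: str) -> None:
        if purpose not in PURPOSES:
            raise ValueError(f"unknown purpose: {purpose}")
        self.grants[purpose] = {"granted_at": datetime.now(timezone.utc), "revoked_at": None}

    def revoke(self, purpose: str) -> None:
        # Revocation is recorded, not deleted, so an audit trail survives.
        if purpose in self.grants:
            self.grants[purpose]["revoked_at"] = datetime.now(timezone.utc)

    def is_active(self, purpose: str) -> bool:
        entry = self.grants.get(purpose)
        return bool(entry) and entry["revoked_at"] is None

# Usage: consenting to treatment does not imply consent to AI training.
record = ConsentRecord(principal_id="patient-001")
record.grant("treatment")
assert record.is_active("treatment") and not record.is_active("ai_training")
record.revoke("treatment")
assert not record.is_active("treatment")
```

The design point is simply that each purpose is granted and revoked independently, and that revocation leaves an audit trail rather than erasing the record.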
(iii) Data sharing with third parties
Discussions also focused on concerns about sharing health data with third parties, especially when the data is transferred outside India. Data is, or can be, shared with tech companies and research organisations, so the discussions highlighted the regulations and norms that govern how such data sharing occurs despite the fragmented regulatory landscape. For instance:
- The Indian Council of Medical Research's (ICMR) Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare mandate strict protocols for sharing health data, but these are not binding. They state that the sharing of health data by medical institutions with tech companies and collaborators must go through the ICMR and the Health Ministry's Screening Committee. This committee has strict guidelines on how much data can be shared and how it needs to be shared. The process also requires that all PII be removed and that no more than 10 percent of the total data be shared with any collaborator outside Indian jurisdiction (a simple check of these two constraints is sketched after this list).
- Companies working internationally have to comply with global standards like the GDPR and HIPAA, highlighting gaps in India's domestic framework that leave companies uncertain about which regulations to comply with. There is a need to balance the interests of startups, which require more data and better longitudinal health records, against the need for strong data protection, data minimisation, and storage limitation.
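As a simple illustration of the two ICMR constraints described in the first bullet above, the sketch below refuses a share request if PII fields are present or if a collaborator outside Indian jurisdiction would receive more than 10 percent of the dataset. The field names and the function are hypothetical assumptions, not part of any official ICMR tooling.

```python
# Illustrative check of the two ICMR data-sharing constraints described above.
# Field and function names are hypothetical, not from any official tooling.

PII_FIELDS = {"name", "phone", "address", "aadhaar_number"}  # assumed examples
MAX_FOREIGN_SHARE = 0.10  # at most 10% of total records to collaborators outside India

def sharing_allowed(record_fields: set, records_to_share: int,
                    total_records: int, collaborator_in_india: bool) -> bool:
    """Return True only if no PII is present and the 10% cap is respected."""
    if record_fields & PII_FIELDS:
        return False  # PII must be removed before any sharing
    if not collaborator_in_india and records_to_share > MAX_FOREIGN_SHARE * total_records:
        return False  # foreign collaborators are capped at 10% of the dataset
    return True

# Example: 1,500 of 10,000 de-identified records to a foreign collaborator -> refused.
print(sharing_allowed({"diagnosis", "age_band"}, 1500, 10000, collaborator_in_india=False))  # False
print(sharing_allowed({"diagnosis", "age_band"}, 900, 10000, collaborator_in_india=False))   # True
```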
(iv) Inadequate healthcare infrastructure
With respect to the implementation challenges associated with health data laws, participants noted that, currently, the Indian healthcare infrastructure is not up to the mark. Moreover, smaller and rural hospitals are not yet on board with health digitisation and may not be able to comply with additional rules and responsibilities. In terms of capacity as well, smaller healthcare facilities lack the resources to implement and comply with complex regulations.
3.2. Regulatory challenges
Significant time was spent on discussing the regulatory challenges and deficiencies in India’s healthcare infrastructure. The discussion primarily revolved around the following points:
(i) State vs. central jurisdiction
Under the constitutional scheme, legislative responsibilities for various subjects are demarcated between the centre and the states, and are sometimes shared between them. Public health and sanitation, hospitals, and dispensaries fall under the State List set out in the Seventh Schedule of the Constitution. This means that state governments have the primary responsibility for framing and implementing laws on these subjects. Within this framework, local governance institutions, namely local bodies, also play an important role in discharging public health responsibilities.
(ii) Do we bring back DISHA?
During the conversation about the need for health data regulation, participants noted that there had been an earlier push for a health data law in the form of DISHA, 2017, which was later abandoned. DISHA aimed to set up digital health authorities at the national and state levels to implement privacy and security measures for digital health data and to create a mechanism for the exchange of electronic health data. Another concern with a central health data legislation was that, as health is a state subject, a separate, centralised regulatory body overseeing how data is handled could create confusion, with a lack of clarity on who would address what, or on which ministry (in the state or central government) would handle the redressal mechanism.
3.3. Are the existing guidelines enough?
Participants highlighted that enacting a separate law to regulate digital health would be challenging, considering that the DPDPA took seven years to be enacted, its rules are yet to be finalised, and the Data Protection Board has not been established. Hence, any new legislation would take significant resources, including manpower and time.
In this context, there were discussions acknowledging that although the DPDPA does not currently regulate health data, other forms of regulation and policy are prescribed for specific types of interventions involving health data; for example, the Telemedicine Practice Guidelines, 2020, and the Medical Council of India rules. These are binding on medical practitioners, with penalties for non-compliance, such as the revocation of medical licences. Similarly, the ICMR guidelines on the use of data in biomedical research include specific transparency measures and obligations on health data collectors that would apply irrespective of the lack of distinction between sensitive personal data and personal data under the DPDPA.
However, another participant rightly pointed out that the ICMR guidelines and the policies from the Ministry of Health and Family Welfare are not binding. Similarly, regulations like the Telemedicine Practice Guidelines and Indian Medical Council Act are only applicable to medical practitioners. There are now a number of companies that collect and process a lot of health data; they are not covered by these regulations. Although there are multiple regulations on healthcare and pharma, none of them cover or govern technology. The only relevant one is the Telemedicine Practice Guidelines, which say that AI cannot advise any patient; it can only provide support.
Chapter 4. Recommendations
Several key points were raised and highlighted during the three roundtables. There were also a few suggestions for how to regulate the digital health sphere. These recommendations and points can be classified into short-term measures and long-term measures.
4.1. Short-term measures
We propose two short-term measures, as follows:
(i) Make amendments to the DPDPA: Introduce sector-specific provisions for health data within the existing framework. The provisions should include guidelines for informed consent, data security, and grievance redressal.
(ii) Capacity-building: Provide training for healthcare providers and data fiduciaries on data security and compliance.
4.2. Long-term measures
We offer five long-term measures, as follows:
(i) Standalone legislation: Enact a dedicated health data law that
- Defines health data and its scope;
- Establishes a regulatory authority for oversight; and
- Includes provisions for data sharing, security, and patient rights.
(ii) National Digital Health Authority
Establish a central authority, similar to the EU's European Health Data Space, to regulate and monitor digital health initiatives.
(iii) Cross-sectoral coordination
Develop mechanisms to align central and state policies and ensure seamless implementation.
(iv) Technological safeguards
Encourage the development of AI-specific policies and guidelines to address the ethics of using health data.
(v) Stringent measures to address data breaches
Increase people's trust by addressing data breaches and fostering proactive dialogue between patients, the medical community, government, and civil society. Reduce exemptions for data processing, such as those granted to the state for healthcare.
Conclusion
The roundtable discussions highlighted the fragmented nature of the digital health sphere and the issues that emanate from such fragmentation. Considering the variations in healthcare infrastructure and budget allocation across different states, the feasibility of enacting a central digital health law requires more in-depth research. The existing laws governing the offline/legacy health space also need careful examination to understand whether amendments to them would be sufficient to regulate the digital health space.
The Centre for Internet and Society's comments and recommendations on the Report on AI Governance Guidelines Development
With research assistance by Anuj Singh
I. Background
On 6 January 2025, a Subcommittee on ‘AI Governance and Guidelines Development’ under the Advisory Group put out the Report on AI Governance Guidelines Development, which advocates a whole-of-government approach to AI governance. This subcommittee was constituted by the Ministry of Electronics and Information Technology (MeitY) on 9 November 2023 to analyse gaps and offer recommendations for developing a comprehensive framework for the governance of Artificial Intelligence (AI). As various AI governance conversations take centre stage, this is a welcome step, and we hope that there are more opportunities, through public comments and consultations, to improve on this important AI document.
In line with the submission guidelines, CIS has provided both comments and suggestions based on the headings and text provided in the report.
II. Governance of AI
The subcommittee report explains its reasons for staying away from a definition of AI. However, it would be helpful to set out the scope of AI at the outset of the report, given that different AI systems have different roles and functionalities; a clearer framework at the beginning would help readers better understand the scope of the conversation. This section also states that AI can now “perform complex tasks without active human control or supervision”. While there are instances where AI is used without active human control, there is a need to emphasise keeping humans in the loop, a point also highlighted in the OECD AI principles from which this report draws inspiration.
A. AI Governance Principles
A proposed list of AI Governance principles (with their explanations) is given below.
While referring to the OECD AI principles is a good first step in understanding global best practices, we suggest undertaking an exercise to map all the global AI principles documents published by international and multilateral organisations and civil society, in order to determine the principles most important for India. The OECD AI principles also come from regions with better internet penetration and higher literacy rates than India; for those regions, the principle of “digital by design governance” may be achievable, but in India a digital-first approach, especially in governance, could lead to large-scale exclusions.
B. Considerations to operationalise the principles
1. Examining AI systems using a lifecycle approach
The subcommittee has taken a novel approach to defining the AI lifecycle: the terms “Development, Deployment and Diffusion” do not appear in any of the major publications on the AI lifecycle. Academics (e.g. Chen et al. (2023), De Silva and Alahakoon (2022)) describe the AI lifecycle as comprising design, development, and deployment, while others (Ng et al. (2022)) define it as “data creation, data acquisition, model development, model evaluation and model deployment”. NASSCOM's Responsible AI Playbook likewise follows “conception, designing, development and deployment” as some of the key stages in the AI lifecycle, and the OECD recognises “i) ‘design, data and models’; ii) ‘verification and validation’; iii) ‘deployment’; and iv) ‘operation and monitoring’” as the phases of the AI lifecycle. The subcommittee could therefore provide citations, as well as a justification for using this novel approach to the AI lifecycle, and state its reasons for moving away from the recognised stages. Steering away from an understood approach could cause confusion among stakeholders who may not be well versed in AI terminology and the AI lifecycle to begin with.
2. Taking an ecosystem-view of AI actors
While the report rightly states that multiple actors are involved across the AI lifecycle, it is also important to note that the same actor can be involved in multiple stages. Take, for example, an AI app used for disease diagnosis: the medical professional can be the data principal (using their own data), the data provider (supplying data by using the app), and the end user (using the app for diagnosis). Similarly, a government body can be the data provider, the developer (if the system is made in-house or outsourced through tenders), the deployer, and the end user. Hence, for each AI application there may be multiple actors who play different roles, and those roles may not be static.
When looking at governance approaches, the approach must ideally not be limited to responsibilities and liabilities, especially when the “data principal” and individual end users are highlighted as actors; it should also include rights and means of redressal in order to constitute a rights-based, people-centric approach to AI governance.
3. Leveraging technology for governance
While the use of techno-legal approaches in governance is picking up speed, there is a need to examine existing central and state capacity to undertake them, and to consider how they could affect people who still do not have access to the internet. One example of a techno-legal approach that has seen some success is the Bhumi programme in Andhra Pradesh, which used blockchain for land records; however, it also weakened local institutions and excluded marginalised people (Kshetri, 2021). It was also stated that existing institutions need to be strengthened before technological measures are introduced.
Secondly, while the subcommittee has emphasised the improvements in the quality of generative AI tools, there is a need to assess how these tools work for Indian use cases. It was reported last year that ChatGPT could not answer all the questions relating to the Indian civil services exam and failed to correctly answer questions on geography, even though it was able to crack tough exams in the USA. In addition, a month ago the Finance Ministry advised government officials to refrain from using generative AI tools on official devices for fear of leakage of confidential information.
Thirdly, the subcommittee needs to assess India's data preparedness for a techno-legal approach at this scale. In our study on healthcare and AI in India, in which we surveyed medical professionals, hospitals, and technology companies, a common understanding was that data quality in Indian datasets is an issue, and that there is some reliance on data from the global North. This could be similar in other sectors as well; when such data is used to train systems, it could lead to harms and biases.
III. Gap Analysis
A. The need to enable effective compliance and enforcement of existing laws.
The subcommittee has highlighted the importance of ensuring that the growth of AI does not lead to unfair trade practices and market dominance. It is hence important to analyse whether the existing laws on antitrust and competition, and the regulatory capacity of the Competition Commission of India, are robust enough to deal with AI and with the changes in technology and technology developers.
There is also an urgent need to assess the issues that might come within the ambit of competition throughout the AI lifecycle, including in the areas of chip manufacturing, compute, data, models, and IP. While the players may keep changing in this evolving area of technology, there is a need to strengthen the existing regulatory system before looking at techno-legal measures.
We suggest that before a techno legal approach is sought in all forms of governance, there is an urgent need to map the existing regulations both central and state and assess how they apply to regulating AI, and assess the capacity of existing regulatory bodies to regulate issues of AI. In the case of healthcare for example there are multiple laws, policies and guidelines, as well as regulatory bodies that apply to various stages of healthcare and various actors and at times these regulations do not refer to each other or cause duplications that could lead to lack of clarity.
Below, we add our comments and suggestions on certain subsections of this section on the need to enable effective compliance and enforcement of existing laws.
1. Intellectual property rights
a. Training models on copyrighted data and liability in case of infringement
While Section 14 of the Indian Copyright Act, 1957 provides copyright holders with exclusive rights to copy and store works, training AI models involves making non-expressive uses of works, so a straightforward conclusion cannot easily be drawn. Hence, the presumption that training models on copyrighted data constitutes infringement is premature and unfounded.
The report states: “The Indian law permits a very closed list of activities in using copyrighted data without permission that do not constitute an infringement. Accordingly, it is clear that the scope of the exception under Section 52(1)(a)(i) of the Copyright Act, 1957 is extremely narrow. Commercial research is not exempted; not-for-profit institutional research is not exempted. Not-for-profit research for personal or private use, not with the intention of gaining profit and which does not compete with the existing copyrighted work is exempted.”
Indian copyright law follows a ‘hybrid’ model of limitations and exceptions under Section 52(1). Section 52(1)(a), the ‘fair dealing’ provision, is more open-ended than the rest of the clauses in the section. Specifically, the Indian fair dealing provision permits fair dealing with any work (not being a computer programme) for the purposes of private or personal use, including research.
If India is keen on indigenous AI development, specifically as it relates to foundation models, it should work towards developing frameworks for suitable exceptions, as may be appropriate. Lawmakers could distinguish between different types of copyrighted works and public-interest purposes while considering the issue of infringement and liability.
b. Copyrightability of work generated by using foundation models
We suggest that a public consultation would be a useful exercise to ensure that the opinions and issues of all stakeholders, including copyright holders, authors, and users, are taken into account.
C. The need for a whole-of-government approach.
While information existing in silos is a significant issue and roadblock, if the many existing guidelines and principles have taught us anything, it is that without specificity and direct applicability it is difficult for implementers to translate principles into their development, deployment, and governance mechanisms. The committee assumes a sectoral understanding within government of the various players in highly regulated sectors such as healthcare or financial services. However, as our recent study on AI in healthcare indicates, there are significant information gaps when it comes to a shared understanding of what data is being used for AI development, where AI models are being developed, and what kinds of partnerships are being entered into for the development and deployment of AI systems. While the report highlights concerns about the siloed regulatory framework, it is also important to consider how sector-specific challenges lend themselves to cross-sectoral discussion. Consider, for instance, an AI credit-scoring system in financial services that leads to exclusion errors.
Additionally, consider an AI system being deployed for disease diagnosis. While both use predictive AI, the nature of risk and harm are different. While there can be common and broad frameworks to potentially test efficacy of both AI models, the exact parameters for testing them would have to be unique. Therefore, it will be important to consider where bringing together cross-sectoral stakeholders will be useful and where it may need more deep work at the sector level.
IV. Recommendations
1. To implement a whole-of-government approach to AI Governance, MeitY and the Principal Scientific Adviser should establish an empowered mechanism to coordinate AI Governance.
We would like to reiterate the earlier section and highlight the importance of considering how sector-specific challenges lend themselves to cross-sectoral discussion. While the whole-of-government approach will help build a common understanding between different government institutions, it might not be sufficient for AI governance, because it rests on the implicit assumption that internal coordination among various government bodies is enough to manage AI-related risks.
2. To develop a systems-level understanding of India's AI ecosystem, MeitY should establish, and administratively house, a Technical Secretariat to serve as a technical advisory body and coordination focal point for the Committee/Group.
The subcommittee report states that, at this stage, it is not recommended to establish the Committee/Group or its Secretariat as statutory authorities, as such a decision requires significant analysis of gaps, requirements, and possible unintended outcomes. While these are valid considerations, it is necessary that adequate checks and balances are in place. If the Secretariat is placed within MeitY, then safeguards must ensure that officials have autonomy in decision-making. The subcommittee suggests that MeitY can bring in officials on deputation from other departments. Similarly, the committee proposes bringing in experts from industry; while this is important for informed policymaking, there is also a risk of regulatory capture. Setting a cap on the percentage of industry representatives and requiring full disclosure of the affiliations of the experts involved are some of the safeguards that could be considered. We also suggest that members of civil society be considered for this Secretariat.
3. To build evidence on actual risks and to inform harm mitigation, the Technical Secretariat should establish, house, and operate an AI incident database as a repository of problems experienced in the real world that should guide responses to mitigate or avoid repeated bad outcomes.
The report suggests that the Technical Secretariat will develop a database of actual incidents of AI-related risks in India. In most instances, an AI incident database assumes that an unfavourable AI-related incident has already taken place, which implies that it is no longer a potential risk but an actual harm. This recommendation takes a post-facto approach to assessing AI systems, as opposed to conducting risk assessments prior to the actual deployment of an AI system. Further, it lays emphasis on receiving reports from public sector organisations deploying AI systems. Given that public sector organisations would, in many cases, be the deployers of AI systems rather than the developers, they may have limited know-how about the functionality of the tools and therefore about the risks and harms.
It is important to clarify and define what will be considered an AI risk, as this can depend on the stakeholder: for a company, losing clients due to an AI system is a risk; so is an individual being denied health insurance because of AI bias. With this understanding, while there is a need to keep actively assessing risks and the emergence of new risks, the Technical Secretariat could also map the existing risks that have been highlighted by academia, civil society, and international organisations, and seed the risk database with them. In addition, the “AI incident database” should be open to research institutions and civil society organisations, similar to the OECD AI Incidents Monitor.
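To illustrate the stakeholder-dependent framing of risk discussed above, here is a minimal sketch of what one entry in such an incident database could capture. All field names and harm categories are hypothetical assumptions for illustration; the OECD AI Incidents Monitor's actual schema may differ.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical harm categories; a real database would refine these with stakeholders.
HARM_TYPES = {"exclusion_error", "bias", "privacy_breach", "financial_loss", "safety"}

@dataclass
class AIIncident:
    """One entry in a hypothetical AI incident database (illustrative sketch)."""
    incident_id: str
    sector: str                       # e.g. "healthcare", "financial_services"
    system_role: str                  # e.g. "diagnosis", "credit_scoring"
    harm_type: str                    # one of HARM_TYPES
    affected_stakeholder: str         # e.g. "patient", "loan_applicant", "company"
    reported_by: str                  # e.g. "deployer", "civil_society", "academia"
    deployer_is_developer: bool       # deployers often lack developer-level know-how
    description: str = ""
    mitigation: Optional[str] = None  # post-facto response, if any

    def __post_init__(self):
        if self.harm_type not in HARM_TYPES:
            raise ValueError(f"unknown harm type: {self.harm_type}")

# Example: the same database can record harms to individuals and to companies.
entry = AIIncident(
    incident_id="2025-0001",
    sector="healthcare",
    system_role="insurance_underwriting",
    harm_type="bias",
    affected_stakeholder="patient",
    reported_by="civil_society",
    deployer_is_developer=False,
    description="Applicant denied health insurance due to a biased model.",
)
```

Recording who reported the incident and whether the deployer also developed the system would directly support the concerns raised above about public sector deployers' limited know-how.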
4. To enhance transparency and governance across the AI ecosystem, the Technical Secretariat should engage the industry to drive voluntary commitments on transparency across the overall AI ecosystem and on baseline commitments for high capability/widely deployed systems.
It is commendable that the subcommittee extends the transparency requirement to the government, with the example of law enforcement. This would create more trust in the systems and also place responsibility on the companies providing these services to comply with existing laws and regulations.
While the transparency measures listed will ensure a better understanding of the processes of AI developers and deployers, there is also a need to bring in responsibility along with transparency. While the report mentions ‘peer review by third parties’, we would also suggest auditing as a mechanism for ensuring transparency and responsibility. Our study on the AI data supply chain and auditability in healthcare in India (which surveyed 150 medical professionals, 175 respondents from healthcare institutions, and 175 respondents from technology companies) revealed that 77 percent of the healthcare institutions and 64 percent of the technology companies surveyed conducted audits or evaluations of their privacy and security measures for data.

5. Form a sub-group to work with MeitY to suggest specific measures that may be considered under proposed legislation like the Digital India Act (DIA) to strengthen and harmonise the legal framework, regulatory and technical capacity, and the adjudicatory set-up for the digital industries, to ensure effective grievance redressal and ease of doing business.
It would be necessary to provide some clarity on where the Digital India Act process currently stands. While there were public consultations in 2023, there has been no news of progress on the Act. The most recent discussion on the Act was in January 2025, when S Krishnan, Secretary, Ministry of Electronics and IT (MeitY), stated that the ministry was in no hurry to carry forward the draft Digital India Act and a regulatory framework around AI. He also stated that the existing legal frameworks were currently sufficient to handle AI intermediaries.
We would also like to highlight that during the consultations on the DIA, it was proposed that the Act would replace the Information Technology Act, 2000. The subcommittee should provide clarity on this, since if the DIA is enacted, the report's Section III on gap analysis, especially around the IT Act and cybersecurity, will need to be revisited.
The Centre for Internet and Society's comments and feedback on the Digital Personal Data Protection Rules, 2025
Rule 3 - Notice given by data fiduciary to data principal - Under Section 5(2) of the DPDP Act, when the personal data of the data principal has been processed before the commencement of the Act, the data fiduciary is required to give notice to the data principal as soon as reasonably practicable. However, the Rules fail to specify what is meant by reasonably practicable, leaving the timeline for such a notice unclear.
- In addition, under Rule 3(a) the phrase “be presented and be understandable independently” is ambiguous. It is not clear whether the consent notice has to be presented independently of any other information or whether it only needs to be independently understandable and can be presented along with other information.
- In addition, we suggest that the ‘privacy by design’ requirement mentioned in earlier drafts be brought back, with a focus on preventing deceptive design practices (dark patterns) from being used while collecting data.
Rule 4 - Registration and obligations of Consent Manager - The concept of independent consent managers, similar to account aggregators in the financial sector and consent manager platforms in the EU, is a positive step. However, the Act and the Rules need to flesh out the interplay between the data fiduciary and consent managers in more detail: for example, how does the data fiduciary know whether a data principal is using a consent manager? Under what circumstances can the data fiduciary bypass the consent manager, and what is the penalty or consequence for doing so?
Rule 6 - Reasonable security safeguards - While we appreciate the guidance provided in terms of security measures such as “encryption, obfuscation or masking or the use of virtual tokens”, it would also be useful to refer to the SPDI Rules and include the example of the international standard IS/ISO/IEC 27001 on Information Technology - Security Techniques - Information Security Management System as an illustration to guide data fiduciaries.
Rule 7 - Intimation of personal data breach - As per the Rules, the data fiduciary, on becoming aware of any personal data breach, is required to notify the data principal and the Data Protection Board without delay. A plain reading of this Rule suggests that the data fiduciary has to report the breach almost immediately, which could be a practical challenge. Further, the absence of any threshold (materiality, gravity of the breach, etc.) for notifying the data principal means that the data fiduciary would have to inform the data principal about even an isolated data breach that may have no impact on them. In this context, we recommend the Rule be amended so that the data fiduciary is required to inform the Data Protection Board about every data breach, while the data principal is informed depending on the gravity and materiality of the breach, and when it is likely to result in high risk to the data principal.
- While the Rules provide for intimation of a data breach, there is no specific provision requiring the data fiduciary to take the measures necessary to mitigate the risk arising out of the breach. Although there is an obligation to report any such measures to the data principal (Rule 7(1)(c)) as well as to the DPBI (Rule 7(2)(b)(iii)), no positive obligation is imposed on the data fiduciary to actually implement them. The Rules and the Act merely presume that the data fiduciary will take mitigation measures, which is perhaps why there are notification requirements for such breaches. This could lead to a situation where a data fiduciary takes no measures to mitigate the risks arising out of a data breach yet remains compliant with its legal obligations merely by notifying the data principal and the DPBI that no measures have been taken. In addition, the SPDI Rules state that in the event of a breach, the body corporate is required to demonstrate that it had implemented reasonable security standards. This provision could be incorporated into this Rule to emphasise the need to implement robust security standards, which is one way to curb data breaches and to ensure there is a protocol to mitigate them.
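A minimal sketch of the threshold-based notification we recommend above: the Data Protection Board is always informed, while the data principal is notified only when the breach is likely to result in high risk to them. The risk tiers and names are hypothetical, intended only to show the decision logic.

```python
from enum import Enum

class BreachRisk(Enum):
    LOW = 1      # e.g. isolated, mitigated, no sensitive data exposed
    MEDIUM = 2
    HIGH = 3     # likely to result in high risk to the data principal

def notification_targets(risk: BreachRisk) -> dict:
    """Who must be notified under the amendment we recommend (illustrative only)."""
    return {
        "data_protection_board": True,              # every breach goes to the Board
        "data_principal": risk is BreachRisk.HIGH,  # principals notified only for high-risk breaches
    }

# An isolated, low-impact breach is reported to the Board but not to every principal.
print(notification_targets(BreachRisk.LOW))   # {'data_protection_board': True, 'data_principal': False}
print(notification_targets(BreachRisk.HIGH))  # {'data_protection_board': True, 'data_principal': True}
```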
Rule 10 - Verifiable consent for processing of personal data of child or of person with disability who has a lawful guardian - The two mechanisms provided under the Rules to verify the age and identity of parents presuppose a high degree of digital literacy on the part of parents, who may give or refuse consent without thinking much about the consequences of doing so. As there is always a risk of individuals not providing correct information regarding their age or their relationship with the child, platforms may have to verify every user's age, thereby preventing users from accessing the platform anonymously. Further, there is also a risk of data maximisation rather than data minimisation; i.e., parents may be required to provide far more information than is needed to prove their identity. One suggestion we propose is to remove the processing of children's personal data from the ambit of this law and instead create separate standalone legislation dealing with children's digital rights. Another important issue to highlight here is the capacity of the Data Protection Board to levy fines and impose strictures on platforms. Examples from other countries show that platforms are forced to redesign and provide better privacy and data protection mechanisms when the regulator steps in and imposes high penalties.
Rule 12 - Additional obligations of Significant Data Fiduciary - The Rules do not clarify which entities will be considered Significant Data Fiduciaries (SDFs), leaving that to government notifications. This creates uncertainty for data fiduciaries, especially smaller organisations that might not be able to set up the mechanisms and staffing needed for data protection impact assessments and audits. The Rule provides that SDFs will have to conduct an annual Data Protection Impact Assessment (DPIA). While this is a step in the right direction, the Rules are currently silent on the granularity of the DPIA; similarly, for the audit, the Rules do not clarify what type of audit is needed and what its parameters are. It is therefore imperative that the government notify the level of detail that the DPIA and the audit must go into, to ensure that SDFs actually address the areas where their data governance practices are lacking, rather than using the DPIA as a whitewashing tactic. There is also a need to reduce the ambiguity around parameters and responsibilities, to make it easier for startups and smaller players to comply with the regulations. In addition, while there is a need to protect data and increase the responsibility of organisations collecting sensitive data or large volumes of data, there is also a need to look beyond compliance towards ways of preserving the rights of the data principal. Hence, significant data fiduciaries should also carry the added responsibility of collecting explicit consent from the data principal and providing easier access to correction of data, grievance redressal, and withdrawal of consent.
Rule 14 - Processing of personal data outside India - As per Section 16 of the Act, the government may, by notification, restrict the transfer of data to specified countries. This system of a negative list envisaged under the Act appears to have been diluted somewhat by the use of the phrase “any foreign State” under the Rules. This ambiguity should be addressed, and the language in the Rules altered to bring it in line with the Act. Further, the Rules also appear to be ultra vires the Act: as per the DPDP Act, personal data may be transferred outside India except to countries on the negative list; however, the dilution of the provision through the Rules appears to have created a whitelist, i.e. a permissible list of countries to which data can be transferred.
Rule 15 - Exemption from Act for research, archiving or statistical purposes - While creating an exception for research and statistical purposes is an understandable objective, the current wording of the provision is vague and open to mischief. The objective behind the provision is to ensure that research activities are not hindered by the Act's requirements, such as taking consent. However, as currently drafted, it could be argued that a research lab or research centre established by a large company, e.g. Google or Meta, could also seek exemption from the provisions of the Act for conducting “research”. Such research may not be shared with the public in general and may be used by the companies that funded or established the research centre. Therefore, further conditions should be attached to this provision to keep such research centres outside the purview of the exemption; conditions such as making the results of the research publicly available, or requiring a public-interest purpose, could be considered.
Rule 22 - Calling for Information from data fiduciary or intermediary - This Rule, read with the Seventh Schedule, appears to dilute the data minimisation and purpose limitation provisions of the Act. The wide ambit of powers appears to contravene the Supreme Court judgment in the Puttaswamy case, which places certain restrictions on the government when collecting personal data. This “omnibus” provision flouts guardrails like necessity and proportionality that are important to safeguard the fundamental right to privacy.
It should be clarified whether this Rule is merely an enabling provision to facilitate the sharing of information, and whether only competent authorities designated by law can avail of it.
Need for Confidentiality
Additionally, the Rule provides that the government may “require the Data Fiduciary or intermediary to not disclose” any request for information made under the Act. There is no requirement of confidentiality indicated in the governing section, i.e. Section 36, from which Rule 22 derives its authority. On the avoidance of secrecy in government business, the Supreme Court in State of U.P. v. Raj Narain, (1975) 4 SCC 428 held that
“In a government of responsibility like ours, where all the agents of the public must be responsible for their conduct, there can but few secrets. The people of this country have a right to know every public act, everything, that is done in a public way, by their public functionaries. They are entitled to know the particulars of every public transaction in all its bearing. The right to know, which is derived from the concept of freedom of speech, though not absolute, is a factor which should make one wary, when secrecy is claimed for transactions which can, at any rate, have no repercussions on public security. To cover with [a] veil [of] secrecy the common routine business, is not in the interest of the public. Such secrecy can seldom be legitimately desired. It is generally desired for the purpose of parties and politics or personal self-interest or bureaucratic routine. The responsibility of officials to explain and to justify their acts is the chief safeguard against oppression and corruption.”
In order to ensure that state interests are also protected, there may be an enabling provision whereby in certain instances confidentiality may be maintained, but there has to be a supervisory mechanism whereby such action may be judged on the anvil of legal propriety.
Education, Epistemologies and AI: Understanding the role of Generative AI in Education
Emotional Contagion: Theorising the Role of Affect in COVID-19 Information Disorder
By incorporating theoretical frameworks from psychology, sociology, and communication studies, we reveal the complex foundations of both the creation and consumption of misinformation. From this research, fear emerged as the predominant emotional driver in both the creation and consumption of misinformation, demonstrating how negative affective responses frequently override rational analysis during crises. Our findings suggest that effective interventions must address these affective dimensions through tailored digital literacy programs, diversified information sources on online platforms, and expanded multimodal misinformation research opportunities in India.
Click to download the research paper
The Cost of Free Basics in India: Does Facebook's 'walled garden' reduce or reinforce digital inequalities?
In 2015, Facebook introduced internet.org in India, where it faced considerable criticism. The programme was relaunched as Free Basics, ostensibly to provide free internet access to economically deprived sections of society. The content, i.e. websites, was pre-selected by Facebook and provided by third-party providers. Later, the Telecom Regulatory Authority of India (TRAI) ruled in favour of net neutrality, banning the programme in India. A crucial part of this debate was also whether the Free Basics programme would actually help those it set out to support.
This paper examines Facebook's Free Basics programme and its perceived role in bridging digital divides in the context of India, where it was widely debated, criticised, and finally banned by a ruling of the Telecom Regulatory Authority of India (TRAI). While the debate on the Free Basics programme has largely centred on the principles of network neutrality, this paper examines it from an ICT4D perspective, embedding the discussion in key development paradigms.
This essay begins by introducing the Free Basics programme in India and the associated proceedings, after which existing literature is reviewed to explore the concept of development and the perceived role of ICT in development, thereby laying out the scope of this discussion. The essay then examines whether the Free Basics programme reduces or reinforces digital inequality by looking at three development paradigms: (1) the construction of knowledge, power structures, and virtual colonisation in the Free Basics programme; (2) a sub-internet of the marginalised: second-level digital divides; and (3) the capabilities approach and the premise of connectivity as a source of equality and freedom.
The essay concludes with the view that the need for digital access should be viewed as a subset of overall contextual development, as opposed to programmes unto themselves and purely techno-solutionist approaches. Effective needs identification is required as part of ICT4D research to locate users at the centre, not the periphery, of these discussions. Lastly, policymakers should look into addressing more basic concerns, such as access and connectivity, and not just solutions that can be claimed as “quick wins” in policy implementation.
Mapping the Legal and Regulatory Frameworks of the Ad-Tech Ecosystem in India
In this paper, we map the legal and regulatory framework dealing with Advertising Technology (Adtech) in India as well as in a few other leading jurisdictions. Our analysis is divided into three main parts. The first covers general consumer regulations, which apply to all advertising irrespective of the medium, to ensure that advertisements are not false or misleading and do not violate any laws of the country. This part also covers consumer laws specific to malpractices in the technology sector, such as dark patterns, influencer-based advertising, etc.
The second part of the paper covers data protection laws in India and their relevance to the Adtech industry. The Adtech industry requires, and is based on, the collection and processing of large amounts of user data. It is therefore important to discuss the data protection and consent requirements laid out in the spate of recent data protection regulations, which have the potential to severely impact the Adtech industry.
The last part of the paper covers the competition angle of the Adtech industry. As with social media intermediaries, the global Adtech industry is dominated by two or three players, a scenario that lends itself easily to anti-competitive practices. It is therefore imperative to examine the competition law framework to see whether the laws as they exist are robust enough to deal with any anti-competitive practices that may be prevalent in the Adtech sector.
The research was reviewed by Pallavi Bedi. It can be accessed here.

