<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">
<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 1 to 15.</description>
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/cis-comments-and-feedback-to-digital-personal-data-protection-rules-2025"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/submission-to-igf-2025-call-for-thematic-inputs"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/privacy-policy-framework-for-indian-metal-health-apps"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/technology-facilitated-gender-based-violence-and-women2019s-political-participation-in-india-a-position-paper"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/digital-rights-and-isp-accountability-in-india-an-analysis-of-policies-and-practices"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/draft-circular-on-digital-lending-2013-transparency-in-aggregation-of-loan-products-from-multiple-lenders"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/comments-to-the-draft-digital-competition-bill"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/consultation-on-gendered-information-disorder-in-india"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/reconfiguring-data-governance-insights-from-india-and-eu"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/information-disorders-and-their-regulation"/>
        <rdf:li rdf:resource="https://cis-india.org/about/reports/fcra-july-september-2023"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/cis-privacy-international-digital-delivery-and-data-system-for-farmer-income-support"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/files/digital-tools-farmers-report"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/deceptive-design-in-voice-interfaces-impact-on-inclusivity-accessibility-and-privacy"/>
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/blog/cis-comments-and-feedback-to-digital-personal-data-protection-rules-2025">
    <title>The Centre for Internet and Society’s comments and feedback on the Digital Personal Data Protection Rules, 2025</title>
    <link>https://cis-india.org/internet-governance/blog/cis-comments-and-feedback-to-digital-personal-data-protection-rules-2025</link>
    <description>
        &lt;b&gt;The Centre for Internet &amp; Society (CIS) submitted its comments and feedback to the Digital Personal Data Protection Rules 2025 initiated by the Indian government.&lt;/b&gt;
        &lt;p&gt;&lt;b&gt;&lt;span style="text-decoration: underline;"&gt;Rule 3 - Notice given by data fiduciary to data principal&lt;/span&gt;&lt;/b&gt; - Under Section 5(2) of the DPDP Act, when the personal data of the data principal has been processed before the commencement of the Act, the data fiduciary is required to give notice to the data principal as soon as reasonably practicable. However, the Rules fail to specify what is meant by “reasonably practicable”, leaving the timeline for a notice in such circumstances unclear.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In addition, under Rule 3(a) the phrase “be presented and be understandable independently” is ambiguous. It is not clear whether the consent notice has to be presented independently of any other information or whether it only needs to be independently understandable and can be presented along with other information. &lt;/li&gt;
&lt;li&gt;In addition to this, we suggest that the requirement of “privacy by design” mentioned in the earlier drafts be brought back, with a focus on preventing deceptive design practices (dark patterns) from being used while collecting data.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;br /&gt;&lt;b&gt;&lt;span style="text-decoration: underline;"&gt;Rule 4 - Registration and obligations of Consent Manager&lt;/span&gt;&lt;/b&gt; - The concept of independent consent managers, similar to account aggregators in the financial sector and consent manager platforms in the EU, is a positive step. However, the Act and the Rules need to flesh out the interplay between the Data Fiduciary and the Consent Manager in more detail: for example, how does the data fiduciary know whether a data principal is using a consent manager; under what circumstances can the data fiduciary bypass the consent manager; and what are the penalties or consequences of doing so?&lt;/p&gt;
&lt;p&gt;&lt;b&gt;&lt;span style="text-decoration: underline;"&gt;Rule 6 - Reasonable security safeguards&lt;/span&gt;&lt;/b&gt; - While we appreciate the guidance provided in terms of the measures for security such as “encryption, obfuscation or masking or the use of virtual tokens”, it would also be good to refer to the SPDI Rules and include the example of the The international Standard IS/ISO/IEC 27001 on Information Technology - Security Techniques - Information Security Management System as an illustration to guide data fiduciaries.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;&lt;span style="text-decoration: underline;"&gt;Rule 7 - Intimation of personal data breach&lt;/span&gt;&lt;/b&gt; - As per the Rules, the data fiduciary on becoming aware of any personal data breach is required to notify the data principal and the Data Protection Board without delay; a plain reading of this Rule suggests that data fiduciary has to report the breach almost immediately, and this could be a practical challenge. Further, the absence of any threshold (materiality, gravity of the breach, etc) for notifying the data principal means that the data fiduciary will have to inform the data principal about even an isolated data breach which may not have an impact on the data principal. In this context, we recommend the Rule be amended to state that the data fiduciary should be required to inform the Data Protection Board about every data breach, however the data principal should be informed depending on the gravity and materiality of the breach and when it is likely to result in high risk to the data principal.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;While the Rules provide for intimation of a data breach, there is no provision imposing a positive obligation on the Data Fiduciary to take the measures necessary to mitigate the risk arising out of the breach. Although there is an obligation to report any such measures to the Data Principal (Rule 7(1)(c)) as well as to the DPBI (Rule 7(2)(b)(iii)), the Rules and the Act merely presume that the Data Fiduciary will take mitigation measures, which is perhaps why notification requirements exist for such breaches. This could lead to a situation where a Data Fiduciary takes no measures to mitigate the risks arising out of a data breach, yet remains in compliance with its legal obligations by merely notifying the Data Principal and the DPBI that no measures have been taken. In addition, the SPDI Rules state that in the event of a breach the body corporate is required to demonstrate that it had implemented reasonable security standards. This provision could be incorporated in this Rule to emphasise the need to implement robust security standards, which is one way to curb data breaches, and to ensure that there is a protocol to mitigate a breach.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;b&gt;&lt;span style="text-decoration: underline;"&gt;Rule 10 - Verifiable consent for processing of personal data of child or of person with disability who has a lawful guardian&lt;/span&gt;&lt;/b&gt; - The two mechanisms provided under the Rules to verify the age and identity of parents pre-suppose a high degree of digital literacy on the part of the parents. They may either give or refuse consent without thinking too much about the consequences arising out of giving or not giving consent. As there is always a risk of individuals not providing the correct information regarding their age or their relationship with the child, platforms may have to verify every user’s age; thereby preventing users from accessing the platform anonymously. Further, there is also a risk of data maximisation of personal data rather than data minimisation; i.e parents may be required to provide far more information than required to prove their identity. One recommendation/suggestion that we propose is to remove the processing of children's personal data from the ambit of this law, and instead create a separate standalone legislation dealing with children’s digital rights. Another important issue to highlight here is the importance of the Digital Protection Board and its capacity to levy fines and impose strictures on the platforms. We have seen from examples from other countries that platforms are forced to redesign and provide for better privacy and data protection mechanisms when the regulator steps in and imposes high penalties&lt;/p&gt;
&lt;p&gt;&lt;b&gt;&lt;span style="text-decoration: underline;"&gt;Rule 12 - Additional obligations of Significant Data Fiduciary&lt;/span&gt;&lt;/b&gt; - The Rules do not clarify which entities will be considered as a Significant Data Fiduciary, leaving that to the government notifications. This creates uncertainty for data fiduciaries, especially smaller organisations that might not be able to set up the mechanisms and people for conducting data protection impact assessment, and auditing. The Rule provides that SDFs will have to conduct an annual Data Protection Impact Assessment. While this is a step in the right direction, the Rules are currently silent on the granularity of the DPIA. Similarly for “audit” the Rules do not clarify what type of audit is needed and what the parameters are. It is therefore imperative that the government notifies the level of details that the DPIA and the audit need to go into in order to ensure that the SDFs actually address issues where their data governance practices are lacking and not use the DPIA as a whitewashing tactic.There is also a  need to reduce some of the ambiguity with regards to the parameters, and responsibilities in order to make it easier for startups and smaller players to comply with the regulations.  In addition, while there is a need to protect data and increase responsibility on organisations collecting sensitive data or large volumes of data, there is a need to look beyond compliance and look at ways that preserve the rights of the data principal. Hence significant data fiduciaries should also be given the added responsibility of collecting explicit consent from the data principal, and also have easier access for correction of data, grievance redressal and withdrawal of consent.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;&lt;span style="text-decoration: underline;"&gt;Rule 14 - Processing of personal data outside India&lt;/span&gt;&lt;/b&gt; - As per section 16 of the Act the government could, by notification, restrict the transfer of data to specific countries as notified. This system of a negative list envisaged under the Act appears to have been diluted somewhat by the use of the phrase “any foreign State” under the Rules. This ambiguity should be addressed and the language in the Rules may be altered to bring it in line with the Act. Further, the rules also appear to be ultra vires to the Act. As per the DPDP Act, personal data could be shared to outside India, except to countries which were on the negative list, however, the dilution of the provision through the rules appears to have now created a white list of countries; i.e. permissible list of countries to which data can be transferred.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;&lt;span style="text-decoration: underline;"&gt;Rule 15 Exemption from Act for research, archiving or statistical purposes&lt;/span&gt;- &lt;/b&gt;While creating an exception for research and statistical purposes is an understandable objective, the current wording of the provision is vague and subject to mischief. The objective behind the provision is to ensure that research activities are not hindered due to the requirements of taking consent, etc. as required under the Act. However the way the provision is currently drafted, it could be argued that a research lab or a research centre established by a large company, for e.g. Google, Meta, etc. could also seek exemptions from the provisions of this Act for conducting “research”. The research conducted may not be shared with the public in general and may be used by the companies that funded/established the research centre. Therefore there should be further conditions attached to this provision, that would keep such research centers outside the purview of the exemption. Conditions such as making the results of the research publicly available, public interest, etc. could be considered for this purpose.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;&lt;span style="text-decoration: underline;"&gt;Rule 22 - Calling for Information from data fiduciary or intermediary&lt;/span&gt; - &lt;/b&gt;This rule read with the seventh schedule appears to dilute the data minimisation and purpose limitation provisions provided for in the Act. The wide ambit of powers appears to be in contravention of the Supreme Court judgement in the Puttaswamy case, which places certain restrictions on the government while collecting personal data. This “omnibus” provision flouts guardrails like necessity and proportionality that are important to safeguard the fundamental right to privacy.&lt;/p&gt;
&lt;p&gt;It should be clarified whether this rule is merely an enabling provision to facilitate sharing of information, and whether only designated competent authorities as per law can avail of this provision.&lt;/p&gt;
&lt;p&gt;&lt;span style="text-decoration: underline;"&gt;Need for Confidentiality&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Additionally, the rule mandates that the government may “require the Data Fiduciary or intermediary to not disclose” any request for information made under the Act. There is no requirement of confidentiality indicated in the governing section, i.e. Section 36, from which Rule 22 derives its authority. On the avoidance of secrecy in government business, the Supreme Court in State of U.P. v. Raj Narain, (1975) 4 SCC 428 held that &lt;br /&gt; &lt;i&gt;“In a government of responsibility like ours, where all the agents of the public must be responsible for their conduct, there can but few secrets. The people of this country have a right to know every public act, everything, that is done in a public way, by their public functionaries. They are entitled to know the particulars of every public transaction in all its bearing. The right to know, which is derived from the concept of freedom of speech, though not absolute, is a factor which should make one wary, when secrecy is claimed for transactions which can, at any rate, have no repercussions on public security (2). To cover with [a] veil [of] secrecy the common routine business, is not in the interest of the public. Such secrecy can seldom be legitimately desired. It is generally desired for the purpose of parties and politics or personal self-interest or bureaucratic routine. The responsibility of officials to explain and to justify their acts is the chief safeguard against oppression and corruption.” &lt;/i&gt;&lt;br /&gt; In order to ensure that state interests are also protected, there may be an enabling provision whereby in certain instances confidentiality may be maintained, but there has to be a supervisory mechanism whereby such action may be judged on the anvil of legal propriety.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/cis-comments-and-feedback-to-digital-personal-data-protection-rules-2025'&gt;https://cis-india.org/internet-governance/blog/cis-comments-and-feedback-to-digital-personal-data-protection-rules-2025&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:creator>Pallavi Bedi, Vipul Kharbanda, Shweta Mohandas, Anubha Sinha and Isha Suri</dc:creator>

    <dc:subject>Privacy</dc:subject>
    <dc:subject>Internet Governance</dc:subject>
    <dc:subject>Data Governance</dc:subject>
    <dc:subject>Data Protection</dc:subject>
    <dc:subject>Data Management</dc:subject>

   <dc:date>2025-03-06T02:06:44Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development">
    <title>The Centre for Internet and Society’s comments and recommendations on the Report on AI Governance Guidelines Development</title>
    <link>https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development</link>
    <description>
        &lt;b&gt;The Centre for Internet &amp; Society (CIS) submitted its comments and recommendations on the Report on AI Governance Guidelines Development.&lt;/b&gt;
        
&lt;p&gt;With research assistance from Anuj Singh&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;I. Background&lt;/h2&gt;
&lt;p&gt;On 6 January 2025, the Subcommittee on ‘AI Governance and Guidelines Development’ under the Advisory Group put out the Report on AI Governance Guidelines Development, which advocated a whole-of-government approach to AI governance. This sub-committee was constituted by the Ministry of Electronics and Information Technology (MeitY) on 9 November 2023 to analyse gaps and offer recommendations for developing a comprehensive framework for the governance of Artificial Intelligence (AI). As various AI governance conversations take centre stage, this is a welcome step, and we hope that there are more opportunities, through public comments and consultations, to improve on this important AI document. &lt;br /&gt;&lt;br /&gt;CIS’s comments are in line with the submission guidelines; we have provided both comments and suggestions based on the headings and text provided in the report.&lt;/p&gt;
&lt;h2&gt;II. Governance of AI&lt;/h2&gt;
&lt;p&gt;The subcommittee report has explained its reasons for staying away from a definition. However, it would be helpful to set out the scope of AI at the outset of the report, given that different AI systems have different roles and functionalities; a clearer framework at the beginning can help readers better understand the scope of the conversation in the report. This section also states that AI can now “perform complex tasks without active human control or supervision”. While there are instances where AI is being used without active human control, there is a need to emphasise keeping humans in the loop. This has also been highlighted in the &lt;a href="https://oecd.ai/en/dashboards/ai-principles/P6"&gt;OECD AI principles&lt;/a&gt;, from which this report draws inspiration.&lt;/p&gt;
&lt;h3&gt;A. AI Governance Principles&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;A proposed list of AI Governance principles (with their explanations) is given below.&lt;/strong&gt;&lt;br /&gt;While referring to the OECD AI principles is a good first step in understanding global best practices, we suggest undertaking an exercise in mapping all global AI principles documents published by international and multinational organisations and civil society, to determine the principles that are most important for India. The OECD AI principles also come from regions that have better internet penetration and higher literacy rates than India; for those regions, the principle of “digital by design governance” may be achievable, but in India a digital-first approach, especially in governance, could lead to large-scale exclusion.&lt;/p&gt;
&lt;h3&gt;B. Considerations to operationalise the principles&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;1. Examining AI systems using a lifecycle approach &lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The sub-committee has taken a novel approach to defining the AI life cycle: the terms “Development, Deployment and Diffusion” have not been seen in any of the major publications on the AI lifecycle. While academics (e.g. &lt;a href="https://www.sciencedirect.com/org/science/article/pii/S1438887123002224"&gt;Chen et al. (2023)&lt;/a&gt;, &lt;a href="https://www.cell.com/patterns/pdfExtended/S2666-3899(22)00074-5"&gt;De Silva and Alahakoon (2022)&lt;/a&gt;) have pointed out that the AI life cycle contains the stages of design, development and deployment, others (&lt;a href="https://www.sciencedirect.com/science/article/pii/S2666389922000745"&gt;Ng et al. (2022)&lt;/a&gt;) have defined it as “data creation, data acquisition, model development, model evaluation and model deployment”. Even NASSCOM’s &lt;a href="https://nasscom.in/ai/pdf/the-developer%27s-playbook-for-responsible-ai-in-india.pdf"&gt;Responsible AI Playbook&lt;/a&gt; follows “conception, designing, development and deployment” as some of the key stages in the AI life cycle. Similarly, the OECD recognises “i) ‘design, data and models’; ii) ‘verification and validation’; iii) ‘deployment’; and iv) ‘operation and monitoring’” as the phases of the AI life cycle. The subcommittee could hence provide a citation as well as a justification for this novel approach to the AI lifecycle, and state its reasons for moving away from the recognised stages. Steering away from an understood approach could cause confusion among stakeholders who may not be well versed with AI terminology and the AI lifecycle to begin with.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2. Taking an ecosystem-view of AI actors &lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While the report rightly states that multiple actors are involved across the AI lifecycle, it is also important to note that the same actor could be involved in multiple stages of the AI lifecycle. Take, for example, an AI app used for disease diagnosis: the medical professional can be the data principal (using their own data), the data provider (using the app and thereby providing data), and the end user (using the app for diagnosis). Similarly, a government body can be the data provider, the developer (if the system is made in-house or outsourced through tenders), the deployer, as well as the end user. Hence, for each AI application there might be multiple actors who play different roles, and whose roles might not be static. &lt;br /&gt;&lt;br /&gt;While looking at governance approaches, the approach must ideally not be limited to responsibilities and liabilities, especially when the “data principal” and individual end users are highlighted as actors; it should also include rights and means of redressal in order to be a rights-based, people-centric approach to AI governance.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3. Leveraging technology for governance &lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While the use of a techno-legal approach in governance is picking up speed, there is a need to look at the existing Central and State capacity to undertake this, and at the ways it could affect people who still do not have access to the internet. One example of a techno-legal approach that has seen some success is the&lt;a href="https://www.techinasia.com/indian-state-running-pilot-put-land-records-blockchain"&gt; Bhumi programme&lt;/a&gt; in Andhra Pradesh, which used blockchain for land records; however, this also weakened local institutions and led to the exclusion of marginalised people (&lt;a href="https://www.tandfonline.com/doi/full/10.1080/01436597.2021.2013116"&gt;Kshetri (2021)&lt;/a&gt;). It was also noted that there was a need to strengthen existing institutions before using a technological measure. &lt;br /&gt;&lt;br /&gt;Secondly, while the sub-committee has emphasised the improvements in the quality of generative AI tools, there is a need to assess how these tools work for Indian use cases. It was reported last year that ChatGPT could not answer all the questions relating to the Indian civil services exam, and failed to correctly answer questions on geography, even though it was able to crack &lt;a href="https://indiaai.gov.in/news/chatgpt-fails-to-clear-the-prestigious-civil-service-examination"&gt;tough exams in the USA&lt;/a&gt;. In addition to this, a month ago the Finance Ministry advised government officials to refrain from using generative AI tools on official devices for fear of leakage of &lt;a href="https://www.thehindu.com/sci-tech/technology/indias-finance-ministry-asks-employees-to-avoid-ai-tools-like-chatgpt-deepseek/article69183180.ece"&gt;confidential information&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Thirdly, the subcommittee needs to assess India’s data preparedness for this scale of techno-legal approach. In our study on healthcare and AI in India, in which we surveyed medical professionals, hospitals and technology companies, a common understanding was that data quality in Indian datasets was an issue, and that there was some reliance on data from the global north. This could be similar in other sectors as well; hence, when such data is used to train a system, it could lead to harms and biases.&lt;/p&gt;
&lt;h2&gt;III. GAP ANALYSIS&lt;/h2&gt;
&lt;h3&gt;A. The need to enable effective compliance and enforcement of existing laws.&lt;/h3&gt;
&lt;p&gt;The sub-committee has highlighted the importance of ensuring that the growth of AI does not lead to unfair trade practices and market dominance. It is hence important to analyse whether the existing laws on antitrust and competition, and the regulatory capacity of the Competition Commission of India, are robust enough to deal with AI and the change in technology and technology developers.&lt;/p&gt;
&lt;p&gt;There is also an urgent need to assess the issues that might come under the ambit of competition throughout the lifecycle of AI, including in areas of chip manufacturing, compute, data, models and IP. While the players could keep changing in this evolving area of technology there is a need to strengthen the existing regulatory system, before looking at techno legal measures.&lt;/p&gt;
&lt;p&gt;We suggest that before a techno-legal approach is sought in all forms of governance, there is an urgent need to map the existing regulations, both central and state, assess how they apply to regulating AI, and assess the capacity of existing regulatory bodies to address issues of AI. In the case of healthcare, for example, there are multiple laws, policies and guidelines, as well as regulatory bodies, that apply to various stages of healthcare and various actors, and at times these regulations do not refer to each other or create duplications that could lead to a &lt;a href="https://www.kas.de/documents/d/politikdialog-asien/panorama_2024-1-107-122"&gt;lack of clarity&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Below we add our comments and suggestions on certain subsections of this section on &lt;strong&gt;the need to enable effective compliance and enforcement of existing laws&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;1. Intellectual property rights&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;a. Training models on copyrighted data and liability in case of&amp;nbsp; infringement&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While Section 14 of the Indian Copyright Act, 1957 provides copyright holders with exclusive rights to copy and store works, considering the fact that training AI models involves making &lt;a href="https://spicyip.com/2019/08/should-indian-copyright-law-prevent-text-and-data-mining.html"&gt;non-expressive uses of works&lt;/a&gt;, a straightforward conclusion may not be drawn easily. Hence, the presumption that training models on copyrighted data constitutes infringement is premature and unfounded.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The report states: “The Indian law permits a very closed list of activities in using copyrighted data without permission that do not constitute an infringement. Accordingly, it is clear that the scope of the exception under Section 52(1)(a)(i) of the Copyright Act, 1957 is extremely narrow. Commercial research is not exempted; not-for-profit institutional research is not exempted. Not-for-profit research for personal or private use, not with the intention of gaining profit and which does not compete with the existing copyrighted work is exempted.”&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Indian copyright law follows a ‘hybrid’ model of limitations and exceptions under s. 52(1). S. 52(1)(a), the ‘fair dealing’ provision, is more open-ended than the rest of the clauses in the section. Specifically, the Indian fair dealing provision permits fair dealing with any work (not being a computer programme) for the purposes of private or personal use, including research. &lt;br /&gt; &lt;br /&gt; If India is keen on indigenous AI development, specifically as it relates to foundation models, it should work towards developing frameworks for suitable exceptions, as may be appropriate. Lawmakers could distinguish between the different types of copyrighted works and public-interest purposes while considering the issue of infringement and liability.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;b. Copyrightability of work generated by using foundation models &lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We suggest that a public consultation would be a useful exercise in ensuring that the opinions and concerns of all stakeholders, including copyright holders, authors, and users, are taken into account.&lt;/p&gt;
&lt;h3&gt;C. The need for a whole-of-government approach.&lt;/h3&gt;
&lt;p&gt;While information existing in silos is a significant issue and roadblock, if the many existing guidelines and principles have taught us anything, it is that without specificity and direct applicability it is difficult for implementers to translate principles into their development, deployment and governance mechanisms. The committee assumes a sectoral understanding from the government of the various players in highly regulated sectors such as healthcare or financial services. However, as our recent study on &lt;a href="https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india"&gt;AI in healthcare&lt;/a&gt; indicates, there are significant information gaps when it comes to a shared understanding of what data is being used for AI development, where AI models are being developed, and what kinds of partnerships are being entered into for the development and deployment of AI systems. While the report also highlights concerns about the siloed regulatory framework, it is also important to consider how sector-specific challenges lend themselves to the cross-sectoral discussion. Consider, for example, an AI credit-scoring system in financial services that leads to exclusion errors.&lt;/p&gt;
&lt;p&gt;Additionally, consider an AI system being deployed for disease diagnosis. While both use predictive AI, the nature of the risks and harms differs. While there can be common, broad frameworks to test the efficacy of both AI models, the exact parameters for testing them would have to be unique. Therefore, it will be important to consider where bringing together cross-sectoral stakeholders will be useful and where deeper work at the sector level is needed.&lt;/p&gt;
&lt;h2&gt;IV. Recommendations&lt;/h2&gt;
&lt;h3&gt;1. To implement a whole-of-government approach to AI Governance, MeitY and the Principal Scientific Adviser should establish an empowered mechanism to coordinate AI Governance.&lt;/h3&gt;
&lt;p&gt;We would like to reiterate the earlier section and highlight the importance of considering how sector-specific challenges lend themselves to cross-sectoral discussion. While the whole-of-government approach is valuable, as it will help build a common understanding among different government institutions, it might not be sufficient for AI governance, because it rests on the implicit assumption that internal coordination among various government bodies is enough to manage AI-related risks.&lt;/p&gt;
&lt;h3&gt;2. To develop a systems-level understanding of India’s AI ecosystem, MeitY should establish, and administratively house, a Technical Secretariat to serve as a technical advisory body and coordination focal point for the Committee/ Group.&lt;/h3&gt;
&lt;p&gt;The Subcommittee report states that, at this stage, it is not recommended to establish a Committee/ Group or its Secretariat as statutory authorities, as such a decision requires significant analysis of gaps, requirements, and possible unintended outcomes. While these are valid considerations, adequate checks and balances are necessary. If the Secretariat is placed within MeitY, then safeguards must be in place to ensure that officials have autonomy in decision making. The subcommittee suggests that MeitY can bring in officials on deputation from other departments. The committee similarly proposes bringing in experts from industry; while this is important for informed policymaking, it also carries a risk of &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4931927"&gt;regulatory capture&lt;/a&gt;. Setting a cap on the percentage of industry representatives and requiring full disclosure of the affiliations of the experts involved are some of the safeguards that can be considered. We further suggest that members of civil society be considered for this Secretariat.&lt;/p&gt;
&lt;h3&gt;3. To build evidence on actual risks and to inform harm mitigation, the Technical Secretariat should establish, house, and operate an AI incident database as a repository of problems experienced in the real world that should guide responses to mitigate or avoid repeated bad outcomes.&lt;/h3&gt;
&lt;p&gt;The report suggests that the Technical Secretariat will develop a record of actual incidents of AI-related risks in India. In most instances, an AI incident database assumes that an unfavourable AI-related incident has already taken place, which implies that it is no longer a potential risk but an actual harm. This recommendation takes a post-facto approach to assessing AI systems, as opposed to conducting risk assessments prior to the actual deployment of an AI system. Further, it also lays emphasis on receiving reports from public sector organizations deploying AI systems. Given that public sector organizations would, in many cases, be the deployers of AI systems rather than the developers, they may have limited know-how about the functionality of these tools and, therefore, about their risks and harms.&lt;/p&gt;
&lt;p&gt;It is important to clarify and define what will be considered an AI risk, as this could also depend on the stakeholder: for a company, losing clients due to an AI system is a risk, and so is an individual being denied health insurance because of AI bias. With this understanding, while there is a need to keep actively assessing risks and the emergence of new risks, the Technical Secretariat could also undertake a mapping of the existing risks highlighted by academia, civil society, and international organisations, and begin the risk database with that. In addition, the “AI incident database” should also be open to research institutions and civil society organisations, similar to &lt;a href="https://oecd.ai/en/incidents"&gt;The OECD AI Incidents Monitor&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;4. To enhance transparency and governance across the AI ecosystem, the Technical Secretariat should engage the industry to drive voluntary commitments on transparency across the overall AI ecosystem and on baseline commitments for high capability/widely deployed systems.&lt;/h3&gt;
&lt;p&gt;It is commendable that the subcommittee in this report extends the transparency requirement to the government, with the example of law enforcement. This would create more trust in the systems and also place responsibility on the companies providing these services to comply with existing laws and regulations.&lt;/p&gt;
&lt;p&gt;While the transparency measures listed will ensure a better understanding of the processes of AI developers and deployers, there is also a need to bring in responsibility along with transparency. While this report also mentions ‘peer review by third parties’, we would additionally suggest auditing as a mechanism to advance transparency and responsibility. Our study on &lt;a href="https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india-pdf"&gt;AI data supply chain &amp;amp; auditability and healthcare in India&lt;/a&gt;, which surveyed 150 medical professionals, 175 respondents from healthcare institutions, and 175 respondents from technology companies, revealed that 77 percent of the healthcare institutions and 64 percent of the technology companies surveyed conducted audits or evaluations of the privacy and security measures for data.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://cis-india.org/home-images/AIGovernanceComments.png" alt="null" class="image-inline" title="AI Governance Comments" /&gt;&lt;/p&gt;
&lt;div class="visualClear"&gt;Source: CIS survey of professionals in AI and healthcare, January- April 2024. Medical professionals (n = 133); healthcare institutions (n = 162); technology companies (n = 171)&lt;/div&gt;
&lt;div class="visualClear"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h3&gt;5. Form a sub-group to work with MeitY to suggest specific measures that may be considered under proposed legislation like the Digital India Act (DIA) to strengthen and harmonise the legal framework, regulatory and technical capacity, and the adjudicatory set-up for digital industries, to ensure effective grievance redressal and ease of doing business.&lt;/h3&gt;
&lt;p&gt;It would be necessary to provide some clarity on where the Digital India Act process currently stands. While there were public consultations in 2023, we have not heard about progress in the development of the Act. The most recent discussion on the Act was in January 2025, when S Krishnan, Secretary, Ministry of Electronics and IT (MeitY), &lt;a href="https://www.financialexpress.com/life/technology-will-not-rush-in-bringing-digital-india-act-meity-secretary-3708673/"&gt;stated&lt;/a&gt; that they were in no hurry to carry forward the draft Digital India Act and a regulatory framework around AI. He also stated that the existing legal frameworks were currently sufficient to handle AI intermediaries. &lt;br /&gt; &lt;br /&gt; We would also like to highlight that during the consultations on the DIA it was proposed to replace the &lt;a href="https://vidhilegalpolicy.in/blog/explained-the-digital-india-act-2023/"&gt;Information Technology Act 2000&lt;/a&gt;. It is necessary that the subcommittee provide clarity on this, since if the DIA is enacted, this report’s Section III on gap analysis, especially around the IT Act and cybersecurity, will need to be revisited.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development'&gt;https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas, Amrita Sengupta and Anubha Sinha</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2025-03-06T06:32:45Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/submission-to-igf-2025-call-for-thematic-inputs">
    <title>Submission to IGF 2025 Call for Thematic Inputs</title>
    <link>https://cis-india.org/internet-governance/submission-to-igf-2025-call-for-thematic-inputs</link>
    <description>
        &lt;b&gt;Below are CIS's inputs submitted in response to the IGF 2025 Call for Thematic Inputs. They will inform the MAG’s discussions and assist them in determining the thematic priorities of the IGF 2025 programme.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;div class="views-field views-field-webform-submission-value-21"&gt;&lt;span class="field-content"&gt;On AI governance, AI risks and AI and data: &lt;br /&gt;
In the past many years, there have been rapid advances in the use of AI, most recently with the use of generative AI by end users and citizens. While questions of the ethical use of AI and the need for fairness, accountability, and transparency are not new, the very rapid, large-scale deployment of AI across different fields, and its easy use by different users, have raised questions of exacerbated harms and copyright infringement, among others, and much of the current focus is on developing governance for AI. There has also been an acceptance of the inevitability and near-omnipresence of AI across different contexts, which has furthered deliberations around harnessing AI for good. We ask that while “AI for good” is being mainstreamed as an issue, it is most critical that there are avenues to discuss and understand areas where AI should not be used (because its harms outweigh its benefits) or can be used only with limited resources (given the wide-ranging environmental impacts associated with AI and the resource-intensive demands of computational power and data centers), and mechanisms to actualize that. This means that we discuss AI governance not only in the contexts where it is already deployed but also the conditions in which it should not be deployed. &lt;br /&gt;
There also need to be greater and more specific regional conversations around data use for AI, especially for developing predictive AI systems in sensitive settings such as healthcare and financial services. The challenges of using data from one geographical setting in another have been well documented (consider, for example, training data from the global north being used to develop and deploy AI diagnostics for a country in the global south). There need to be more specific conversations and transparency around the data sources being used and how they can be both ethically sourced and made contextually relevant. IGF can support these conversations by inviting specific inputs from the multi-stakeholder community on these issues. &lt;br /&gt;
&lt;br /&gt;
On digital identity: &lt;br /&gt;
There is growing interest in digital public infrastructure and its use for public service delivery, which has the potential to offer benefits and meaningful governance, if done well, as certain examples may suggest. However, the implementation of digital ID systems, for example, particularly when they are the sole means of identification, raises critical questions. Such systems must have robust legislative backing, including privacy and data protection frameworks, if not regulations, along with sufficient legislative and judicial oversight to ensure accountability. Concerns about mission creep—where systems initially introduced for specific purposes gradually expand to other uses without adequate scrutiny—highlight the need for clearly defined objectives and legal safeguards. These systems should proactively assess and mitigate risks and harms before implementation. Furthermore, given that many of these systems rely heavily on private companies with limited oversight, it is crucial to ensure meaningful community participation and accountability throughout the entire process, to prioritize public interest over private gains. As we think about DPIs, we urge that their applicability, the necessary infrastructural availability, and the assessment of risks are adequately considered and detailed through the themes and sessions at IGF. &lt;br /&gt;
&lt;br /&gt;
On data governance and youth engagement: &lt;br /&gt;
Personal data is being captured by different actors in an unprecedented manner, at times without any legislative backing or grievance redressal mechanism. With the advent of generative AI, there are also concerns regarding the extent to which publicly available data is being used and for what purposes. These concerns are exacerbated when children’s data is used for generative AI purposes, in most cases without the knowledge or consent of the children. In an increasingly digitised world, how should children navigate the digital world? What is the appropriate age for children to access the internet, should there be age-gating, and if so, how should it be implemented? What are the mechanisms to determine parental verification? As more and more young people come online, it will be essential to define and develop frameworks for children’s use and experience of the internet, including having young people participate in these discussions. &lt;/span&gt;&lt;/div&gt;
&lt;div class="views-field views-field-nothing-2"&gt;&lt;span class="field-content"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div class="views-field views-field-webform-submission-value-20"&gt;&lt;span class="field-content"&gt;We request some steps to be taken at the IGF annual meeting and during its intersessional work: &lt;br /&gt;
-Reduce duplication of processes and efforts when it comes to implementation of the GDC, and continue to look at existing arenas like the WSIS+20 and IGF; foster greater coordination and collaboration among various UN bodies. &lt;br /&gt;
-Robust support for civil society participation at the IGF and other internet governance processes, especially from the Global South. &lt;br /&gt;
-Creation of well-resourced working groups to oversee GDC implementation work where relevant. &lt;/span&gt;&lt;/div&gt;
&lt;div class="views-field views-field-webform-submission-value-27"&gt;&lt;span class="field-content"&gt;Given
 the diverse set of stakeholders and the wide ranging nature topics 
discussed, it is understandable that the IGF covers a lot of ground. It 
would be beneficial if there might be deeper reflections on fewer issues
 if possible, so that there is greater depth in conversations as opposed
 to a much wider coverage. We understand that this might be difficult 
given what IGF sets out to do, but a more focused approach might help 
stakeholders have a better understanding of priorities and areas of 
focus. It will also be very helpful if all sessions have space for 
Q&amp;amp;A, even if it is for 10 minutes. It allows for listeners to 
reflect and also ask questions, where possible. &lt;/span&gt;&lt;/div&gt;
&lt;div class="views-field views-field-nothing-3"&gt;&lt;span class="field-content"&gt;&lt;br /&gt;For all inputs, please visit: https://intgovforum.org/en/igf-2025-proposed-issues (CIS's inputs are under ID322)&lt;/span&gt;&lt;/div&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/submission-to-igf-2025-call-for-thematic-inputs'&gt;https://cis-india.org/internet-governance/submission-to-igf-2025-call-for-thematic-inputs&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amrita Sengupta, Yesha Tshering Paul, and Pallavi Bedi</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance Forum</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2025-03-06T06:36:10Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/privacy-policy-framework-for-indian-metal-health-apps">
    <title>Privacy Policy Framework for Indian Mental Health Apps </title>
    <link>https://cis-india.org/internet-governance/blog/privacy-policy-framework-for-indian-metal-health-apps</link>
    <description>
        &lt;b&gt;This report analyses the privacy policies of mental health apps in India and provides recommendations for making the policies not only legally compliant but also user-centric&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The report’s findings indicate a significant gap in the structure and content of privacy policies in Indian mental health apps. This highlights the need to develop a framework that can guide organisations in developing their privacy policies. Therefore, this report proposes a holistic framework to guide the development of privacy policies for mental health apps in India. It focuses on three key segments that are an essential part of the privacy policy of any mental health app. First, it must include factors considered essential by the Digital Personal Data Protection Act 2023 (DPDPA) such as consent mechanisms, rights of the data principal, provision to withdraw consent etc. Second, the privacy policy must state how the data provided by them to these apps will be used. Finally, developers must include key elements, such as provisions for third-party integrations and data retention policies.”&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Click to download the full research paper &lt;a class="external-link" href="https://cis-india.org/internet-governance/files/privacy-policy-framework.pdf"&gt;here&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/privacy-policy-framework-for-indian-metal-health-apps'&gt;https://cis-india.org/internet-governance/blog/privacy-policy-framework-for-indian-metal-health-apps&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Chakshu Sang and Shweta Mohandas</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Data Protection</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2025-01-10T00:11:24Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/technology-facilitated-gender-based-violence-and-women2019s-political-participation-in-india-a-position-paper">
    <title>Technology-facilitated Gender-based Violence and Women’s Political Participation in India: A Position Paper</title>
    <link>https://cis-india.org/internet-governance/blog/technology-facilitated-gender-based-violence-and-women2019s-political-participation-in-india-a-position-paper</link>
    <description>
        
&lt;p style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Read the full paper &lt;a href="https://cis-india.org/internet-governance/blog/technology-facilitated-gender-based-violence-and-women2019s-political-participation-in-india-a-position-paper-pdf" class="internal-link" title="Technology-facilitated Gender-based Violence and Women’s Political Participation in India: A Position Paper PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p id="docs-internal-guid-85683bb7-7fff-dab5-0977-5f8fb91cd4bd" style="text-align: justify;" dir="ltr"&gt;Political participation of women is fundamental to democratic processes and promotes building of more equitable and just futures. Rapid adoption of technology has created avenues for women to access the virtual public sphere, where they may have traditionally struggled to access the physical public spaces, due to patriarchal norms and violence in the physical sphere.&amp;nbsp; While technology has provided tools for political participation, information seeking, and mobilization, it has also created unsafe online spaces for women, thus often limiting their ability to actively engage online.&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;This essay examines the emotional and technological underpinnings of gender-based violence faced by women in politics. It further explores how gender-based violence is weaponised to diminish the political participation and influence of women in the public eye. Through real-life examples of gendered disinformation and sexist hate speech targeting women in politics in India, we identify affective patterns in the strategies deployed to adversely impact public opinion and democratic processes. We highlight the emotional triggers that play a role in exacerbating online gendered harms, particularly for women in public life. We also examine the critical role of technology and online platforms in this ecosystem – both in perpetuating and amplifying this violence as well as attempting to combat it.&amp;nbsp;&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;We argue that it is critical to investigate and understand the affective structures in place, and the operation of patriarchal hegemony that continues to create unsafe access to public spheres, both online and offline, for women. We also advocate for understanding technology design and identifying tools that can actually aid in combating TFGBV. Further, we point to the continued need for greater accountability from platforms, to mainstream gender related harms and combat it through diversified approaches.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/technology-facilitated-gender-based-violence-and-women2019s-political-participation-in-india-a-position-paper'&gt;https://cis-india.org/internet-governance/blog/technology-facilitated-gender-based-violence-and-women2019s-political-participation-in-india-a-position-paper&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Yesha Tshering Paul, Amrita Sengupta</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Gender</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2024-12-18T19:12:31Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/digital-rights-and-isp-accountability-in-india-an-analysis-of-policies-and-practices">
    <title>Digital Rights and ISP Accountability in India: An Analysis of Policies and Practices</title>
    <link>https://cis-india.org/internet-governance/blog/digital-rights-and-isp-accountability-in-india-an-analysis-of-policies-and-practices</link>
    <description>
        &lt;b&gt;This report presents a comprehensive evaluation of India's four largest Internet Service Providers (ISPs)—Reliance Jio, Bharti Airtel, Vodafone-Idea (Vi), and BSNL—examining their commitment to digital rights and transparency. &lt;/b&gt;
        
&lt;p id="docs-internal-guid-1de908cb-7fff-8363-e993-29b5365585ab" style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Read the full report &lt;a href="https://cis-india.org/internet-governance/digital-rights-and-isp-accountability-in-india-an-analysis-of-policies-and-practices-pdf" class="internal-link" title="Digital Rights and ISP Accountability in India: An Analysis of Policies and Practices PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;India's four largest Internet Service Providers (ISPs)—Reliance Jio, Bharti Airtel, Vodafone-Idea (Vi), and BSNL collectively serve 98% of India's internet subscribers, with Jio and Airtel commanding a dominant market share of 80.87%. The assessment comes at a critical juncture in India's digital landscape, marked by a 279.34% increase in internet subscribers from 2014 to 2024, alongside issues such as proliferation of internet shutdowns.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Adapting the Ranking Digital Rights' (RDR) 2022 methodology framework for its 2022 Telco Giants Scorecard, our analysis reveals significant disparities in governance structures and commitment to digital rights across these providers. Bharti Airtel emerges as the leader in governance framework implementation, maintaining dedicated human rights policies and board-level oversight. In contrast, Vi and Jio demonstrate mixed results with limited explicit human rights commitments, while BSNL exhibits the weakest governance structure with minimal human rights considerations. Notably, all ISPs lack comprehensive human rights impact assessments for their advertising and algorithmic systems.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The evaluation of freedom of expression commitments reveals systematic inadequacies across all providers. Terms and conditions are frequently fragmented and difficult to access, while providers maintain broad discretionary powers for account suspension or termination without clear appeal processes. There is limited transparency regarding content moderation practices and government takedown requests, coupled with insufficient disclosure about algorithmic decision-making systems that affect user experiences.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Privacy practices among these ISPs show minimal evolution since previous assessments, with persistent concerns about policy accessibility and comprehension. The investigation reveals limited transparency regarding algorithmic processing of personal data, widespread sharing of user data with third parties and government agencies, and inadequate user control over personal information. None of the evaluated ISPs maintain clear data breach notification policies, raising significant concerns about user data protection.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The concentrated market power of Jio and Airtel, combined with weak digital rights commitments across the sector, raises substantial concerns about the state of user privacy and freedom of expression in India's digital landscape. The lack of transparency in website blocking and censorship, inconsistent implementation of blocking orders, limited accountability in handling government requests, insufficient protection of user rights, and inadequate grievance redressal mechanisms emerge as critical areas requiring immediate attention.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;As India continues its rapid digital transformation, our findings underscore the urgent need for both regulatory intervention and voluntary industry reforms. The development of standardised transparency reporting, strengthened user rights protections, and robust accountability mechanisms will be crucial in ensuring that India's digital growth aligns with fundamental rights and democratic values.&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/digital-rights-and-isp-accountability-in-india-an-analysis-of-policies-and-practices'&gt;https://cis-india.org/internet-governance/blog/digital-rights-and-isp-accountability-in-india-an-analysis-of-policies-and-practices&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Anubha Sinha, Yesha Tshering Paul, and Sherina Poyyail</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Telecom</dc:subject>
    
    
        <dc:subject>internet governance</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2025-01-23T10:04:44Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/draft-circular-on-digital-lending-2013-transparency-in-aggregation-of-loan-products-from-multiple-lenders">
    <title>Draft Circular on Digital Lending – Transparency in Aggregation of Loan Products from Multiple Lenders</title>
    <link>https://cis-india.org/internet-governance/blog/draft-circular-on-digital-lending-2013-transparency-in-aggregation-of-loan-products-from-multiple-lenders</link>
    <description>
        &lt;b&gt;CIS is grateful for the opportunity to submit comments on the “Draft Circular on Digital Lending: Transparency in Aggregation of Loan Products from Multiple Lenders” to the Reserve Bank of India. We welcome the opportunity provided to comment on the guidelines, and we hope that the final guidelines will consider the interests of all the stakeholders to ensure that it protects the privacy and digital rights of all consumers, including marginalised and vulnerable users, while encouraging innovation and improved service delivery in the fintech ecosystem. Our comments look at two concerns addressed by the draft guidelines, i.e. reducing information asymmetry and market fairness. In addition to this we share comments around a third concern that requires additional scrutiny, i.e. data privacy and security.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Edited and reviewed by Amrita Sengupta&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;

&lt;div class="WordSection1" style="text-align: justify; "&gt;
&lt;p class="MsoNormal" style="text-align: justify; "&gt;&lt;span&gt;The Centre for Internet and Society (CIS) is a non-profit organisation that undertakes interdisciplinary research on the internet and digital technologies from policy and academic perspectives. Through its diverse initiatives, CIS explores, intervenes in, and advances contemporary discourse and practices around the internet, technology and society in India, and elsewhere.&lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal" style="text-align: justify; "&gt;&lt;span&gt;CIS is grateful for the opportunity to submit comments on the “&lt;/span&gt;&lt;a href="https://www.rbi.org.in/Scripts/bs_viewcontent.aspx?Id=4424"&gt;&lt;span&gt;Draft Circular on Digital Lending: Transparency in Aggregation of Loan Products from Multiple Lenders&lt;/span&gt;&lt;/a&gt;&lt;span&gt;” to the Reserve Bank of India. Over the last twelve years, CIS has worked extensively on research around privacy, online safety, cross border flows of data, security, and innovation. We welcome the opportunity provided to comment on the guidelines, and we hope that the final guidelines will consider the interests of all the stakeholders to ensure that it protects the privacy and digital rights of all consumers, including marginalised and vulnerable users, while encouraging innovation and improved service delivery in the fintech ecosystem.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt; &lt;/span&gt;&lt;b&gt;&lt;span&gt;Introduction&lt;/span&gt;&lt;/b&gt;&lt;/h3&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt;The draft circular on ‘Transparency in Aggregation of Loan Products from Multiple Lenders’ is a much needed and timely&lt;/span&gt;&lt;span&gt; document that builds on the Guidelines on Digital Lending. Both documents have maintained the principles of customer centricity and transparency at their core. Reducing information asymmetry and deceptive patterns in the digital lending ecosystem is of utmost importance, given the adverse effects experienced by borrowers. Digital lending is one of the fastest-growing fintech segments in India,&lt;a href="#_ftn1" name="_ftnref1"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[1]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; having grown exponentially from nine billion U.S. dollars in 2012 to nearly 150 billion dollars by 2020, and is estimated to reach 515 billion USD by 2030.&lt;a href="#_ftn2" name="_ftnref2"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[2]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; At the same time, accessing digital credit through digital lending applications has been found to be associated with a high risk to financial and psychological health due to a host of practices that lead to overindebtedness.&lt;a href="#_ftn3" name="_ftnref3"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[3]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; These include post contract exploitation through hidden transaction fees, abusive debt collection practices, privacy violations and fluctuations in interest rates. 
Both illegal/fraudulent and licensed lending service providers have been employing aggressive marketing and debt collection tactics&lt;a href="#_ftn4" name="_ftnref4"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[4]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; that exacerbate the risks of all the above harms.&lt;a href="#_ftn5" name="_ftnref5"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[5]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; With additional safeguards in place, the guidelines can provide a suitable framework to ensure borrowers have the opportunity and information needed to make an informed decision while accessing intermediated credit, and reduce harmful financial and health related consequences. &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt;In this submission, we seek to provide some comments on the broader issues the guidelines address. Our comments recommend additional safeguards, keeping in mind the gamut of services provided by lending service providers (LSPs). We will frame our comment around two main concerns addressed by the draft guidelines: 1) reducing information asymmetry 2) market fairness. In addition to this we will share comments around a third concern that requires additional scrutiny, i.e. 3) data privacy and security.&lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;&lt;span&gt;Reducing Information Asymmetry &lt;/span&gt;&lt;/b&gt;&lt;/h3&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt;The guidelines aim to define responsibilities of LSPs in maintaining transparency to ensure borrowers are aware of the identity of the regulated entity (RE) providing the loan, and make informed decisions based on consistent information to weigh their options. &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;b&gt;&lt;span&gt;Comments:&lt;/span&gt;&lt;/b&gt;&lt;span&gt; Guideline iii suggests that the digital view should include information that helps the borrower to compare various loan offers. This includes “the name(s) of the regulated entity (RE) extending the loan offer, amount and tenor of loan, the Annual Percentage Rate (APR) and other key terms and conditions” alongside a link to the key facts statement (KFS). The earlier ‘Guidelines on Digital Lending’ specifies that APR should be an all-inclusive cost including margin, credit costs, operating costs, verification charges, processing fees etc. only excluding penalties, and late payment charges. &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;b&gt;&lt;span&gt;Recommendations:&lt;/span&gt;&lt;/b&gt;&lt;span&gt; All users of digital lending services may not be aware that APR is inclusive of all non-contingent charges. Requiring digital loan aggregators to provide messages/notifications boosting consumer awareness of regulations and their rights can help reduce violations. We also recommend that this information is made available in various languages such that a wide range of users are able to access this information. Further we recommend that accountability be laid on the LSPs to adhere to an inclusive platform design that allows for easy access to this information. &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;&lt;span&gt;Market Fairness&lt;/span&gt;&lt;/b&gt;&lt;/h3&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt;Guidelines ii-iv also serve to outline practices to curb anti-competitive placement of digital loan products through regulating use of dark patterns and increasing transparency.&lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;b&gt;&lt;span&gt;Comments: &lt;/span&gt;&lt;/b&gt;&lt;span&gt;Section ii mandates that LSPs must disclose the approach utilised to determine the willingness of lenders to offer a loan. Whether this estimation includes factors associated with the customer profile like age, income and occupation etc. should be clearly disclosed as well. &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;b&gt;&lt;span&gt;Recommendations:&lt;/span&gt;&lt;/b&gt;&lt;span&gt; Alongside the predictive estimate of the lender’s willingness, to improve transparency loan aggregators may be asked to share an overall rate of rejection or approval as well within the digital view. &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt;While the ‘Guidelines on Digital Lending’&lt;a href="#_ftn6" name="_ftnref6"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[6]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; clearly state that LSPs must charge any fees from the REs and not the borrowers, further clarification should be provided on whether LSPs can charge fees for loan aggregation services themselves, i.e. for providing information of available loan products.&lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;&lt;span&gt;Privacy and Data Security&lt;/span&gt;&lt;/b&gt;&lt;/h3&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt;The earlier ‘Guidelines on Digital Lending’&lt;a href="#_ftn7" name="_ftnref7"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[7]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; require LSPs to only store minimal contact data regarding the customer and provide consumers the ability to seek their data being removed, i.e. the right to be forgotten by the provider, once they are no longer seeking their services. Personal financial information is not to be stored by LSPs. It is the responsibility of REs to ensure that LSPs do not store extraneous customer data, and to stipulate clear policy guidelines regarding the storage and use of customer data.&lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;b&gt;&lt;span&gt;Comments: &lt;/span&gt;&lt;/b&gt;&lt;span&gt;It is important to ascertain the nature of anonymised and personally identifiable customer data that may be currently utilised by LSPs or processed on their platforms, in the course of providing a range of services within the digital credit ecosystem to borrowers and lenders. &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt;Certain functions that loan aggregators perform may expand their role beyond a simple intermediary. LSPs also provide services assessing borrower’s creditworthiness, payment services, and agent-led debt collection services for lenders. Some LSPs may be involved in more than one stage of the loan process which may make them privy to additional personal information about a borrower. There may be cases in which a consumer registers on an LSP’s platform without going ahead with any loan applications. It is unclear who is responsible for maintaining data security and privacy or providing grievance redressal at these times.&lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt;Section ii allows them to provide estimates of lenders’ willingness to borrowers. Some LSPs connecting REs with borrowers may also provide services using alternative and even non-financial data to assess the creditworthiness of thin-file credit seekers. Whether there are any restrictions on the use of AI tools in these processes, and the handling of customer data should also be clarified or limited. The right to be forgotten may be difficult to enforce with the use of certain machine learning and other artificial intelligence models. As innovation in credit scoring mechanisms continues, it is also important to bring such financial service providers under the ambit of guidelines for digital lending platforms. &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;b&gt;&lt;span&gt;Recommendations:&lt;/span&gt;&lt;/b&gt;&lt;span&gt; The burden of maintaining privacy and data security should fall on aggregators of loan products in addition to regulated entities as well. Include guidelines limiting the use of PII (and PFI if applicable) for purposes other than connecting borrowers to a loan provider without consumer consent. Informed and explicit consumer consent should be sought for any additional purposes like marketing, market research, product development, cross-selling, delivery of other financial and commercial services, including providing access to other loan products in the future.&lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt;Often consumers are required to register on a platform by providing contact details and other personal information. An initial digital view of loan products available could be displayed for all users without registering to help borrowers determine whether they would like to register for the LSP’s services. This can help reduce the amount of consumer contact information and other personally identifiable information (PII) that is collected by LSPs.&lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;b&gt;&lt;span&gt; &lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;&lt;span&gt;Emerging Risks&lt;/span&gt;&lt;/b&gt;&lt;/h3&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt;Emerging consumer risks within the digital lending ecosystem expose borrowers to additional risks like over-indebtedness and risks arising from fraud, data misuse, lack of transparency and inadequate redress mechanisms.&lt;a href="#_ftn8" name="_ftnref8"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[8]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; These draft guidelines clearly layout mechanisms to reduce risks arising from lack of transparency. Similar efforts need to be put behind reduction of data misuse by delimiting the time period and – and the risk for overindebtedness &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt;One of the biggest sources of consumer risk has been at the debt recovery stage. Aggressive debt collection practices &lt;/span&gt;&lt;span&gt;have had deleterious effects on consumers’ mental health, social standing and even lead some to consider suicide. Extant guidelines assume a recovery agent will be contacting the consumer.&lt;a href="#_ftn9" name="_ftnref9"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[9]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; LSPs may also set up automated payments and use digital communication like app notifications, messages and automated calls in the debt recovery process as well. The impact of repeated notifications and automated debt payments also needs to be considered in future iterations of guidelines addressing risk in the digital lending ecosystem.&lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div style="text-align: justify; "&gt;
&lt;hr /&gt;
&lt;div id="ftn1"&gt;
&lt;p class="MsoNormal"&gt;&lt;a href="#_ftnref1" name="_ftn1"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[1]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;span&gt; &lt;span&gt;“Funding distribution of FinTech companies in India in second quarter of 2023, by segment”, &lt;i&gt;Statista&lt;/i&gt;, accessed 30 May 2024, &lt;/span&gt;&lt;/span&gt;&lt;a href="https://www.statista.com/statistics/1241994/india-fintech-companies-share-by-segment/"&gt;&lt;span&gt;https://www.statista.com/statistics/1241994/india-fintech-companies-share-by-segment/&lt;/span&gt;&lt;/a&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn2"&gt;
&lt;p class="MsoNormal"&gt;&lt;a href="#_ftnref2" name="_ftn2"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[2]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;span&gt; &lt;span&gt;Anushka Sengupta, “India’s digital lending market likely to grow $515 bn by 2030: Report”, &lt;i&gt;Economic Times,&lt;/i&gt; 17 June 2023, &lt;/span&gt;&lt;/span&gt;&lt;a href="https://bfsi.economictimes.indiatimes.com/news/fintech/indias-digital-lending-market-likely-to-grow-515-bn-by-2030-report/101057337"&gt;&lt;span&gt;https://bfsi.economictimes.indiatimes.com/news/fintech/indias-digital-lending-market-likely-to-grow-515-bn-by-2030-report/101057337&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn3"&gt;
&lt;p class="MsoNormal"&gt;&lt;a href="#_ftnref3" name="_ftn3"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[3]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;span&gt; &lt;span&gt;“Mobile Instant Credit: Impacts, Challenges, and Lessons for Consumer Protection”, Center for Effective Global Action, September 2023, &lt;/span&gt;&lt;/span&gt;&lt;a href="https://cega.berkeley.edu/wp-content/uploads/2023/09/FSP_Digital_Credit_Research_test.pdf"&gt;&lt;span&gt;https://cega.berkeley.edu/wp-content/uploads/2023/09/FSP_Digital_Credit_Research_test.pdf&lt;/span&gt;&lt;/a&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn4"&gt;
&lt;p class="MsoNormal"&gt;&lt;a href="#_ftnref4" name="_ftn4"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[4]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;span&gt; &lt;span&gt;Jinit Parmar, “Ruthless Recovery Agents, Aggressive Loan Outreach Put the Spotlight on Bajaj Finance”, &lt;i&gt;Moneycontrol&lt;/i&gt;, 18 April 2023, &lt;/span&gt;&lt;/span&gt;&lt;a href="https://www.moneycontrol.com/news/business/ruthless-recovery-agents-aggressive-loan-outreach-put-spotlight-on-bajaj-finance-10423961.html"&gt;&lt;span&gt;https://www.moneycontrol.com/news/business/ruthless-recovery-agents-aggressive-loan-outreach-put-spotlight-on-bajaj-finance-10423961.html&lt;/span&gt;&lt;/a&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn5"&gt;
&lt;p class="MsoNormal"&gt;&lt;a href="#_ftnref5" name="_ftn5"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[5]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;span&gt; &lt;span&gt;Prudhviraj Rupavath, “Suicide Deaths Mount after Unregulated Lending Apps Resort to Exploitative Recovery Practices”, &lt;i&gt;Newsclick&lt;/i&gt;, 26 December 2020 &lt;/span&gt;&lt;/span&gt;&lt;a href="https://www.newsclick.in/Suicide-Deaths-Mount-Unregulated-Lending-Apps-Resort-Exploitative-Recovery-Practices"&gt;&lt;span&gt;https://www.newsclick.in/Suicide-Deaths-Mount-Unregulated-Lending-Apps-Resort-Exploitative-Recovery-Practices&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt;Priti Gupta and Ben Morris, “India's loan scams leave victims scared for their lives”, &lt;i&gt;BBC&lt;/i&gt;, 7 June 2022, &lt;/span&gt;&lt;a href="https://www.bbc.com/news/business-61564038"&gt;&lt;span&gt;https://www.bbc.com/news/business-61564038&lt;/span&gt;&lt;/a&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn6"&gt;
&lt;p class="MsoNormal"&gt;&lt;a href="#_ftnref6" name="_ftn6"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[6]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;span&gt; Section 4.1, Guidelines on Digital Lending, 2022.&lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn7"&gt;
&lt;p class="MsoNormal"&gt;&lt;a href="#_ftnref7" name="_ftn7"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[7]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;span&gt; &lt;/span&gt;&lt;span&gt;Section 11, Guidelines on Digital Lending, 2022.&lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal"&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn8"&gt;
&lt;p class="MsoNormal"&gt;&lt;a href="#_ftnref8" name="_ftn8"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[8]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;span&gt; &lt;/span&gt;&lt;span&gt;“The Evolution of the Nature and Scale of DFS Consumer Risks: A Review of Evidence”, CGAP, February 2022, &lt;/span&gt;&lt;a href="https://www.cgap.org/sites/default/files/publications/slidedeck/2022_02_Slide_Deck_DFS_Consumer_Risks.pdf"&gt;&lt;span&gt;https://www.cgap.org/sites/default/files/publications/slidedeck/2022_02_Slide_Deck_DFS_Consumer_Risks.pdf&lt;/span&gt;&lt;/a&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn9"&gt;
&lt;p class="MsoNormal"&gt;&lt;a href="#_ftnref9" name="_ftn9"&gt;&lt;sup&gt;&lt;sup&gt;&lt;span&gt;[9]&lt;/span&gt;&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;span&gt; &lt;span&gt;Section 2, Outsourcing of Financial Services - Responsibilities of regulated entities employing Recovery Agents, 2022.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/draft-circular-on-digital-lending-2013-transparency-in-aggregation-of-loan-products-from-multiple-lenders'&gt;https://cis-india.org/internet-governance/blog/draft-circular-on-digital-lending-2013-transparency-in-aggregation-of-loan-products-from-multiple-lenders&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>garima</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Digital Lending</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2024-07-03T16:40:51Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/comments-to-the-draft-digital-competition-bill">
    <title>Comments to the Draft Digital Competition Bill, 2024</title>
    <link>https://cis-india.org/internet-governance/blog/comments-to-the-draft-digital-competition-bill</link>
    <description>
        &lt;b&gt;This submission is a response by researchers at the Centre for Internet and Society India (CIS) to the draft Digital Competition Bill, 2024, published by the Committee on Digital Competition Law (CDCL), Ministry of Corporate Affairs (MCA), (hereafter “draft DCB” or “draft Bill”).


&lt;/b&gt;
        
&lt;p&gt;We would like to thank the Ministry of Corporate Affairs for soliciting public comments on this important legislation and are grateful for this opportunity.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify;"&gt;We would like to thank the Ministry of Corporate Affairs for soliciting public comments on this important legislation and are grateful for this opportunity.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;At the outset, CIS affirms the Committee’s approach to transition from a predominantly ex-post to an ex-ante approach for regulating competition in digital markets. The Committee’s assessment of the ex-post regime being too time-consuming for the digital domain has been substantiated by frequent and expensive delays in antitrust disputes, a fact that has also recently drawn the attention of the Ministry of Corporate Affairs.&amp;nbsp; And not just in India, the ex-post regime has been found to be too time-consuming in other jurisdictions as well, as a consequence of which many other countries are also moving towards an ex-post regime for digital markets. This also allows India to be in harmony with both developing and developed countries, which makes regulating global competition more consistent and efficient.&amp;nbsp; In fact, “international cooperation between competition authorities” and “greater coherence between regulatory frameworks” are key in facilitating global investigations and lowering the cost of doing business.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Moreover, by adopting a principles-based approach to designing the law’s obligations, the draft Bill also addresses the concern that ex-ante regulations, due to their prescriptive nature, tend to be sector-agnostic. The fact that these principles are based on the findings of the Parliamentary Standing Committee’s (PSC) Report on ‘Anti-Competitive Practices by Big Tech Companies’ only lends them more evidence. The draft DCB empowers the Commission to clarify the Obligations for different services, and also provides CCI with the flexibility to undertake independent consultations to accommodate varying contexts and the needs of different core digital services. We do, however, have specific comments regarding implementing some of these provisions, which are elaborated in the accompanying document.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;We would also like to emphasise that adequate enforcement of an ex-ante approach requires bolstering and strengthening regulatory capacity. Therefore, to minimise risks relating to underenforcement as well as overenforcement, CCI, its Digital Markets and Data Unit (DMDU), and the Director General’s (DG) office will have to substantially increase their technical capacity. A comparison of CCI’s current strength with its global counterparts that have adopted or are in the process of adopting an ex-ante approach to competition regulation reveals a stark picture. For example, the European Union (EU) had over 870 people in its DG COMP unit in 2022, and its DG CONNECT unit is expected to hire another 100 people in 2024 alone. Similarly, the United Kingdom’s Competition and Markets Authority (CMA) has a permanent staff of 800+, the Japan Fair Trade Commission (JTFC) has about 400 officials just for regulating anti-competitive conduct, and South Korea’s KFTC has about 600 employees. In contrast, CCI and DG, combined, have a sanctioned strength of only 195 posts, out of which 71 remain vacant. Bridging this capacity gap through frequent and high-quality recruitment is, therefore, the need of the hour. Most importantly, there is a need to create a culture of interdisciplinary coordination among legal, technical, and economic domains.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Moreover, as we come to rely on an increasingly digitised economy, most technology companies will work with critical technology components such as key infrastructure, algorithms, and Artificial Intelligence to business models that are based on data collection and processing practices. Consequently, there will be a need to bolster CCI’s capacity in the technical domain by hiring and integrating new roles including technologists, software and hardware engineers, product managers, UX designers, data scientists, investigative researchers, and subject matter experts dealing with new and emerging areas of technology.21 Therefore, we recommend CCI to ensure that the proposed DMDU has the requisite diversity of skills to effectively use existing tools for enforcement and is also able to keep pace with new and emerging technological developments.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Along with this overall observation of CCI's capacity, we have also submitted detailed comments on specific clauses of the draft DCB. These submissions are structured across the following six categories: i) Classification of Core Digital Services; ii) Designation of a Systemically Significant Digital Enterprise (SSDE) and Associate Digital Enterprise (ADE); iii) Obligations on SSDEs and ADEs; iv) Powers of the Commission to Conduct an Inquiry; v) Penalties and Appeals; and vi) Powers of the Central Government. In addition to these suggestions, the detailed comments and their summarised version focus on three important gaps in the draft DCB – limited representation from workers’ groups and MSMEs, exclusion of merger and acquisition (M&amp;amp;A) from the discussions, and lack of a formalised framework for interregulatory coordination.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;For our full comments, &lt;a href="https://cis-india.org/telecom/comments-to-draft-digital-competition-bill.pdf" class="internal-link"&gt;click here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For a detailed summary of our comments, &lt;a href="https://cis-india.org/internet-governance/20240517_ddcb-comments-summary" class="internal-link" title="20240517_DDCB comments summary"&gt;click here&lt;/a&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/comments-to-the-draft-digital-competition-bill'&gt;https://cis-india.org/internet-governance/blog/comments-to-the-draft-digital-competition-bill&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Abhineet Nayyar, Isha Suri, and Pallavi Bedi (in alphabetical order)</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Telecom</dc:subject>
    
    
        <dc:subject>Digital Media</dc:subject>
    
    
        <dc:subject>Digital Technologies</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2024-06-11T10:13:22Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/consultation-on-gendered-information-disorder-in-india">
    <title>Consultation on Gendered Information Disorder in India</title>
    <link>https://cis-india.org/internet-governance/blog/consultation-on-gendered-information-disorder-in-india</link>
    <description>
        &lt;b&gt;On 14th and 15th March 2024, Centre for Internet and Society (CIS) collaborated with Point of View (POV) to organise a consultation in Mumbai to explore the phenomenon of gendered information disorder in India, spanning various aspects from healthcare and sexuality to financial literacy, and the role of digital mediums, social media platforms and AI in exacerbating these issues.&lt;/b&gt;
        
&lt;p style="text-align: justify;"&gt;The event was convened by Amrita Sengupta (Research and Programme Lead, CIS), Yesha Tshering Paul (Researcher, CIS), Bishakha Datta (Programme Lead, POV)&amp;nbsp; and Prarthana Mitra (Project Anchor, POV)..* Download the event report &lt;a href="https://cis-india.org/internet-governance/event-report-consultation-on-gendered-information-disorder-in-india-pdf" class="internal-link" title="Event Report: Consultation on Gendered Information Disorder in India pdf"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;The event brought together experts, researchers and grassroots activists from Maharashtra and across the country to discuss their experiences with information disorder, and the multifaceted challenges posed by misinformation, disinformation and malinformation targeting gender and sexual identities.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Understanding Information Disorders: &lt;/strong&gt;The consultation commenced with a look at the wide spectrum of information disorder by Yesha Tshering Paul and Amrita Sengupta. Misinformation&lt;a href="#_ftn1"&gt;&lt;sup&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; was highlighted as false information disseminated unintentionally, such as inaccurate COVID cures that spread rapidly during the pandemic. In contrast, disinformation involves the intentional spread of false information to cause harm, exemplified by instances like deepfake pornography. A less recognized form, malinformation, involves the deliberate misuse of accurate information to cause harm, as seen in the misleading representation of regret rates among trans individuals who have undertaken gender affirming procedures. Yesha highlighted that the definitions of these concepts are often varied, and thus the importance of moving beyond definitions to centre user experiences of this phenomenon.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;The central theme of this discussion was the concept of “gendered” information disorder, referring to the targeted dissemination of false or harmful online content based on gender and sexual identity. This form of digital misogyny intersects with other societal marginalizations, disproportionately affecting marginalised genders and sexualities. The session also emphasised the critical link between information disorders and gendered violence (both online and in real life). Such disorders perpetuate stereotypes, gender-based violence, and silences victims, fostering an environment that empowers perpetrators and undermines victims' experiences. &lt;em&gt; &lt;/em&gt;&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Feminist Digital Infrastructure: &lt;/strong&gt;Digital infrastructures shape our online spaces. Sneha PP (Senior Researcher, CIS) introduced the concept of &lt;a href="https://cis-india.org/Feminist_Infrastructures_Report"&gt;feminist infrastructures&lt;/a&gt; as a potential solution that helps mediate discourse around gender, sexuality, and feminism in the digital realm. Participant discussions emphasised the need for accessible, inclusive, and design-conscious digital infrastructures that consider the intersectionality and systemic inequalities impacting content creation and dissemination. Strategies were discussed to address online gender-based violence and misinformation, focusing on survivor-centric approaches and leveraging technology for storytelling.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Gendered Financial Mis-/Dis-information: &lt;/strong&gt;Garima Agrawal (Researcher, CIS) with inputs by Debarati Das (Co-Lead, Capacity Building at PoV) and Chhaya Rajput (Helpline Facilitator, &lt;a href="https://techsakhi.in/"&gt;Tech Sakhi&lt;/a&gt;) led the session by&lt;strong&gt; &lt;/strong&gt;highlighting&lt;strong&gt; &lt;/strong&gt;gender disparities in digital and financial literacy and access to digital devices and financial services in India, despite women constituting a higher percentage of new internet users. This makes marginalised users more vulnerable to financial scams. Drawing from the ongoing &lt;a href="https://cis-india.org/raw/user-experiences-of-digital-financial-risks-and-harms"&gt;financial harms project &lt;/a&gt;at CIS, Garima spoke about the diverse manifestations of financial information disorders arising from misleading information that results in financial harm, ranging from financial influencers (and in some cases deepfakes of celebrities) endorsing platforms they do not use, to fake or unregulated loan and investment services deceiving users. Breakout groups of participants then analysed several case studies of real-life financial frauds that targeted women and the queer community to identify instances of misinformation, disinformation and malinformation. Emotional manipulation and the exploitation of trust were identified as key tactics used to deceive victims, with repercussions extending beyond monetary loss to emotional, verbal, and even sexual violence against these individuals.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Fact-Checking Fake News and Stories: &lt;/strong&gt;The pervasive issue of fake news in India was discussed in depth, especially in the era of widespread social media usage. Only 41% of Indians trust the veracity of the information encountered online. Aishwarya Varma, who works at &lt;a href="https://www.thequint.com/news/webqoof"&gt;Webqoof&lt;/a&gt; (The Quint’s fact checking initiative) as a Fact Check Correspondent, led an informative session detailing the&lt;strong&gt; &lt;/strong&gt;various accessible tools that can be used to fact-check and debunk false information. Participants engaged in hands-on activities by using their smartphones for reverse image searches, emphasising the importance of verifying images and their sources. Archiving was identified as another crucial aspect to preserve accurate information and debunk misinformation.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Gendered Health Mis-/Dis-information: &lt;/strong&gt;This participant-led discussion highlighted structural gender biases in healthcare and limited knowledge about mental health and menstrual health as significant concerns, along with the discrimination and social stigma faced by the LGBTQ+ community in healthcare facilities. One participant brought up their difficulty accessing sensitive and non-judgmental healthcare, and the insensitivity and mockery faced by them and other trans individuals in healthcare facilities. Participants suggested the increased need for government-funded campaigns on sexual and reproductive health rights and menstrual health, and&amp;nbsp; the importance of involving marginalised communities in healthcare related decision-making to bring about meaningful change.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Mis-/Dis-information around Sex, Sexuality, and Sexual Orientation:&lt;/strong&gt; Paromita Vohra, Founder and Creative Director of &lt;a href="https://agentsofishq.com/"&gt;Agents of Ishq&lt;/a&gt;—a &amp;nbsp;multi-media project about sex, love and desire that uses various artistic mediums to create informational material and an inclusive, positive space for different expressions of sex and sexuality—led this session. She started with an examination of the term “disorder” and its historical implications, and highlighted how religion, law, medicine, and psychiatry had previously led to the classification of homosexuality as a “disorder”. The session delved into the misconceptions surrounding sex and sexuality in India, advocating for a broader understanding that goes beyond colonial knowledge systems and standardised sex education. She brought up the role of media in altering perspectives on factual events, and the need for more initiatives like Agents of Ishq to address the need for culturally sensitive and inclusive sexuality language and education that considers diverse experiences, emotions, and identities.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Artificial Intelligence and Mis-/Dis-information: &lt;/strong&gt;Padmini Ray Murray, Founder of &lt;a href="https://designbeku.in/5af7a99eb82f45889b682cfe9e52b3ae"&gt;Design Beku&lt;/a&gt;—a  collective that emerged from a desire to explore how technology and  design can be decolonial, local, and ethical— talked about the role of  AI in amplifying information disorder and its ethical considerations,  stemming from its biases in language representation and content  generation. Hindi and regional Indian languages remain significantly  under-represented in comparison to English content, leading to skewed  AI-generated content. Search results reflect the gendered biases in AI  and further perpetuate existing stereotypes and reinforce societal  biases. She highlighted the real-world impacts of AI on critical  decision-making processes such as loan approvals, and the influence of  AI on public opinion via media and social platforms. Participants  expressed concerns about the ethical considerations of AI, and  emphasised the need for responsible AI development, clear policies, and  collaborative efforts between tech experts, policymakers, and the public.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify;"&gt;* The Centre for Internet and Society undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. Point of View focuses on sexuality, disability and technology to empower women and other marginalised genders to shape and inhabit digital spaces.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;a href="#_ftnref1"&gt;&lt;sup&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Claire Wardle, &lt;em&gt;Understanding Information Disorder (2020). &lt;/em&gt;&lt;a href="https://firstdraftnews.org/long-form-article/understanding-information-disorder/"&gt;https://firstdraftnews.org/long-form-article/understanding-information-disorder/&lt;/a&gt;.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/consultation-on-gendered-information-disorder-in-india'&gt;https://cis-india.org/internet-governance/blog/consultation-on-gendered-information-disorder-in-india&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amrita Sengupta and Yesha Tshering Paul</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Gender, Welfare, and Privacy</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2024-10-15T10:57:06Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/reconfiguring-data-governance-insights-from-india-and-eu">
    <title>Reconfiguring Data Governance: Insights from India and the EU</title>
    <link>https://cis-india.org/internet-governance/blog/reconfiguring-data-governance-insights-from-india-and-eu</link>
    <description>
        &lt;b&gt;This policy paper is the result of a workshop organised jointly by the Tilburg Institute of Law, Technology and Society, Netherlands, the Centre for Communication Governance at the National Law University Delhi, India, and the Centre for Internet &amp;amp; Society, India, in January 2023. The workshop brought together a number of academics, researchers, and industry representatives in Delhi to discuss a range of issues at the core of data governance theory and practice. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/ReconfiguringDataGovernance.png/@@images/70165fe1-cc66-4cac-9f99-b7485c87218a.png" alt="Reconfiguring Data Governance" class="image-inline" title="Reconfiguring Data Governance" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The workshop aimed to compare and assess lessons from data governance from India and the European Union, and to make recommendations on how to design fit-for-purpose institutions for governing data and AI in the European Union and India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This policy paper collates key takeaways from the workshop by grounding them across three key themes: how we conceptualise data; how institutional mechanisms as well as community-centric mechanisms can work to empower individuals, and what notions of justice these embody; and finally a case study of enforcement of data governance in India to illustrate and evaluate the claims in the first two sections.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This report was a collaborative effort between researchers Siddharth Peter De Souza, Linnet Taylor, and Anushka Mittal at the Tilburg Institute for Law, Technology and Society (Netherlands), Swati Punia, Sristhti Joshi, and Jhalak M. Kakkar at the Centre for Communication Governance at the National Law University Delhi (India) and Isha Suri, and Arindrajit Basu at the Centre for Internet &amp;amp; Society, India.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Click to download the &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/reconfiguring-data-governance.pdf"&gt;&lt;b&gt;report&lt;/b&gt;&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/reconfiguring-data-governance-insights-from-india-and-eu'&gt;https://cis-india.org/internet-governance/blog/reconfiguring-data-governance-insights-from-india-and-eu&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Swati Punia, Srishti Joshi, Siddharth Peter De Souza, Linnet Taylor, Jhalak M. Kakkar, Isha Suri, Arindrajit Basu, and Anushka Mittal</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Data Governance</dc:subject>
    
    
        <dc:subject>Data Protection</dc:subject>
    
    
        <dc:subject>Data Management</dc:subject>
    

   <dc:date>2024-02-20T00:30:00Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/information-disorders-and-their-regulation">
    <title>Information Disorders and their Regulation</title>
    <link>https://cis-india.org/internet-governance/blog/information-disorders-and-their-regulation</link>
    <description>
        &lt;b&gt;The Indian media and digital sphere, perhaps a crude reflection of the socio-economic realities of the Indian political landscape, presents a unique and challenging setting for studying information disorders. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;In the last few years, ‘fake news’ has garnered interest across the political spectrum, as affiliates of both the ruling party and its opposition have seemingly partaken in its proliferation. The COVID-19 pandemic added to this phenomenon, allowing for xenophobic, communal narratives, and false information about health-protective behaviour to flourish, all with potentially deadly effects. This report maps and analyses the government’s regulatory approach to information disorders in India and makes suggestions for how to respond to the issue.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In this study, we gathered information by scouring general search engines, legal databases, and crime statistics databases to cull out data on a) regulations, notifications, ordinances, judgments, tender documents, and any other legal and quasi-legal materials that have attempted to regulate ‘fake news’ in any format; and b) news reports and accounts of arrests made for allegedly spreading ‘fake news’. Analysing this data allows us to determine the flaws and scope for misuse in the existing system. It also gives us a sense of the challenges associated with regulating this increasingly complicated issue while trying to avoid the pitfalls of the present system.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Click to download the &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/information-disorder-their-regulation.pdf/"&gt;full report here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/information-disorders-and-their-regulation'&gt;https://cis-india.org/internet-governance/blog/information-disorders-and-their-regulation&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Torsha Sarkar, Shruti Trikanad, and Anoushka Soni</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Information Disorders</dc:subject>
    
    
        <dc:subject>Access to Knowledge</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Information Security</dc:subject>
    
    
        <dc:subject>Information Technology</dc:subject>
    

   <dc:date>2024-01-31T14:20:20Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/about/reports/fcra-july-september-2023">
    <title>FCRA July - September 2023</title>
    <link>https://cis-india.org/about/reports/fcra-july-september-2023</link>
    <description>
        &lt;b&gt;&lt;/b&gt;
        
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/about/reports/fcra-july-september-2023'&gt;https://cis-india.org/about/reports/fcra-july-september-2023&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2023-10-18T23:49:10Z</dc:date>
   <dc:type>File</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/cis-privacy-international-digital-delivery-and-data-system-for-farmer-income-support">
    <title>Digital Delivery and Data System for Farmer Income Support</title>
    <link>https://cis-india.org/internet-governance/blog/cis-privacy-international-digital-delivery-and-data-system-for-farmer-income-support</link>
    <description>
        &lt;b&gt;This report, jointly published by the Centre for Internet &amp;amp; Society and Privacy International, highlights the digital systems deployed by the government to augment farmer income. It analyses the PM-KISAN and KALIA schemes in Odisha and Andhra Pradesh. &lt;/b&gt;
        &lt;h2&gt;Executive Summary&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;This study provides an in-depth analysis of two direct cash transfer schemes in India – Krushak Assistance for Livelihood and Income Augmentation (KALIA) and Pradhan Mantri Kisan Samman Nidhi (PM-KISAN) – which aim to provide income support to farmers. The paper examines the role of data systems in the delivery and transfer of funds to the beneficiaries of these schemes, and analyses their technological framework and processes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We find that the use of digital technologies, such as direct benefit transfer (DBT) systems, can improve the efficiency and ensure timely transfer of funds. However, we observe that the technology-only system is not designed with the last beneficiaries in mind; these people not only have no or minimal digital literacy but are also faced with a lack of technological infrastructure, including internet connectivity and access to the system that is largely digital.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Necessary processes need to be implemented and personnel on the ground enhanced in the existing system, to promptly address the grievances of farmers and other challenges.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This study critically analyses the direct cash transfer scheme and its impact on the beneficiaries. We find that despite the benefits of direct benefit transfer (DBT) systems, there have been many instances of failures, such as the exclusion of several eligible households from the database.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The study also looks at gender as one of the components shaping the impact of digitisation on beneficiaries. We also identify infrastructural and policy constraints, in sync with the technological framework adopted and implemented, that impact the implementation of digital systems for the delivery of welfare. These include a lack of reliable internet connectivity in rural areas and low digital literacy among farmers. We analyse policy frameworks at the central and state levels and find discrepancies between the discourse of these schemes and their implementation on the ground.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We conclude the study by discussing the implications of datafication, which is the process of collecting, analysing, and managing data through the lens of data justice. Datafication can play a crucial role in improving the efficiency and transparency of income support schemes for farmers. However, it is important to ensure that the interests of primary beneficiaries are considered – the system should work as an enabling, not a disabling, factor. This appears to be the case in many instances since the current system does not give primacy to the interests of farmers. We offer recommendations for policymakers and other stakeholders to strengthen these schemes and improve the welfare of farmers and end users.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="https://cis-india.org/internet-governance/files/digital-tools-farmers-report/at_download/file" class="external-link"&gt;&lt;b&gt;Click to download the full report&lt;/b&gt;&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/cis-privacy-international-digital-delivery-and-data-system-for-farmer-income-support'&gt;https://cis-india.org/internet-governance/blog/cis-privacy-international-digital-delivery-and-data-system-for-farmer-income-support&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>sameet</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Digital Technologies</dc:subject>
    
    
        <dc:subject>Data Governance</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2023-10-18T23:40:25Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/files/digital-tools-farmers-report">
    <title>Digital Tools Farmers Report</title>
    <link>https://cis-india.org/internet-governance/files/digital-tools-farmers-report</link>
    <description>
        &lt;b&gt;&lt;/b&gt;
        
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/files/digital-tools-farmers-report'&gt;https://cis-india.org/internet-governance/files/digital-tools-farmers-report&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Sameet Panda</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2023-10-18T23:13:01Z</dc:date>
   <dc:type>File</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/deceptive-design-in-voice-interfaces-impact-on-inclusivity-accessibility-and-privacy">
    <title>Deceptive Design in Voice Interfaces: Impact on Inclusivity, Accessibility, and Privacy </title>
    <link>https://cis-india.org/internet-governance/blog/deceptive-design-in-voice-interfaces-impact-on-inclusivity-accessibility-and-privacy</link>
    <description>
        &lt;b&gt;This article was commissioned by the Pranava Institute, as part of their project titled Design Beyond Deception, supported by the University of Notre Dame - IBM's Tech Ethics Lab. The article examines the design of voice interfaces (VIs) to anticipate potential deceptive design patterns in them. It also presents design and regulatory recommendations to mitigate these practices. &lt;/b&gt;
        &lt;p&gt;The original blog post can be accessed &lt;a class="external-link" href="https://www.design.pranavainstitute.com/post/deceptive-design-in-voice-interfaces-impact-on-inclusivity-accessibility-and-privacy"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3&gt;&lt;b&gt;Introduction&lt;/b&gt;&lt;/h3&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;Voice Interfaces (VIs) have come a long way in recent years and are easily available as inbuilt technology with smartphones, downloadable applications, or standalone devices. In line with growing mobile and internet connectivity, there is now an increasing interest in India in internet-based multilingual VIs which have the potential to enable people to access services that were earlier restricted by language (primarily English) and interface (text-based systems). This current interest has seen even global voice applications such as Google Home and Amazon’s Alexa being available in &lt;a class="itht3 TWoY9" href="https://www.businesstoday.in/technology/news/story/now-talk-to-alexa-seamlessly-in-hindi-english-and-hinglish-231469-2019-10-09" rel="noopener noreferrer" target="_blank"&gt;Hindi&lt;/a&gt; (Singal, 2019) as well as the &lt;a class="itht3 TWoY9" href="https://voice.cis-india.org/#mapping-actors" rel="noopener noreferrer" target="_blank"&gt;growth&lt;/a&gt; of multilingual voice bots for certain banks, hotels, and hospitals (Mohandas, 2022).&lt;/p&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;The design of VIs can have a significant impact on the behavior of the people using them. Deceptive design patterns or design practices that trick people into taking actions they might otherwise not take (Tech Policy Design Lab, n.d.), have gradually become pervasive in most digital products and services. Their use in visual interfaces has been widely &lt;a class="itht3 TWoY9" href="https://dl.acm.org/doi/pdf/10.1145/3400899.3400901" rel="noopener noreferrer" target="_blank"&gt;criticized&lt;/a&gt; by researchers (Narayanan, Mathur, Chetty, and Kshirsagar, 2020), along with recent &lt;a class="itht3 TWoY9" href="https://tacd.org/manipulative-design-practices-online-what-policy-solutions-for-the-eu-and-the-u-s/" rel="noopener noreferrer" target="_blank"&gt;policy interventions&lt;/a&gt; (Schroeder and Lützow-Holm Myrstad, 2022) as well. As VIs become more relevant and mainstream, it is critical to anticipate and address the use of deceptive design patterns in them. This article, based on our learnings from the &lt;a class="itht3 TWoY9" href="http://voice.cis-india.org/index.html" rel="noopener noreferrer" target="_blank"&gt;study&lt;/a&gt; of VIs in India, examines the various types of deceptive design patterns in VIs and focuses on their implications in terms of linguistic barriers, accessibility, and privacy.&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;Potential deceptive design patterns in VIs&lt;/b&gt;&lt;/h3&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;Our research findings suggest that VIs in India are still a long way off from being inclusive, accessible and privacy-preserving. While there has been some development in multilingual VIs in India, their compatibility has been limited to a few Indian languages (Mohandas, 2022) (Naidu, 2022)., The potential of VIs as a tool for people with vision loss and certain cognitive disabilities such as dyslexia is widely recognized (Pradhan, Mehta, and Findlater, 2018), but our conversations suggest that most developers and designers do not consider accessibility when conceptualizing a voice-based product, which leads to interfaces that do not understand non standard speech patterns, or have only text-based privacy policies (Mohandas, 2022). Inaccessible privacy policies full of legal jargon along with the lack of regulations specific to VIs,  also make people vulnerable to privacy risks.&lt;/p&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;Deceptive design patterns can be used by companies to further these gaps in VIs. As with visual interfaces, the affordances and attributes of VI can determine the way in which they can be used to manipulate behavior. Kentrell Owens, et.al in their recent &lt;a class="itht3 TWoY9" href="https://homes.cs.washington.edu/~kentrell/static/papers/owensEuroUSEC2022-preprint.pdf" rel="noopener noreferrer" target="_blank"&gt;research&lt;/a&gt; lay down six unique properties of VIs that may be used to implement deceptive design patterns (Owens, Gunawan, Choffnes, Emami-Naeini, Kohno, and Roesner, 2022). Expanding upon these properties, and drawing from our research, we look at how they can be exacerbated in India.&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;Making processes cumbersome&lt;/b&gt;&lt;/h3&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;VIs are often limited by their inability to share large amounts of information through voice. They thus operate in combination with a smartphone app or a website. This can be intentionally used by platforms to make processes such as changing privacy settings or accessing the full privacy notice inconvenient for people to carry out. In India, this is experienced while unsubscribing from services such as Amazon Prime (Owens et al., 2022). Amazon Echo Dot presently allows individuals to subscribe to an Amazon Prime membership using a voice command, but directs them to use the website in order to unsubscribe from the membership. This can also manifest in the form of canceling orders and changing privacy settings.&lt;/p&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;VIs follow a predetermined linear structure that ensures a tightly controlled interaction. People make decisions based on the information they are provided with at various steps. Changing their decision or switching contexts could involve going back several steps. People may accept undesirable actions from the VI in order to avoid this added effort (Owens et al., 2022). The urgency to make decisions on each step can also cause people to make unfavorable choices such as allowing consent to third party apps. The VI may prompt advertisements and push for the company’s preferred services in this controlled conversation structure, which the user cannot side-step. For example, while setting up the Google voice assistant on any device, it nudges people to sign into their Google account. This means the voice assistant gets access to their web and app activity and location history at this step. While the data management of Google accounts can be tweaked through the settings, it may get skipped during a linear set-up structure. Voice assistants can also push people to opt into features such as ads personalisation, default news sources, and location tracking.&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;Making options difficult to find&lt;/b&gt;&lt;/h3&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;Discoverability is another challenge for VIs. This means that people might find it difficult to discover available actions or options using just voice commands. This gap can be misused by companies to trick people into making undesirable choices. For instance, while purchasing items, the VI may suggest products that have been sponsored and not share full information on other cheaper products, forcing people to choose without complete knowledge of their options. Many mobile based voice apps in India use a combination of images or icons with the voice prompts to enable discoverability of options and potential actions, which excludes people with vision loss (Naidu, 2022). These apps comprise a voice layer added to an otherwise touch-based visual platform so that people are able to understand and navigate through all available options using the visual interface, and use voice only for purposes such as searching or narrating. This means that these apps cannot be used through voice alone, making them disadvantageous for people with vision loss.&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;Discreet integration with third parties&lt;/b&gt;&lt;/h3&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;VIs can use the same voice for varying contexts. In the case of Alexa, Skills, which are apps on its platform, have the same voice output and invocation phrases as its own in-built features. End users find it difficult to differentiate between an interaction with Amazon and that with Skills which are third-party applications. This can cause users to share information that they otherwise would not have with third parties (Mozilla Foundation, 2022). There are numerous Amazon Skills inHindi and people might not be aware that the developers of these Skills are &lt;a class="itht3 TWoY9" href="https://www.theverge.com/2021/3/5/22315211/amazon-alexa-skills-how-to-remove-security-privacy-problems" rel="noopener noreferrer" target="_blank"&gt;not vetted &lt;/a&gt;by Amazon. This misunderstanding can create significant privacy or security risks if Skills are linked to contacts, banking, or social media accounts.&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;Lack of language inclusivity &lt;/b&gt;&lt;/h3&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;The lack of local language support, colloquial translations, and accents can lead to individuals not receiving clear and complete information. VI’s failure to understand certain accents can also make people feel isolated (Harwell, 2018). While in India voice assistants and even voice bots are available in few Indic languages, the default initial setup, privacy policies, and terms and conditions are still in English. The translated policies also use literary language which is difficult for people to understand, and miss out on colloquial terms. This could mean that the person might have not fully understood these notices and hence not have given informed consent. Such use of unclear language and unavailability of information in Indic languages can be viewed as a deceptive design pattern.&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;Making certain choices more apparent &lt;/b&gt;&lt;/h3&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;The different dimensions of voice such as volume, pitch, rate, fluency, pronunciation, articulation, and emphasis can be controlled and manipulated to implement deceptive design patterns. VIs may present the more privacy-invasive options more loudly or clearly, and the more privacy-preserving options more softly or quickly. It can use tone modulations to shame people into making a specific choice (Owens et al., 2022). For example, media streaming platforms may ask people to subscribe for a premium account to avoid ads in normal volume and mention the option to keep ads in a lower volume. Companies have also been observed to discreetly integrate product advertisements in voice assistants using tone. SKIN, a neurotargeting advertising strategy business, used a change of tone of the voice assistant to suggest a dry throat to advertise a drink (Chatellier, Delcroix, Hary, and Girard-Chanudet, 2019).&lt;/p&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;The attribution of gender, race, class, and age through stereotyping can create a persona of the VI for the user. This can extend to personality traits, such as an extroverted or an introverted, docile or aggressive character (Simone, 2020). The default use of female voices with a friendly and polite persona for voice assistants has drawn criticism for perpetuating harmful gender stereotypes (Cambre and Kulkarni, 2019). Although there is an option to change the wake word “Alexa” in Amazon’s devices, certain devices and third party apps do not work with another wake word (Ard, 2021). Further, projection of demographics can also be used to employ deceptive design patterns. For example, a VI persona that is constructed to create a perception of intelligence, reliability, and credibility can have a stronger influence on people’s decisions. Additionally, the effort to make voice assistants as human sounding as possible without letting people know they are human, could create a number of &lt;a class="itht3 TWoY9" href="https://www.nytimes.com/2019/05/22/technology/personaltech/ai-google-duplex.html" rel="noopener noreferrer" target="_blank"&gt;issues&lt;/a&gt; (X. Chen and Metz, 2019). First time users might divulge sensitive information thinking that they are interacting with a person. This becomes more ethically challenging when persons with vision loss are not able to know who they are interacting with.&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;Recording without notification &lt;/b&gt;&lt;/h3&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;Owens et al speak about VIs occupying physical domains due to which they have a much wider impact as opposed to a visual interface (Owens et al., 2022). The always-on nature of virtual assistants could result in personal information of a guest being recorded without their knowledge or consent as consent is only given at the setup stage by the owner of the device or smartphone.&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;Making personalization more convenient through data collection&lt;/b&gt;&lt;/h3&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;VIs are trained to adapt to the experience and expertise of the user. Virtual assistants provide personalization and the possibility to download a number of skills, save payment information, and phone contacts. In order to facilitate differentiation between multiple users on the same VI, individuals talking to the device are profiled based on their speech patterns and/or voice biometrics. This also helps in controlling or restricting content for children (Naidu, 2022). There is also tracking of commands to identify and list their intent for future use. The increase of specific and verified data can be used to provide better targeted advertisements, as well possibly be shared with law enforcement agencies in certain cases. &lt;a class="itht3 TWoY9" href="https://www.business-standard.com/article/current-affairs/razorpay-shared-donor-data-with-police-claims-alt-news-122070501255_1.html" rel="noopener noreferrer" target="_blank"&gt;Recently&lt;/a&gt;, a payment gateway company was made to share customer information to the law enforcement without their customer’s knowledge. This included not just the information about the client but also revealed sensitive personal data of the people who had used the gateway for transactions to the customer. While providing such details are not illegal and companies are meant to comply with requests from law enforcement, if more people knew of the possibility of every conversation of the house being accessible to law enforcement they would make more informed choices of what the VI records.&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;Reducing friction in actions desired by the platform&lt;/b&gt;&lt;/h3&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;One of the fundamental advantages of VIs is that it can reduce several steps to perform an action using a single command. While this is helpful to people interacting with it, the feature can also be used to reduce friction from actions that the platform wants them to take. These actions could include sharing sensitive information, providing consent to further data sharing, and making purchases. An &lt;a class="itht3 TWoY9" href="http://insider.com/kids-alexa-buy-700-worth-of-toys-moms-credit-card-2019-12" rel="noopener noreferrer" target="_blank"&gt;&lt;span class="D-jZk"&gt;example&lt;/span&gt;&lt;/a&gt; of this can be seen where children have found it very easy to purchase items using Alexa (BILD, 2019).&lt;/p&gt;
&lt;h3&gt;&lt;b&gt;Recommendations for Designers and Policymakers&lt;/b&gt;&lt;/h3&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;Through these deceptive design patterns, VIs can obstruct and control information according to the preferences of the platform. This can result in a heightened impact on people with less experience with technology. Presently, profitability is a key driving factor for development and design of VI products. There is more importance given to data-based and technical approaches, and interfaces are often conceptualized by people with technical expertise with lack of inputs from designers at the early stages (Naidu, 2022). Designers also focus more on the usability and functionality of the interfaces by enabling personalization, but are often not as sensitive to safeguarding the rights of individuals using them. In order to tackle deceptive design, designers must work towards prioritizing ethical practice, and building in more agency and control for people who use VIs.&lt;/p&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;Many of the potential deceptive design patterns can be addressed by designing for accessibility and inclusivity in a privacy preserving manner. This includes vetting third-party apps, providing opt-outs, and clearly communicating privacy notices. Privacy implications can also be prompted by the interface at the time of taking actions. There should be clear notice mechanisms such as a prominent visual cue to alert people when a device is on and recording, along with an easy way to turn off the ‘always listening’ mode. The use of different voice outputs for third party apps can also signal to people about who they are interacting with and what information they would like to share in that context.&lt;/p&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;Training data that covers a diverse population should be built for more inclusivity. A linear and time-efficient architecture is helpful for people with cognitive disabilities. But, this linearity can be offset by adding conversational markers that let the individual know where they are in the conversation (Pearl, 2016). This could address discoverability as well, allowing people to easily switch between different steps. Speech-only interactions can also allow people with vision loss to access the interface with clarity.&lt;/p&gt;
&lt;p class="public-DraftStyleDefault-text-ltr fixed-tab-size public-DraftStyleDefault-block-depth0 bCMSCT yMZv8w lnyWN OZy-3 bCMSCT Y9Dpf xVISr" style="text-align: justify; "&gt;A number of policy documents including the 2019 version of India’s Personal Data Protection Bill, emphasize on the need for privacy by design. But, they do not mention how deceptive design practices could be identified and avoided, or prescribe penalties for using these practices (Naidu, Sheshadri, Mohandas, and Bidare, 2020). In the case of VI particularly, there is a need to look at it as biometric data that is being collected and have related regulations in place to prevent harm to users. In terms of accessibility as well, there could be policies that require not just websites but also apps (including voice based apps) to be compliant with international accessibility guidelines , and to conduct regular audits to ensure that the apps are meeting the accessibility threshold.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/deceptive-design-in-voice-interfaces-impact-on-inclusivity-accessibility-and-privacy'&gt;https://cis-india.org/internet-governance/blog/deceptive-design-in-voice-interfaces-impact-on-inclusivity-accessibility-and-privacy&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Saumyaa Naidu and Shweta Mohandas</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2023-08-08T15:22:51Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>




</rdf:RDF>
