<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">




    



<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 11 to 25.</description>
  
  
  
  
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/events/roundtable-on-artificial-intelligence-and-healthcare"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/curating-genderlog-indias-twitter-handle"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/society-5-0-and-artificial-intelligence-with-a-human-face"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/owasp-seasides-conference"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india"/>
        
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data">
    <title>The Wolf in Sheep's Clothing: Demanding your Data</title>
    <link>https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data</link>
    <description>
        &lt;b&gt;The increasing digitalization of the economy and ubiquity of the Internet, coupled with developments in Artificial Intelligence (AI) and Machine Learning (ML), have given rise to transformational business models across several sectors.&lt;/b&gt;
        
&lt;p&gt;This piece was originally published in &lt;a class="external-link" href="https://telecom.economictimes.indiatimes.com/tele-talk/the-wolf-in-sheep-s-clothing-demanding-your-data/4497"&gt;The Economic Times Telecom&lt;/a&gt; on 8 September 2020.&lt;/p&gt;
&lt;p&gt;The increasing digitalization of the economy and ubiquity of the &lt;a href="https://telecom.economictimes.indiatimes.com/tag/internet"&gt;Internet&lt;/a&gt;, coupled with developments in &lt;a href="https://telecom.economictimes.indiatimes.com/tag/artificial+intelligence"&gt;Artificial Intelligence&lt;/a&gt; (AI) and Machine Learning (ML), have given rise to transformational business models across several sectors. These developments have changed the very structure of existing sectors, with a few dominant firms straddling many sectors. The position of these firms is entrenched by the large amounts of data they hold, their use of sophisticated algorithms that deliver highly targeted services and content, and their global nature.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Such data-based network businesses are generally multi-sided platforms subject to network effects and winner-takes-all dynamics, often making traditional competition regulation inappropriate. In addition, there has been concern that such companies hurt competition because they own large amounts of data collected globally, the very basis on which new services are predicated. Also, since users are reluctant to share their data across multiple platforms, new companies find it very challenging to emerge. Several of the large companies are of US origin. Regions and countries such as the EU, the UK, and India are concerned that while these companies benefit from the data of their citizens or their &lt;a href="https://telecom.economictimes.indiatimes.com/tag/devices"&gt;devices&lt;/a&gt;, SMEs and other companies in their own countries find it increasingly difficult to remain viable or achieve scale. With the objective of supporting enterprises, including SMEs, in their own countries, Europe, the UK, and India are at different stages of data regulation initiatives.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;In India, the &lt;a href="https://telecom.economictimes.indiatimes.com/tag/personal+data+protection"&gt;Personal Data Protection&lt;/a&gt; (PDP) Bill, 2019 deals with the framework for collecting, managing and transferring the personal data of Indian citizens, including mandating the sharing of anonymized data of individuals and of non-personal data for better targeting of services or policy making. In addition, the Report by the Committee of Experts (CoE) on Non-Personal Data (NPD) came up with a Framework for Regulating NPD. Since the NPD Report is more recent, this article analyzes some aspects of it.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;According to the CoE, non-personal data could be of two types: first, data or information that was never about an individual (e.g. weather data); second, data or information that was once related to an individual (e.g. a mobile number) but has ceased to be identifiable due to the removal of certain identifiers through the process of ‘anonymisation’. However, it may be possible to recover personal data from such anonymized data, and therefore the distinction between personal and non-personal data is not clean. In any case, the PDP Bill, 2019 deals with personal data. If the CoE felt that some aspects of personal data (including anonymized data) were not adequately dealt with, it should work to strengthen that framework. The current approach of the CoE is bound to create confusion and overlapping jurisdiction. Since anonymized data is required to be shared, there are disincentives to anonymization, causing greater risk to individual privacy.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;A new class of business based on a “&lt;em&gt;horizontal classification cutting across different industry sectors&lt;/em&gt;” is defined. This refers to any business that derives “&lt;em&gt;new or additional economic value from data, by collecting, storing, processing, and managing data&lt;/em&gt;” above a certain threshold of data collected/processed, to be defined by the regulatory authority outlined in the report. The CoE also recommends that “&lt;em&gt;Data Businesses will provide, within India, open access to meta-data and regulated access to the underlying data&lt;/em&gt;” without any remuneration. Further, “&lt;em&gt;By looking at the meta-data, potential users may identify opportunities for combining data from multiple Data Businesses and/or governments to develop innovative solutions, products and services. Subsequently, data requests may be made for the detailed underlying data&lt;/em&gt;”.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;With increasing digitalization, today almost every business is a data business. The problem with such a categorization lies in the definition of thresholds. It is likely that even a small video-sharing app or an AR/VR app would store, collect, process and transmit more data by volume than, say, a mid-sized bank. Further, with the increasing embedding of &lt;a href="https://telecom.economictimes.indiatimes.com/tag/iot"&gt;IoT&lt;/a&gt; in various aspects of our lives and businesses (smart manufacturing, logistics, banking, etc.), the amount of data captured by even small entities can be huge.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;The private sector, driven by profitability, identifies innovative business models, risks capital, and finds unique ways of capturing and melding different data sets. To sustain economic growth, such innovation is necessary. The private sector would also like legal protection over these aspects of its businesses, including the unique IPR that may be embedded in the processing of data or in its business processes. But the onerous sharing requirements mandated by the CoE are going to kill any private initiative. Any regulatory regime must balance the need to provide a secure environment protecting incumbents' data against the need to make that data available to SMEs and other businesses.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Metadata provides insights into a company's databases and processes; these are a source of competitive advantage for any company, and metadata is not without context. The basis for demanding such disclosure rests with the proposed NPD Regulator, who would evaluate the purpose of each request. In practice, purposes are open to interpretation, and the structure of the appeal mechanism is likely to stall any such sharing. Would such sharing mandates not interfere with existing Intellectual Property Rights? Or with the freedom to contract? Any innovation could easily be made available to a competitor that front-ends itself with a start-up. To mandate making such data available would not be fair. Further, how would the NPD Regulator even ensure that such data is used for the purpose (which the proposed regulator is supposed to evaluate) for which it is sought? In Europe, where such &lt;a href="https://telecom.economictimes.indiatimes.com/tag/data+sharing"&gt;data sharing&lt;/a&gt; mandates are being considered, the focus is on public data. For private entities, sharing is largely based on voluntary contributions. Compulsory sharing is mandated only in restricted situations where market failures are not addressed through the Competition Act, and provided the legitimate interests of the data holder and existing legal provisions are taken into account.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Further, the compliance requirements for such Data Businesses are very onerous and make a mockery of the government's own “minimum government” framework. The CoE recommends that all Data Businesses, whether government, NGO, or private, are “&lt;em&gt;to disclose data elements collected, stored and processed, and data-based services offered&lt;/em&gt;”. As if this were not enough, the CoE further recommends that “&lt;em&gt;Every Data Business must declare what they do and what data they collect, process and use, in which manner, and for what purposes (like disclosure of data elements collected, where data is stored, standards adopted to store and secure data, nature of data processing and data services provided). This is similar to disclosures required by pharma industry and in food products&lt;/em&gt;”. Such disclosures are necessary in those industries because the companies in them deal with critical aspects of human life. But are such requirements necessary for all activities and businesses? As long as organizations collect and process data legally, within sectoral regulation, why should such information have to be “reported”? Such bureaucratic processes and reporting requirements will only burden existing legitimate businesses and give rise to a thriving regulatory license raj.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Further questions that arise are: How is any 
compliance agency going to make sure that all the underlying metadata is
 made available in a timely manner? As companies respond to a dynamic 
environment, their analysis and analytical tools change and so does the 
metadata. This inherent aspect of businesses raises the question: At 
what point in time should companies make their meta-data available? How 
will the compliance be monitored?&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Conclusion: The CoE needs to create an enabling and facilitative environment for data sharing. The incentives for different types of entities to participate and contribute must be recognized. Adequate provisions for the risks and liabilities arising out of data sharing need to be thought through. National initiatives on data sharing should not create an onerous reporting regime, as envisaged by the CoE, even if digital.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p class="article-disclaimer"&gt;&lt;em&gt;DISCLAIMER:
 The views expressed are solely of the author and ETTelecom.com does not
 necessarily subscribe to it. ETTelecom.com shall not be responsible for
 any damage caused to any person/organisation directly or indirectly.&lt;/em&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data'&gt;https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Rekha Jain</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Data Protection</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-11-10T17:44:13Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy">
    <title>NITI Aayog Discussion Paper: An aspirational step towards India’s AI policy</title>
    <link>https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy</link>
    <description>
        &lt;b&gt;The National Strategy for Artificial Intelligence, a discussion paper on India’s path forward in AI, is a welcome step towards a comprehensive document that reflects the government's AI ambitions. The 115-page discussion paper attempts to be an all-encompassing document looking at a host of AI-related issues, including privacy, security, ethics, fairness, transparency and accountability.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/niti-aayog-discussion-paper"&gt;&lt;strong&gt;Download the Report&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The 115-page discussion paper attempts to be an all encompassing document looking at a host of AI related issues including privacy, security, ethics, fairness, transparency and accountability. The paper identifies five focus areas where AI could have a positive impact in India.&lt;/span&gt;&lt;span&gt; It also focuses on reskilling as a response to the potential problem of job loss due the future large-scale adoption of AI in the job market.&lt;/span&gt;&lt;span&gt; This blog is a follow up to the comments made by CIS on Twitter&lt;/span&gt;&lt;span&gt; on the paper and seeks to reflect on the National Strategy as a well researched AI roadmap for India. In doing so, it identifies areas that can be strengthened and built upon.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Identified Focus Areas for AI Intervention&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The paper identifies five focus areas—Healthcare, Agriculture, Education, Smart Cities and Infrastructure, Smart Mobility and Transportation, which Niti Aayog believes will benefit most from the use of AI in bringing about social welfare for the people of India.&lt;/span&gt;&lt;span&gt; Although these sectors are essential in the development of a nation, the failure to include manufacturing and services sectors is an oversight. Focussing on  manufacturing is fundamental not only in terms of economic development and user base, but also regarding questions of safety and the impact of AI on jobs and economic security. The same holds true for the service sector particularly since AI products are being made for the use of consumers, not just businesses. Use of AI in the services sector also raises critical questions about user privacy and ethics. Another sector the paper fails to include is defense, this is worrying since India is chairing the Group of Governmental Experts &lt;/span&gt;&lt;span&gt;on Lethal Autonomous Weapons Systems (LAWS) in 2018.&lt;/span&gt;&lt;span&gt; Across sectors, the report fails to look at how AI could be utilised to ensure accessibility and inclusion for the disabled. This is surprising, as  aid for the differently abled and accessibility technology was one of the 10 domains identified in the Task Force Report on AI published earlier this year. &lt;/span&gt;&lt;span&gt;This should have been a focus point in the paper as it  aims to identify applications with maximum social impact and inclusion.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;In its vision for the use of AI in smart cities, the&lt;/span&gt;&lt;span&gt; paper suggests the adoption of a sophisticated surveillance system as well as the use of social media intelligence platforms to check and monitor people’s movement both online and offline to maintain public safety.&lt;/span&gt;&lt;span&gt; This is at variance with constitutional standards of due process and criminal law principles of reasonable ground and reasonable suspicion. Further, use of such methods will pose issues of judicial inscrutability. From a rights perspective, state surveillance can directly interfere with fundamental rights including privacy, freedom of expression, and freedom of assembly. Privacy organizations around the world have raised concerns regarding the increased public surveillance through the use of AI.&lt;/span&gt;&lt;span&gt; Though the paper recognized the impact on privacy that such uses would have, it failed to set a strong and forward looking position on the issue - such as advocating that such surveillance must be lawful and inline with international human rights norms.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Harnessing the Power of AI and Accelerating Research&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One of the ways suggested for the proliferation of AI in India was to increase research, both core and applied, to bring about innovation that can be commercialised.&lt;/span&gt;&lt;span&gt; In order to attain this goal the paper proposes a two-tier integrated approach: the establishment of  COREs (Centres of Research Excellence in Artificial Intelligence) and ICTAI (International Centre for Transformational Artificial Intelligence).&lt;/span&gt;&lt;span&gt; However the roadmap to increase research in AI fails to acknowledge the principles of public funded research such as free and open source software (FOSS), open standards and open data. The report also blames the current Indian  Intellectual Property regime for being “unattractive” and averse to incentivising research and adoption of AI.&lt;/span&gt;&lt;span&gt; Section 3(k) of Patents Act exempts algorithms from being patented, and the Computer Related Inventions (CRI) Guidelines have faced much controversy over the patentability of mere software without a novel hardware component.&lt;/span&gt;&lt;span&gt; The paper provides no concrete answers to the question of whether it should be permissible to patent algorithms, and if yes, to  to what extent. Furthermore, there needs to be a standard either in the CRI Guidelines or the Patent Act, that distinguishes between AI algorithms and non-AI algorithms. Additionally, given that there is no historical precedence on the requirement of patent rights to incentivise creation of AI,  innovative investment protection mechanisms that have lesser negative externalities, such as compensatory liability regimes&lt;/span&gt;&lt;span&gt; would be more desirable.  The report further failed to look at the issue holistically and recognize that facilitating rampant patenting can form a barrier to smaller companies from using or developing  AI. 
This is important to be cognizant of given the central role of startups to the AI ecosystem in India and because it can work against the larger goal of inclusion articulated by the report.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Ethics, Privacy, Security and Safety&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;In a positive step forward, the paper addresses a broader range of ethical issues concerning AI including transparency, fairness, privacy and security and safety in more detail when compared to the earlier report of the Task Force.&lt;/span&gt;&lt;span&gt; Yet despite a dedicated section covering these issues, a number of concerns still remain unanswered.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Transparency&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The section on transparency and opening the Black Box has several lacunae.&lt;/span&gt;&lt;span&gt; First, AI that is used by the government, to an acceptable extent, must be available in the public domain for audit, if not under Free and Open Source Software (FOSS). This should hold true in particular for uses that impinge on fundamental rights. Second, if the AI is utilised in the private sector, there currently exists a right to reverse engineer within the Indian Copyright Act,&lt;/span&gt;&lt;span&gt; which is not accounted for in the paper. Furthermore, if the AI was involved both in the commission of a crime or the violation of human rights, or in the investigations of such transgressions, questions with regard to judicial scrutability of the AI remain. In addition to explainability, the source code must be made circumstantially available, since explainable AI&lt;/span&gt;&lt;span&gt; alone cannot solve all the problems of transparency. In addition to availability of source code and explainability, a greater discussion is needed about the tradeoff between a complex and potentially more accurate AI system (with more layers and nodes)  vs. an AI system which is potentially not as accurate but is able to provide a human readable explanation.&lt;/span&gt;&lt;span&gt; It is interesting to note that transparency within human-AI interaction is absent in the paper. Key questions on transparency, such as whether an AI should disclose its identity to a human have not been answered.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Fairness&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;With regards to fairness, the paper mentions how AI can amplify bias in data and create unfair outcomes.&lt;/span&gt;&lt;span&gt; However, the paper neither suggests detailed or satisfactory solutions nor does it deal with biased historical data in an Indian context. More specifically, there seems to be no mention of regulatory tools to tackle the problem of fairness, such as:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span&gt;Self-certification&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Certification by a self-regulatory body&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Discrimination impact assessments&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Investigations by the privacy regulator &lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span&gt;Such tools will proactively need to ensure&lt;/span&gt;&lt;span&gt; inclusion, diversity, and equity in composition and decisions.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Additionally, with reference to correcting bias in AI, it should be noted that the technocratic view that as an AI solution continues to be trained on larger amounts of data  , systems will self correct, does not fully recognize the importance of data quality and data curation, and is inconsistent with fundamental rights. Policy objectives of AI innovation must be technologically nuanced and cannot be at the cost of intermediary denial of rights and services.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Further, the paper does not deal with issues of multiple definitions and principles of fairness, and that building definitions into AI systems may often involve choosing one definition over the other. For instance, it can be argued that the set of AI ethical principles articulated by Google&lt;/span&gt;&lt;span&gt; are more consequentialist in nature involving a a cost-benefit analysis, whereas a human rights approach may be more deontological in nature. In this regard, there is a need for interdisciplinary research involving computer scientists, statisticians, ethicists and lawyers.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Privacy&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Though the paper underscores the importance of privacy and the need for a privacy legislation in India - the paper limits the potential privacy concerns arising from AI to collection, inappropriate use of data, personal discrimination, unfair gain from insights derived from consumer data  (the solution being to explain to consumers about the value they as consumers gain from this), and unfair competitive advantage by collecting mass amounts of data (which is not directly related to privacy).&lt;/span&gt;&lt;span&gt; In this way the paper fails to discuss the full implications on privacy that AI might have and fails to address the data rights necessary to enable the right to privacy in a society where AI is pervasive. The paper fails to engage with emerging principles from data protection such as right to explanation and right to opt-out of automated processing, which directly relate to AI. Further, there is no discussion on the issues such as data minimisation and purpose limitation which some big data and AI proponents argue against. To that extent, there is a lack of appreciation of the difficult policy questions concerning privacy and AI. The paper is also completely silent on redress and remedy.  
Further the paper endorses the seven data protection principles postulated by the Justice Srikrishna Committee.&lt;/span&gt;&lt;span&gt; However CIS has pointed out that these principles are generic and not specific to data protection.&lt;/span&gt;&lt;span&gt; Moreover, the law chapter of IEEE’s ‘&lt;/span&gt;&lt;em&gt;&lt;span&gt;Global Initiative on Ethics of Autonomous and Intelligent Systems’&lt;/span&gt;&lt;/em&gt;&lt;span&gt; has been ignored in favor of the chapter on ‘&lt;/span&gt;&lt;em&gt;&lt;span&gt;Personal Data and Individual Access Control in Ethically Aligned Design&lt;/span&gt;&lt;/em&gt;&lt;span&gt;’&lt;/span&gt;&lt;span&gt; as the recommended international standard.&lt;/span&gt;&lt;span&gt; Ideally, both chapters should be recommended for a holistic approach to the issue of ethics and privacy with respect to AI. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;AI Regulation and Sectoral Standards&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The discussion paper’s approach towards sectoral regulation advocates collaboration with industry to formulate regulatory frameworks for each sector.  However, the paper is silent on the possibility of reviewing existing sectoral regulation to understand if they require amending. We believe that this is an important solution to consider since amending existing regulation and standards often takes less time than formulating and implementing new regulatory frameworks.&lt;/span&gt;&lt;span&gt; Furthermore, although the emphasis on awareness in the paper is welcome, it must complement regulation and be driven by all stakeholders, especially given India’s limited regulatory budget. The over reliance on industry self-regulation, by itself, is not advisable, as there is an absence of robust industry governance bodies in India and self-regulation raises questions about the strength and enforceability of such practices. The privacy debate in India has recognized this and reports, like the Report of the Group of Experts on Privacy, recommend a co-regulatory framework with industry developing binding standards that are inline with the national privacy law and that are approved and enforced by the Privacy Commissioner.&lt;/span&gt;&lt;span&gt; That said, the UN Guiding Principles on Business and Human Rights and its “protect, respect, and remedy” framework should guide any self regulatory action.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Security and Safety of AI Systems&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;In terms of security and safety of AI systems the paper seeks to shift the discussion of accountability being primarily about liability, to that of one about the  explainability of AI.&lt;/span&gt;&lt;span&gt; Furthermore, there is no recommendation of immunities or incentives for whistleblowers or researchers to report on privacy breaches and vulnerabilities. The report also does not recognize certain uses of AI as being more critical than others because of their potential harm to the human. This would include uses in healthcare and autonomous transportation. A key component of accountability in these sectors will be the evolution of appropriate testing and quality assurance standards. Only then, should safe harbours be discussed as an extension of the negligence test for damages caused by AI software. Additionally, the paper fails to recommend kill switches, which should be mandatory for all kinetic AI systems.&lt;/span&gt;&lt;span&gt; Finally, there is no mention of mandatory human-in-the-loop in all systems where there are significant risks to safety and human rights. Autonomous AI is only viewed as an economic boost, but its potential risks have not been explored sufficiently. A welcome recommendation would be for all autonomous AI to go through human rights impact assessments.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Research and Education&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Being a government think-tank, the NITI Aayog could have dealt in detail with the AI policies of the government and looked at how different arms of the government are aiming to leverage AI and tackle the problems arising out of the use of AI. Instead of tabulating the government’s role in each area and especially research, the report could have also listed out the various areas where each department could play a role in the AI ecosystem through regulation, education, funding research etc. In terms of the recommendations for introducing AI curriculums in schools, and colleges,&lt;/span&gt;&lt;span&gt; the government could also ensure that ethics and rights are  part of the curriculum - especially in technical institutions. A possible course of action could include corporations paying for a pan-Indian AI education campaign.This would also require the government to formulate the required academic curriculum that is updated to include rights and ethics. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Data Standards and Data Sharing&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Based on the amount of data the Government of India collects through its numerous schemes, it has the potential to be the largest aggregator of data specific to India. However, the paper does not consider the use of this data with enough gravity. For example, the paper recommends Corporate Data Sharing for “social good” and making government datasets from the social sector publicly available.&lt;/span&gt;&lt;span&gt; Yet this section does not mention privacy-enhancing technologies and standards such as pseudonymization, anonymization standards, differential privacy, etc. Additionally, there should be provisions that allow the government to prevent the formation of monopolies by stopping companies from hoarding user data. The open data standards could also apply to private companies, so that they too can share their data in compliance with the privacy-enhancing technologies mentioned above. The paper also acknowledges that AI Marketplaces require monitoring and maintenance of quality. It recognises the need for “continuous scrutiny of products, sellers and buyers”&lt;/span&gt;&lt;span&gt;, and proposes that the government enable these regulations in a manner that allows private players to set up the marketplace. This is a welcome suggestion, but the legal and ethical framework of the AI Marketplace requires further discussion and clarification.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;An AI Garage for Emerging Economies&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The discussion paper also qualifies India as an “ideal test-bed”&lt;/span&gt;&lt;span&gt; for trying out AI-related solutions. This is problematic, since questions of AI regulation in India have yet to be legally clarified and defined, and India does not have a comprehensive privacy law. Without a strong ethical and regulatory framework, the use of new and possibly untested technologies in India could lead to unintended and possibly harmful outcomes. The government's ambition to position India as a leader amongst developing countries on AI-related issues should not be achieved by using Indians as test subjects for technologies whose effects are unknown.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;In conclusion, NITI Aayog’s discussion paper represents a welcome step towards a comprehensive AI strategy for India. However, the trend of inconspicuously releasing reports (this paper and that of the AI Task Force), as well as the lack of a call for public comments, seems the wrong way to foster discussion on emerging technologies that will be as pervasive as AI.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The blanket recommendations are provided without examining their viability in each sector.&lt;/span&gt;&lt;span&gt; Furthermore, the discussion paper does not sufficiently explore, or at times completely omits, key areas. It barely touches upon the societal, cultural and sectoral challenges to the adoption of AI — research that CIS is currently in the process of undertaking.&lt;/span&gt;&lt;span&gt; Future reports on Indian AI strategy should pay more attention to the country’s unique legal context and to possible defense applications, and take the opportunity to establish a forward-looking, human-rights-respecting, and holistic position in global discourse and developments. Reports should also consider infrastructure investment as an important prerequisite for AI development and deployment. Digitised data and connectivity, as well as more basic infrastructure such as rural electricity and well-maintained roads, require more funding to more successfully leverage AI for inclusive economic growth. Although there are important concerns, the discussion paper is an aspirational step toward India’s AI strategy.&lt;/span&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy'&gt;https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Sunil Abraham, Elonnai Hickok, Amber Sinha, Swaraj Barooah, Shweta Mohandas, Pranav M Bidare, Swagam Dasgupta, Vishnu Ramachandran and Senthil Kumar</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2018-06-13T13:08:47Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age">
    <title>Ethical Data Design Practices in the AI (Artificial Intelligence) Age</title>
    <link>https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age</link>
    <description>
        &lt;b&gt;Shweta Mohandas was a panelist at a discussion on Ethical Data Design Practices in the AI (Artificial Intelligence) Age, organised by Startup Grind, Bangalore, on July 28, 2018 at NUMA Bangalore.&lt;/b&gt;
        &lt;h2&gt;Agenda&lt;/h2&gt;
&lt;p&gt;&lt;b&gt;Ethical Data Design Practices in the AI Age&lt;/b&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;The panel discussion explored the challenges we face when designing the user experiences of the complex behavioural agents that increasingly run our lives.&lt;/p&gt;
&lt;p dir="ltr"&gt;Discussion centred around how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Understand current thinking by the AI community on ethics and morality in computing and the challenges it presents. &lt;/li&gt;
&lt;li&gt;Explore examples of the ethical choices that products make now and will make in the near future.&lt;/li&gt;
&lt;li&gt;Learn how designers might approach designing experiences that face moral dilemmas.&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age'&gt;https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-08-01T23:14:21Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines">
    <title>Ethics and Human Rights Guidelines for Big Data for Development Research</title>
    <link>https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines</link>
    <description>
        &lt;b&gt;This is a four-part review of guideline documents for ethics and human rights in big data for development research. This research was produced as part of the Big Data for Development network supported by the International Development Research Centre, Canada.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4&gt;Part #1 - Review of Principles of Ethics in Biomedical Science: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/biomedicalscience" class="internal-link" title="CIS_BD4D_Guideline01_MS+AS_BiomedicalScience PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;h4&gt;Part #2 - Review of Principles of Ethics in Computer Science: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/computerscience" class="internal-link" title="CIS_BD4D_Guideline02_RS+AS_ComputerScience PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;h4&gt;Part #3 - Summary of Review of Codes of Ethics for Big Data and AI: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/AIEthicsReview" class="internal-link" title="CIS_BD4D_Guideline03_AS+PT_BigDataAIEthicsReview_SummaryNotes PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;h4&gt;Part #4 - Extended Review of Codes of Ethics for Big Data and AI: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/ExtendedNotes" class="internal-link" title="CIS_BD4D_Guideline04_PT+PB_BigDataAIEthicsReview_ExtendedNotes PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;hr /&gt;
&lt;p&gt;The rapid expansion in the volume, velocity, and variety of data available, together with the development of innovative forms of statistical analytics, is generally referred to as “big data”, though there is no single agreed-upon definition of the term. Big data promises to provide new insights and solutions across a wide range of sectors. Despite enormous optimism about the scope and variety of big data’s potential applications, many remain concerned about its widespread adoption, with some scholars suggesting it could generate as many harms as benefits. The predecessor disciplines of data science, such as computer science, applied mathematics, and statistics, have traditionally managed to stay out of the scope of ethical frameworks, based on the assumption that they do not involve humans as subjects of their research. While critical study into big data is still in its infancy, there is a growing belief that there are significant discontinuities between the rapid growth in big data and the ethical framework that exists to govern its use. In this set of documents, we examine these ethical frameworks in detail.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines'&gt;https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amber Sinha, Manjri Singh, Rajashri Seal, Pranav Bhaskar Tiwari, Pranav M Bidare</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>BD4D</dc:subject>
    
    
        <dc:subject>RAW Research</dc:subject>
    
    
        <dc:subject>Big Data for Development</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-05-20T07:56:48Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad">
    <title>New intermediary guidelines: The good and the bad </title>
    <link>https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad</link>
    <description>
        &lt;b&gt;In pursuance of the government releasing the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, this blogpost offers a quick rundown of some of the changes brought about by the Rules, and how they line up with existing principles of best practice in content moderation, among others.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;This article originally appeared in the Down to Earth &lt;a class="external-link" href="https://www.downtoearth.org.in/blog/governance/new-intermediary-guidelines-the-good-and-the-bad-75693"&gt;magazine&lt;/a&gt;. Reposted with permission.&lt;/p&gt;
&lt;p&gt;-------&lt;/p&gt;
&lt;p&gt;The Government of India has notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These rules operate in supersession of the existing intermediary liability rules under the Information Technology (IT) Act, made back in 2011.&lt;/p&gt;
&lt;p&gt;These IL rules would have a significant impact on our relationships with internet ‘intermediaries’, i.e. the gatekeepers and gateways to the internet, including social media platforms and communication and messaging channels.&lt;/p&gt;
&lt;p&gt;The rules also make a bid to include entities that have not traditionally been considered ‘intermediaries’ within the law, including curated-content platforms such as Netflix and Amazon Prime as well as digital news publications.&lt;/p&gt;
&lt;p&gt;These rules are a significant step up from the draft version of the amendments floated by the Union government two years ago; in this period, the relationship between governments around the world and major intermediaries has changed significantly.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The insistence of these entities in the past that they are not ‘arbiters of truth’, for instance, has not always held water in their own decision-making.&lt;/p&gt;
&lt;p&gt;Both Twitter and Facebook, for instance, have locked the former United States president Donald Trump out of their platforms. Twitter has also resisted fully complying with government censorship requests in India, spilling into an interesting policy tussle between the two entities. It is in the context of these changes, therefore, that we must consider the new rules.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What changed for the good?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One of the immediate standouts of these rules is the more granular way in which they approach the problem of intermediary regulation. The previous draft — and in general the entirety of the law — had continued to treat ‘intermediaries’ as a monolithic entity, entirely definable by section 2(w) of the IT Act, which in turn derived much of its legal language from the EU E-commerce Directive of 2000.&lt;/p&gt;
&lt;p&gt;Intermediaries in the directive were treated more like ‘simple conduits’ or dumb, passive carriers who did not play any active role in the content. While that might have been the truth of the internet when these laws and rules were first enacted, the internet today looks much different.&lt;/p&gt;
&lt;p&gt;Not only is there a diversification of services offered by these intermediaries, there’s also a significant issue of scale, wielded by a few select players, either by centralisation or by the sheer number of user bases. A broad, general mandate would, therefore, miss out on many of these nuances, leading to imperfect regulatory outcomes.&lt;/p&gt;
&lt;p&gt;The new rules, therefore, envisage three types of entities:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;There are the ‘intermediaries’ within the traditional section 2(w) meaning of the IT Act. This is the broad umbrella term for all entities that fall within the ambit of the rules.&lt;/li&gt;&lt;li&gt;There are the ‘social media intermediaries’ (SMIs): entities that enable online interaction between two or more users.&lt;/li&gt;&lt;li&gt;The rules also identify ‘significant social media intermediaries’ (SSMIs), meaning entities with user thresholds as notified by the Central Government.&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;The levels of obligation vary based on this hierarchy of classification. For instance, an SSMI would be held to a much higher standard of transparency and accountability towards its users. It would have to publish six-monthly transparency reports outlining how it dealt with requests for content removal, how it deployed automated tools to filter content, and so on.&lt;/p&gt;
&lt;p&gt;I have previously argued that transparency reports, when done well, are an excellent way of understanding the breadth of government and social media censorship. Legally mandating this is then perhaps a step in the right direction.&lt;/p&gt;
&lt;p&gt;Some other requirements under this transparency principle include giving notice to users whose content has been disabled, allowing them to contest such removal, etc.&lt;/p&gt;
&lt;p&gt;One of the other rules from the older draft that had raised a significant amount of concern was the proactive filtering mandate, under which intermediaries were effectively required to filter for all unlawful content. This was problematic on two counts:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;Developments in machine learning technologies are simply not advanced enough to make this possible, which means there would always be a chance that legitimate and legal content would get censored, leading to a general chilling effect on digital expression.&lt;/li&gt;&lt;li&gt;The technical and financial burden this would impose on intermediaries would have impacted competition in the market.&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;The new rules seem to have lessened this burden: first, by reducing it from a mandatory requirement to a best-endeavour basis; and second, by narrowing the ambit of ‘unlawful content’ to only content depicting sexual abuse, child sexual abuse material (CSAM), and duplicates of content already disabled or removed.&lt;/p&gt;
&lt;p&gt;This specificity would be useful for better deployment of such technologies, since previous research has shown that it’s considerably easier to train a machine learning tool on a corpus of CSAM or abuse than on more contextual, subjective matters such as hate speech.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What should go?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;That being said, it is concerning that the new rules choose to bring online curated content platforms (OCCPs) within the ambit of the law by proposals of a three-tiered self-regulatory body and schedules outlining guidelines about the rating system these entities should deploy.&lt;/p&gt;
&lt;p&gt;In the last two years, several attempts have been made by the Internet and Mobile Association of India (IAMAI), an industry body consisting of representatives of these OCCPs, to bring about a self-regulatory code that fills in the supposed regulatory gap in the Indian law.&lt;/p&gt;
&lt;p&gt;It is not known if these stakeholders were consulted before the enactment of these provisions. Some of this framework would also apply to publishers of digital news portals.&lt;/p&gt;
&lt;p&gt;Noticeably, this entire chapter was also missing from the old draft, and introducing it in the final form of the law without due public consultations is problematic.&lt;/p&gt;
&lt;p&gt;Part III and onwards of the rules, which broadly deal with the regulation of these entities, therefore, should be put on hold and opened up for a period of public and stakeholder consultations to adhere to the true spirit of democratic participation.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The author would like to thank Gurshabad Grover for his editorial suggestions.&amp;nbsp;&lt;/em&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad'&gt;https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>TorShark</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>IT Act</dc:subject>
    
    
        <dc:subject>Intermediary Liability</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Censorship</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2021-03-15T13:52:46Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/events/roundtable-on-artificial-intelligence-and-healthcare">
    <title>Roundtable on Artificial Intelligence &amp; Healthcare</title>
    <link>https://cis-india.org/internet-governance/events/roundtable-on-artificial-intelligence-and-healthcare</link>
    <description>
        &lt;b&gt;Centre for Internet &amp; Society (CIS) is organizing a roundtable on artificial intelligence (AI) and healthcare at 'The Energy and Resources Institute' (TERI) in Bengaluru on November 30, 2017 from 2 p.m. to 5 p.m. The roundtable seeks to discuss the various issues and challenges surrounding the implementation of AI and related technologies in the Indian healthcare sector.&lt;/b&gt;
&lt;p style="text-align: justify; "&gt;The Indian healthcare industry, powered by Artificial Intelligence, is moving into a new era of increased innovation and independence. With multiple new healthcare start-ups and large ICT companies such as Microsoft, IBM, and Google offering AI solutions to healthcare challenges in the country, it is evident that AI is being used to enhance the accessibility, affordability, quality and awareness of healthcare in India. Major target areas sought to be enhanced by use of AI in healthcare include addressing the uneven ratio of skilled doctors to patients and making doctors more efficient at their jobs, delivery of personalized and high-quality healthcare to rural areas, and training doctors and nurses in complex procedures.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Through the application of machine learning, data mining, natural language processing (NLP), and advanced analytics, AI can help doctors in the speedy diagnosis of diseases. AI is also mobilised, in various forms, as ‘smart advisors’ or virtual humans capable of making informed decisions by better comprehending data and information through sensing interfaces and analytics.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some of these forms include ‘customer service agents’ that can expedite simple tasks like appointment scheduling, or more complex decisions like selecting health plan benefits, ‘clinicians’ that can help with primary screening in understaffed rural areas possibly substituting for human labour, and ‘cognitive agents’ that can efficiently manage existing clinical knowledge alongside physicians, nurses and researchers, thereby reducing the cognitive load on humans. AI based Indian healthcare start-ups such as SigTuple, Aindra, Ten3T, Touchkin and many others are offering a range of solutions including automation of medical diagnosis, automated analysis of medical tests, detection and screening of diseases, wearable sensor based medical devices and monitoring equipment, patient management systems, predictive healthcare diagnosis and disease prevention.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, AI in healthcare raises many potential concerns, a common one being the lack of comprehensive, representative, interoperable, and clean data - a challenge that is beginning to be addressed through the Electronic Health Records Standards developed by the Ministry of Health and Family Welfare in 2016. Other major challenges include patient adoption and the need for personal interaction with doctors, concerns over mass-scale job losses, distrust in technology, and ethical concerns.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It is imperative to note that implementing AI in healthcare, which is bound to disrupt it, does not imply replacing doctors but augmenting their efforts to create a more efficient healthcare landscape in the country. A harmonious collaboration of man and machine is expected to bring about a meaningful and long-lasting impact and stakeholders should be prepared to adapt to this change and the challenges that come with it.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 style="text-align: justify; "&gt;Roundtable Agenda&lt;/h3&gt;
&lt;p dir="ltr"&gt;&lt;span&gt;Thursday, November 30, 2017, 2:00pm - 5:00pm &lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;span&gt;2:00 - 2:30: Introduction and setting the scene &lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;span&gt;2:30 - 3:30: Discussion on the AI landscape in health in India: &lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span&gt;Manner and extent of integration of AI into products/services of healthcare companies.&lt;/span&gt;&lt;span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Relevant stakeholders and their roles in implementing AI into products/services of healthcare companies.&lt;/span&gt;&lt;span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Future of AI and related technologies in the healthcare sector&lt;/span&gt;&lt;span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;3:30 - 4:30: Discussion on challenges and solutions towards regulating AI in India: &lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li dir="ltr" style="list-style-type:disc; "&gt;&lt;span&gt;Challenges faced in the conception and implementation of the AI product/service, and reasons for such challenges.&lt;/span&gt;&lt;span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li dir="ltr" style="list-style-type:disc; "&gt;&lt;span&gt;Regulatory provisions for implementation of AI in healthcare products/services under the existing laws, and need for reforms.&lt;/span&gt;&lt;span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li dir="ltr" style="list-style-type:disc; "&gt;&lt;span&gt;Challenges posed by AI to existing policy and regulatory frameworks in the Indian as well as the global context, and possible solutions. &lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/a-i-and-manufacturing-and-services"&gt;Click to download the invite&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/events/roundtable-on-artificial-intelligence-and-healthcare'&gt;https://cis-india.org/internet-governance/events/roundtable-on-artificial-intelligence-and-healthcare&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Event</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Healthcare</dc:subject>
    

   <dc:date>2018-01-02T13:49:14Z</dc:date>
   <dc:type>Event</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/curating-genderlog-indias-twitter-handle">
    <title>Curating Genderlog India's Twitter handle</title>
    <link>https://cis-india.org/internet-governance/news/curating-genderlog-indias-twitter-handle</link>
    <description>
        &lt;b&gt;Shweta Mohandas has been nominated to curate Genderlog's Twitter handle (@genderlogindia).&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Shweta Mohandas &lt;span&gt;will be tweeting about topics related to gender and data, more specifically around AI, big data, privacy and surveillance. To view the tweets, &lt;a class="external-link" href="https://twitter.com/genderlogindia/status/1127892055231873024"&gt;click here&lt;/a&gt;.&lt;/span&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/curating-genderlog-indias-twitter-handle'&gt;https://cis-india.org/internet-governance/news/curating-genderlog-indias-twitter-handle&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Big Data</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2019-05-14T14:40:08Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/society-5-0-and-artificial-intelligence-with-a-human-face">
    <title>Society 5.0 and Artificial Intelligence with a Human Face</title>
    <link>https://cis-india.org/internet-governance/news/society-5-0-and-artificial-intelligence-with-a-human-face</link>
    <description>
        &lt;b&gt;On 10 May 2019 Radhika Radhakrishnan attended a stakeholders' roundtable consultation on "Society 5.0 and Artificial Intelligence with a Human Face", organized by the Indian Council for Research on International Economic Relations (ICRIER) at the India Habitat Centre, New Delhi. The event aimed to chart a roadmap for India’s participation at the G20 under the Japanese Presidency.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The agenda can be &lt;a class="external-link" href="http://icrier.org/newsevents/seminar-details/?sid=460"&gt;found here&lt;/a&gt;. Radhika's inputs were primarily focused on the feminist and gender implications of publicly deployed AI models, challenges and opportunities for academic AI-focused research in the Global South, recommendations for AI capacity building and skilling in the Global South, and regulation of black-box AI.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/society-5-0-and-artificial-intelligence-with-a-human-face'&gt;https://cis-india.org/internet-governance/news/society-5-0-and-artificial-intelligence-with-a-human-face&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-05-14T14:51:56Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative">
    <title>Artificial Intelligence and Data Initiative</title>
    <link>https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative</link>
    <description>
        &lt;b&gt;On 3 May 2019 Arindrajit Basu attended a meeting of the Artificial Intelligence and Data Initiative held at the IIC in Delhi. He is a member of the Working Group and is co-authoring a report with Anindya Chaudhuri of the Global Development Network on the prospect of collaborations in public uses of AI.&lt;/b&gt;
        &lt;p&gt;The agenda can be &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/artificial-intelligence-and-data-initiative"&gt;viewed here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative'&gt;https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-05-14T15:06:02Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light">
    <title>Insult to Kannada shows Google AI in a poor light</title>
    <link>https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light</link>
    <description>
        &lt;b&gt;A Google search for ‘the ugliest language in India’ yielded ‘Kannada’ as the answer late last week, causing widespread outrage.
&lt;/b&gt;
        &lt;p&gt;The article by Krupa Joseph was &lt;a class="external-link" href="https://www.deccanherald.com/metrolife/metrolife-your-bond-with-bengaluru/insult-to-kannada-shows-google-ai-in-a-poor-light-995307.html"&gt;published in Deccan Herald&lt;/a&gt; on June 8, 2021. Pranesh Prakash and Shweta Mohandas have been quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Google has since apologised, saying the answer does not reflect its views, but questions still remain about why this happened at all, and who drafted the answer.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“When artificial intelligence gets it wrong, things can go really wrong, says tech entrepreneur,”Hari Prasad Nadig, who has worked on Kannada in free and open source soft ware.“Usually, you would expect Google to give an answer based on citings from multiple sources,and at least one or two credible sources.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Google’s AI should be good enough not to draw answers from opinionated sources,” he says. Google shouldn’t even try to answer prejudiced questions like this in the first place, and the answer shows how flawed it is, he told Metrolife.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Usually, you would expect Google to give an answer based on citings from multiple sources, and at least one or two credible sources. Google’s AI should be good enough not to draw answers from opinionated sources,” he says. Google shouldn’t even try to answer prejudiced questions like this in the first place, and the answer shows how flawed it is, he told Metrolife.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Fallible process&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Pranesh Prakash, Centre for Internet and Society, Bengaluru, says the incident exposes the fallibility of the process by which Google selects its “featured snippets”.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“It is not an opinion that Google or its employees or its algorithms have come up with, but rather an existing opinion that Google wrongly amplified,” he says.It demonstrates that the snippets that Google features as ‘facts’ aren’t necessarily based on facts, he says.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Periodic checks&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Shweta Mohandas, researcher with the Center for Internet and Society, says Google does not create content, but only provides content that is available on the Internet.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Hence, the biases come from the tags, then used to train the AI. There should be periodic checks on the data fed into the system,” she says. Such blunders can be prevented if the tags and results are audited periodically, and a mechanism is put in place to enable people to report them, she says.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Who was upto mischief?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The answer was created on a financial services website whose owners aren’t revealing their names Pavanaja UB, CEO, Vishva Kannada Softech, says the answer was attributed to a website called debt consolidations questions.com — but he was unable to find this post anywhere on the site.“This is a website registered in Russia and it offers questions and answers on many topics. But this particular page could not be found. Maybe it was removed following the outrage,” he says. Pavanaja believes this was a deliberate attempt to upset people. “The website lists no information about the owner and gives no contact details. Even if such a question did exist on the page before, how did it get to the top of the Google search results?” he wonders.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He suggests that someone planted the answer and kept searching for it until it reached the top.“But who would take so much effort?” he says.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Furore and after&lt;/h3&gt;
&lt;p&gt;‘Kannada’ came up as an answer to a query in Google about ‘the ugliest language in India’.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Aravind Limbavali, minister for Kannada and Culture, demanded an apology from Google, and threatened legal action against the company “for maligning the image of our beautiful language.”&lt;/p&gt;
&lt;p&gt;Google removed the answer and issued a statement:&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“We know this is not ideal, but we take swift corrective action when we are made aware of an issue and are continually working to improve our algorithms. Naturally, these are not reflective of the opinions of Google, and we apologise for the misunderstanding and hurting any sentiments."&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light'&gt;https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Krupa Joseph</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2021-06-26T05:25:38Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond">
    <title>Fostering Strategic Convergence in US-India Tech Relations: 5G and Beyond</title>
    <link>https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond</link>
    <description>
        &lt;b&gt;The 2019 G-20 summit underscores the importance of fostering strategic convergence in U.S.-India tech relations.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Justin Sherman and Arindrajit Basu was &lt;a class="external-link" href="https://thediplomat.com/2019/07/fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond/"&gt;published in the Diplomat&lt;/a&gt; on July 3, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;As world leaders gathered for the G-20 summit in Osaka, Japan this past weekend, a multitude of issues from climate to trade to technology came to the fore. Much of the focus was on U.S.-China interactions at the summit, as the two nations are  locked in both a trade war and broader technological and geopolitical competition. Despite the present focus on the U.S. and China, however, it is crucial to not overlook another bilateral relationship of ever-growing importance in the process: The tech relationship between the United States and India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Certainly, the two countries have many disagreements on some technology issues. But this is a geopolitical relationship that is both strategically important for each country, and a vital opportunity for the two largest democracies in the world to collectively combat Chinese-style digital authoritarianism.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Huawei and 5G&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;First, with respect to national security and 5G roll-outs, the U.S and India are not on the same page. The United States, for several months now, has been on a &lt;a href="https://www.lawfareblog.com/confused-us-messaging-campaign-huawei" target="_blank"&gt;diplomatic messaging tour&lt;/a&gt; of the world to try to convince — with great resistance (some would argue failure) — allies, partners, and potential partners alike to ban Chinese firm Huawei from supplying components of 5G networks. Many officials across Europe, the Middle East, South America, and elsewhere have been reluctant to ban Huawei per the U.S. recommendation, and India is no exception. Indeed, National Security Advisory Board Chairman P.S. Raghavan &lt;a href="https://www.thehindu.com/news/national/on-5g-and-data-india-stands-with-developing-world-not-us-japan-at-g20/article28207169.ece/amp/?__twitter_impression=true" target="_blank"&gt;told&lt;/a&gt; &lt;em&gt;The Hindu&lt;/em&gt; that “5G is becoming a fault line in the technology cold war between world powers” and that India must avoid getting caught in these fault lines.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In large part, U.S. diplomatic messaging here has fallen short due to &lt;a href="https://www.lawfareblog.com/confused-us-messaging-campaign-huawei" target="_blank"&gt;heavy conflations&lt;/a&gt; of national security- and trade-related risks; and Trump only contributed further to this fact with his latest &lt;a href="https://twitter.com/JenniferJJacobs/status/1145072073800183808" target="_blank"&gt;reference&lt;/a&gt; to Huawei, during the G-20, as a potential trade war bargaining chip. The sheer population of India, however, combined with its fast growing technology sectors and &lt;a href="http://www.cmai.asia/digitalindia/" target="_blank"&gt;desire to digitize&lt;/a&gt;, makes the country an important market player when it comes to the 5G revolution. U.S.-India engagement on 5G issues must be managed effectively through robust articulation of each country’s national interests underscored by a clean segregation of trade and security questions in the discussion. This partnership has the potential to wield great influence in the global market, including in ways that could prioritize or deprioritize certain 5G equipment suppliers (like Huawei).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Data Sovereignty and Data Privacy&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Data sovereignty is another hot area in which the U.S.-India tech relationship demands careful negotiation. Over the past year, the Indian government has &lt;a href="https://twitter.com/cis_india/status/1143096429298085889" target="_blank"&gt;introduced a range of policy instruments&lt;/a&gt; which dictate that certain kinds of data must be stored in servers located physically within India — termed “&lt;a href="https://cis-india.org/internet-governance/resources/the-localisation-gambit.pdf" target="_blank"&gt;data localization&lt;/a&gt;.” While there are &lt;a href="https://cis-india.org/internet-governance/resources/the-localisation-gambit.pdf" target="_blank"&gt;a number of policy objectives&lt;/a&gt; this gambit ostensibly seeks to serve, the two which stand out are (1) the presently cumbersome process for Indian law enforcement agencies to access data stored in the U.S. during criminal investigations, and (2) extractive economic models used by U.S. companies operating in India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A range of conflicting developments emerging from the G-20 summit underscore this fact. India, along with the BRICS grouping, &lt;a href="https://mea.gov.in/bilateral-documents.htm?dtl/31506/Joint+Statement+on+BRICS+Leaders+Informal+Meeting+on+the+margins+of+G20+Summit" target="_blank"&gt;focused&lt;/a&gt; on the development dimensions of data governance and re-emphasized the need for &lt;a href="https://www.youtube.com/watch?v=0a8YsZQ0F6k&amp;amp;feature=youtu.be" target="_blank"&gt;data sovereignty&lt;/a&gt; — broadly understood as the sovereign right of nations to govern data in their national interest for the welfare of their citizens. President Trump &lt;a href="https://www.whitehouse.gov/briefings-statements/remarks-president-trump-g20-leaders-special-event-digital-economy-osaka-japan/" target="_blank"&gt;reigned in his focus&lt;/a&gt; on the need for cross-border data flows and, in direct opposition to some proposals that have emerged from India, explicitly opposed data localization. While India did not sign the &lt;a href="https://www.international.gc.ca/world-monde/international_relations-relations_internationales/g20/2019-06-29-g20_declaration-declaration_g20.aspx?lang=eng" target="_blank"&gt;Osaka Declaration on the Digital Economy&lt;/a&gt; that promoted cross-border data flows, the importance of cross-border data flows in spurring the global economy did find its way into the &lt;a href="https://g20.org/pdf/documents/en/FINAL_G20_Osaka_Leaders_Declaration.pdf" target="_blank"&gt;Final G-20 Leaders Declaration&lt;/a&gt; — which, of course, both countries signed.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Geopolitically, the importance of India’s data governance stance cannot be overstated as it could pave the way for the approach adopted by other emerging economies — most notably the BRICS countries. Likewise, the U.S. has important thinking to do around such questions as what shape a national data privacy law could take. Even though the two countries’ views on data may be quite different from one another, the seats that India and the U.S. have at the table for &lt;a href="https://www.theatlantic.com/international/archive/2019/06/g20-data/592606/" target="_blank"&gt;global data governance discussions&lt;/a&gt; — alongside others like Japan, China, and the European Union — underscore the value of meaningful interactions and mutual trust and respect on this issue.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Norms for a Democratic Digital Future&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Finally, as the &lt;a href="https://www.un.org/disarmament/ict-security/" target="_blank"&gt;United Nations Group of Governmental Experts&lt;/a&gt; and the &lt;a href="https://www.un.org/disarmament/open-ended-working-group/" target="_blank"&gt;Open-Ended Working Group&lt;/a&gt; meet to resurrect the norm-formulation process for fostering responsible state behavior in cyberspace, India has some homework to do.  Even though it has been a member of five out of the six Group of Governmental Experts set up thus far, India is yet to come out with a public statement delineating its views on the applicability of International Law applies in cyberspace. Further, India has also failed to articulate a cohesive digital strategy — instead relying on a patchwork of hastily rolled out and often ill-conceived regulatory policies, some of which commentators in the West &lt;a href="https://www.nytimes.com/2019/02/14/technology/india-internet-censorship.html" target="_blank"&gt;have hastily labeled&lt;/a&gt; as digital authoritarianism. The U.S., for its part, amidst a &lt;a href="https://www.newamerica.org/cybersecurity-initiative/c2b/c2b-log/four-opportunities-for-states-new-cyber-bureau/" target="_blank"&gt;cutback&lt;/a&gt; to diplomatic cyber engagement (as part of cutbacks to diplomacy writ large), could also up its support of international engagement on these issues. Its recent repeal of net neutrality protections could also be argued as a step back from long-time international &lt;a href="https://d1y8sb8igg2f8e.cloudfront.net/documents/The_Idealized_Internet_vs._Internet_Realities_Version_1.0_2018-07-25_203930.pdf" target="_blank"&gt;norm promotion&lt;/a&gt; around internet openness.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Through a combination of domestic policy gambits and foreign policy maneuvers, both states need to draw lines in the sand that safeguard human rights, international law, and democracy online, while arriving at some balance with each other’s national interests.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A primary example lies with artificial intelligence (AI). AI has found increasing use in digital authoritarianism, as dictators use automated, intelligent systems to boost their surveillance capabilities. The Chinese government has arguably been at the &lt;a href="https://freedomhouse.org/report/freedom-net/freedom-net-2018" target="_blank"&gt;forefront&lt;/a&gt; of this enhanced level of authoritarian rule for the digital age.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In addition to &lt;a href="https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinese-government-outlines-ai-ambitions-through-2020/" target="_blank"&gt;focusing&lt;/a&gt; on AI applications for everything from natural language processing to self-driving cars — through investments, strategies, policy documents, and so on — Beijing has also been &lt;a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html" target="_blank"&gt;deploying&lt;/a&gt; AI in the service of large-scale human-rights abuses. Chinese strategy papers on AI, while similarly emphasizing many commercial or benign applications and raising attention to such issues as algorithmic fairness, concurrently have &lt;a href="https://www.newamerica.org/cybersecurity-initiative/digichina/blog/online-symposium-chinese-thinking-ai-security-comparative-context/" target="_blank"&gt;discussed&lt;/a&gt; using AI for “social governance,” censorship, and surveillance. To combat the rising intersection of AI and digital authoritarianism, the U.S. and India could wield enormous leverage — as the two largest democracies in the world — in governing these technologies in a democratic fashion that counters &lt;a href="https://www.newamerica.org/cybersecurity-initiative/reports/essay-reframing-the-us-china-ai-arms-race/" target="_blank"&gt;dangerous arms-race narratives&lt;/a&gt; and uses of AI for surveillance and repression.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The same goes for paying attention to technology exports and diffusion to human-rights abusers. For instance, companies incorporated in China, among those incorporated elsewhere, have been &lt;a href="https://www.cfr.org/blog/authoritarians-are-exporting-surveillance-tech-and-it-their-vision-internet" target="_blank"&gt;heavily involved&lt;/a&gt; in exports of dual-use surveillance technologies to other countries, including those with questionable or outright poor human-rights records. Although companies incorporated in democracies may engage in such practices as well, most democracies take steps to curtail these practices as much as possible, such as through the multilateral Wassenaar Arrangement — which lays out export controls around conventional weapons and dual-use goods and technologies. The U.S. has long been a party to this agreement, and India &lt;a href="https://economictimes.indiatimes.com/news/defence/wassenaar-arrangement-decides-to-make-india-its-member/articleshow/61975192.cms?from=mdr" target="_blank"&gt;officially joined&lt;/a&gt; in 2018. Arguments persist about the extent to which Beijing is involved in these dual-use surveillance technology exports, but these exports may only increase going forward as companies &lt;a href="https://www.newamerica.org/weekly/edition-254/long-view-digital-authoritarianism/" target="_blank"&gt;increasingly&lt;/a&gt; sell not just internet surveillance tools but also dual-use AI tools. In this way, too, India and the U.S. could play an important role in countering the spread of such capabilities to human-rights abusers and standing against the spread of digital authoritarianism in the process.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The relationship here is, therefore, one that requires careful navigation for its significant geopolitical, economic, and ideological consequences. For the future of the technological relationship between the world’s largest democracies—and the extent to which they respect each other’s strategic autonomy while converging on issues of mutual interest—could determine the future of global digital governance.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond'&gt;https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Justin Sherman and Arindrajit Basu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Telecom</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-05T02:19:09Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development">
    <title>The Centre for Internet and Society’s comments and recommendations to the: Report on AI Governance Guidelines Development</title>
    <link>https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development</link>
    <description>
        &lt;b&gt;The Centre for Internet &amp; Society (CIS) submitted its comments and recommendations on the Report on AI Governance Guidelines Development.&lt;/b&gt;
        
&lt;p&gt;With research assistance by Anuj Singh&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;I. Background&lt;/h2&gt;
&lt;p&gt;On 6 January 2025, a Subcommittee on ‘AI Governance and Guidelines Development’ under the Advisory Group put out the Report on AI Governance Guidelines Development, which advocated for a whole-of-government approach to AI governance. This sub-committee was constituted by the Ministry of Electronics and Information Technology (MeitY) on 9 November 2023 to analyse gaps and offer recommendations for developing a comprehensive framework for the governance of Artificial Intelligence (AI). As various AI governance conversations take centre stage, this is a welcome step, and we hope that there are more opportunities, through public comments and consultations, to improve on this important AI document.&lt;br /&gt;&lt;br /&gt;CIS’s comments are in line with the submission guidelines; we have provided both comments and suggestions based on the headings and text provided in the report.&lt;/p&gt;
&lt;h2&gt;II. Governance of AI&lt;/h2&gt;
&lt;p&gt;The subcommittee report has explained its reasons for staying away from a definition. However, it would be helpful to set out the scope of AI at the outset of the report, given that different AI systems have different roles and functionalities. A clearer framework at the beginning can help readers better understand the scope of the conversation in the report. This section also states that AI can now “perform complex tasks without active human control or supervision”. While there are instances where AI is used without active human control, there is a need to emphasise the importance of humans in the loop. This has also been highlighted in the &lt;a href="https://oecd.ai/en/dashboards/ai-principles/P6"&gt;OECD AI principles&lt;/a&gt;, which this report draws inspiration from.&lt;/p&gt;
&lt;h3&gt;A. AI Governance Principles&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;A proposed list of AI Governance principles (with their explanations) is given below.&lt;/strong&gt;&lt;br /&gt;While referring to the OECD AI principles is a good first step in understanding global best practices, it is suggested that an exercise in mapping all global AI principles documents published by international and multilateral organisations and civil society be undertaken, to determine the principles that are most important for India. The OECD AI principles also come from regions that have better internet penetration and higher literacy rates than India; hence, for them, the principle of “digital by design governance” may be achievable, but in India a digital-first approach, especially in governance, could lead to large-scale exclusion.&lt;/p&gt;
&lt;h3&gt;B. Considerations to operationalise the principles&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;1. Examining AI systems using a lifecycle approach &lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The sub-committee has taken a novel approach to defining the AI life cycle. The terms “Development, Deployment and Diffusion” have not appeared in any of the major publications on the AI lifecycle. While academics (e.g. &lt;a href="https://www.sciencedirect.com/org/science/article/pii/S1438887123002224"&gt;Chen et al. (2023)&lt;/a&gt;, &lt;a href="https://www.cell.com/patterns/pdfExtended/S2666-3899(22)00074-5"&gt;De Silva and Alahakoon (2022)&lt;/a&gt;) have pointed out that the AI life cycle contains the stages of design, development and deployment, others (&lt;a href="https://www.sciencedirect.com/science/article/pii/S2666389922000745"&gt;Ng et al. (2022)&lt;/a&gt;) have defined it as “data creation, data acquisition, model development, model evaluation and model deployment”. Even NASSCOM’s &lt;a href="https://nasscom.in/ai/pdf/the-developer%27s-playbook-for-responsible-ai-in-india.pdf"&gt;Responsible AI Playbook&lt;/a&gt; lists “conception, designing, development and deployment” as some of the key stages in the AI life cycle. Similarly, the OECD recognises “i) ‘design, data and models’; ii) ‘verification and validation’; iii) ‘deployment’; and iv) ‘operation and monitoring’” as the phases of the AI life cycle. The sub-committee could hence provide a citation as well as a justification for this novel approach to the AI lifecycle, and state the reason for moving away from the recognised stages. Steering away from an understood approach could cause confusion amongst stakeholders who may not be well versed with AI terminologies and the AI lifecycle to begin with.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2. Taking an ecosystem-view of AI actors &lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While the report rightly states that multiple actors are involved across the AI lifecycle, it is also important to note that the same actor could be involved in multiple stages of the AI lifecycle. For example, take the case of an AI app used for disease diagnosis: the medical professional can be the data principal (using their own data), the data provider (using the app and thereby providing data), and the end user (someone who is using the app for diagnosis). Similarly, a government body can be the data provider, the developer (if the system is made in-house or outsourced through tenders), the deployer, as well as the end user. Hence, for each AI application there might be multiple actors who play different roles, and whose roles might not be static.&lt;br /&gt;&lt;br /&gt;While looking at governance approaches, the approach must ideally not be limited to responsibilities and liabilities, especially when the “data principal” and individual end users are highlighted as actors; the approach should also include rights and means of redressal in order to be a rights-based, people-centric approach to AI governance.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3. Leveraging technology for governance &lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While the use of a techno-legal approach in governance is picking up speed, there is a need to look at existing central and state capacity to undertake this, and to examine the ways this could affect people who still do not have access to the internet. One example of a techno-legal approach that has seen some success is the &lt;a href="https://www.techinasia.com/indian-state-running-pilot-put-land-records-blockchain"&gt;Bhumi programme&lt;/a&gt; in Andhra Pradesh, which used blockchain for land records; however, this also weakened local institutions and led to the exclusion of marginalised people (&lt;a href="https://www.tandfonline.com/doi/full/10.1080/01436597.2021.2013116"&gt;Kshetri, 2021&lt;/a&gt;). It was also noted that there was a need to strengthen existing institutions before using a technological measure.&lt;br /&gt;&lt;br /&gt;Secondly, while the sub-committee has emphasised the improvements in the quality of generative AI tools, there is a need to assess how these tools work for Indian use cases. It was reported last year that ChatGPT could not answer all the questions relating to the Indian civil services exam and failed to correctly answer questions on geography, even though it was able to crack &lt;a href="https://indiaai.gov.in/news/chatgpt-fails-to-clear-the-prestigious-civil-service-examination"&gt;tough exams in the USA&lt;/a&gt;. In addition, a month ago the Finance Ministry advised government officials to refrain from using generative AI tools on official devices for fear of leakage of &lt;a href="https://www.thehindu.com/sci-tech/technology/indias-finance-ministry-asks-employees-to-avoid-ai-tools-like-chatgpt-deepseek/article69183180.ece"&gt;confidential information&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Thirdly, the subcommittee needs to assess India’s data preparedness for this scale of techno-legal approach. In our study on healthcare and AI in India, in which we surveyed medical professionals, hospitals and technology companies, a common understanding was that data quality in Indian datasets was an issue, and that there was some reliance on data from the global North. This could be similar in other sectors as well; hence, when this data is used to train systems it could lead to harms and biases.&lt;/p&gt;
&lt;h2&gt;III. Gap Analysis&lt;/h2&gt;
&lt;h3&gt;A. The need to enable effective compliance and enforcement of existing laws.&lt;/h3&gt;
&lt;p&gt;The sub-committee has highlighted the importance of ensuring that the growth of AI does not lead to unfair trade practices and market dominance. It is hence important to analyse whether the existing laws on antitrust and competition, and the regulatory capacity of the Competition Commission of India, are robust enough to deal with AI and the change in technology and technology developers.&lt;/p&gt;
&lt;p&gt;There is also an urgent need to assess the issues that might come under the ambit of competition throughout the lifecycle of AI, including in areas of chip manufacturing, compute, data, models and IP. While the players could keep changing in this evolving area of technology there is a need to strengthen the existing regulatory system, before looking at techno legal measures.&lt;/p&gt;
&lt;p&gt;We suggest that before a techno-legal approach is sought in all forms of governance, there is an urgent need to map the existing regulations, both central and state, assess how they apply to regulating AI, and assess the capacity of existing regulatory bodies to regulate issues of AI. In the case of healthcare, for example, there are multiple laws, policies and guidelines, as well as regulatory bodies, that apply to various stages of healthcare and various actors; at times these regulations do not refer to each other, or cause duplications, which could lead to a &lt;a href="https://www.kas.de/documents/d/politikdialog-asien/panorama_2024-1-107-122"&gt;lack of clarity&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Below we add our comments and suggestions on certain subsections of this section on &lt;strong&gt;the need to enable effective compliance and enforcement of existing laws&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;1. Intellectual property rights&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;a. Training models on copyrighted data and liability in case of&amp;nbsp; infringement&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While Section 14 of the Indian Copyright Act, 1957 provides copyright holders with exclusive rights to copy and store works, considering the fact that training AI models involves making &lt;a href="https://spicyip.com/2019/08/should-indian-copyright-law-prevent-text-and-data-mining.html"&gt;non-expressive uses of work&lt;/a&gt;, a straightforward conclusion may not be drawn easily. Hence, the presumption that training models on copyrighted data constitutes infringement is premature and unfounded.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The report states: “The Indian law permits a very closed list of activities in using copyrighted data without permission that do not constitute an infringement. Accordingly, it is clear that the scope of the exception under Section 52(1)(a)(i) of the Copyright Act, 1957 is extremely narrow. Commercial research is not exempted; not-for-profit institutional research is not exempted. Not-for-profit research for personal or private use, not with the intention of gaining profit and which does not compete with the existing copyrighted work is exempted.”&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Indian copyright law follows a ‘hybrid’ model of limitations and exceptions under Section 52(1). Section 52(1)(a), the ‘fair dealing’ provision, is more open-ended than the rest of the clauses in the section. Specifically, the Indian fair dealing provision permits fair dealing with any work (not being a computer programme) for the purposes of private or personal use, including research. &lt;br /&gt; &lt;br /&gt; If India is keen on indigenous AI development, particularly of foundation models, it should work towards developing frameworks for suitable exceptions, as may be appropriate. Lawmakers could distinguish between different types of copyrighted works and public-interest purposes while considering the issues of infringement and liability.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;b. Copyrightability of work generated by using foundation models &lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We suggest that a public consultation would be a useful exercise in ensuring that the opinions and concerns of all stakeholders, including copyright holders, authors, and users, are taken into account.&lt;/p&gt;
&lt;h3&gt;C. The need for a whole-of-government approach.&lt;/h3&gt;
&lt;p&gt;While information existing in silos is a significant issue and roadblock, if the many existing guidelines and principles have taught us anything, it is that without specificity and direct applicability it is difficult for implementers to translate principles into their development, deployment, and governance mechanisms. The committee assumes that the government has a sectoral understanding of the various players in highly regulated sectors such as healthcare or financial services. However, as our recent study on &lt;a href="https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india"&gt;AI in healthcare&lt;/a&gt; indicates, there are significant gaps in the shared understanding of what data is being used for AI development, where AI models are being developed, and what kinds of partnerships are being entered into for the development and deployment of AI systems. While the report highlights concerns about the siloed regulatory framework, it is also important to consider how sector-specific challenges lend themselves to cross-sectoral discussion.&lt;/p&gt;
&lt;p&gt;Consider an AI credit-scoring system in financial services that leads to exclusion errors, alongside an AI system deployed for disease diagnosis. While both use predictive AI, the nature of risk and harm is different in each case. While there can be common, broad frameworks to test the efficacy of both AI models, the exact parameters for testing them would have to be unique. It will therefore be important to consider where bringing together cross-sectoral stakeholders will be useful, and where deeper work is needed at the sector level.&lt;/p&gt;
&lt;h2&gt;IV. Recommendations&lt;/h2&gt;
&lt;h3&gt;1. To implement a whole-of-government approach to AI Governance, MeitY and the Principal Scientific Adviser should establish an empowered mechanism to coordinate AI Governance.&lt;/h3&gt;
&lt;p&gt;We would like to reiterate the earlier section and highlight the importance of considering how sector-specific challenges lend themselves to cross-sectoral discussion. While a whole-of-government approach is valuable in building a common understanding between different government institutions, it may not be sufficient for AI governance, because it rests on the implicit assumption that internal coordination among various government bodies is enough to manage AI-related risks.&lt;/p&gt;
&lt;h3&gt;2. To develop a systems-level understanding of India’s AI ecosystem, MeitY should establish, and administratively house, a Technical Secretariat to serve as a technical advisory body and coordination focal point for the Committee/Group.&lt;/h3&gt;
&lt;p&gt;The Subcommittee report states that, at this stage, it is not recommended to establish the Committee/Group or its Secretariat as statutory authorities, as such a decision requires significant analysis of gaps, requirements, and possible unintended outcomes. While these are valid considerations, adequate checks and balances are still necessary. If the Secretariat is placed within MeitY, safeguards must be in place to ensure that officials have autonomy in decision making. The subcommittee suggests that MeitY can bring in officials on deputation from other departments. Similarly, the committee proposes bringing in experts from industry; while this is important for informed policy making, it also carries a risk of &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4931927"&gt;regulatory capture&lt;/a&gt;. Setting a cap on the proportion of industry representatives and requiring full disclosure of the affiliations of the experts involved are some safeguards that can be considered. We also suggest that members of civil society be considered for this Secretariat.&lt;/p&gt;
&lt;h3&gt;3. To build evidence on actual risks and to inform harm mitigation, the Technical Secretariat should establish, house, and operate an AI incident database as a repository of problems experienced in the real world that should guide responses to mitigate or avoid repeated bad outcomes.&lt;/h3&gt;
&lt;p&gt;The report suggests that the Technical Secretariat will build a record of actual incidents of AI-related risks in India. In most instances, an AI incident database assumes that an unfavourable AI-related incident has already taken place, which implies that it is no longer a potential risk but an actual harm. This recommendation therefore takes a post-facto approach to assessing AI systems, as opposed to conducting risk assessments prior to the actual deployment of an AI system. Further, it lays emphasis on receiving reports from public sector organisations deploying AI systems. Given that public sector organisations would, in many cases, be the deployers of AI systems rather than the developers, they may have limited knowledge of the tools’ functionality, and therefore of the associated risks and harms.&lt;/p&gt;
&lt;p&gt;It is important to clarify and define what will be considered an AI risk, as this could depend on the stakeholder: losing clients due to an AI system is a risk for a company, just as being denied health insurance because of AI bias is a risk for an individual. With this understanding, while there is a need to keep actively assessing risks and the emergence of new risks, the Technical Secretariat could also undertake a mapping of the existing risks highlighted by academia, civil society, and international organisations, and seed the risk database with that. In addition, the “AI incident database” should also be open to research institutions and civil society organisations, similar to &lt;a href="https://oecd.ai/en/incidents"&gt;the OECD AI Incidents Monitor&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;4. To enhance transparency and governance across the AI ecosystem, the Technical Secretariat should engage the industry to drive voluntary commitments on transparency across the overall AI ecosystem and on baseline commitments for high-capability/widely deployed systems.&lt;/h3&gt;
&lt;p&gt;It is commendable that the subcommittee in this report extends the transparency requirement to the government, with the example of law enforcement. This would create more trust in the systems and also place responsibility on the companies providing these services to comply with existing laws and regulations.&lt;/p&gt;
&lt;p&gt;While the transparency measures listed will ensure a better understanding of the processes of AI developers and deployers, there is also a need to bring in responsibility alongside transparency. While the report mentions ‘peer review by third parties’, we would also suggest auditing as a mechanism for ensuring transparency and responsibility. Our study on &lt;a href="https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india-pdf"&gt;AI data supply chain &amp;amp; auditability and healthcare in India&lt;/a&gt;, which surveyed 150 medical professionals, 175 respondents from healthcare institutions, and 175 respondents from technology companies, revealed that 77 percent of the healthcare institutions and 64 percent of the technology companies surveyed conducted audits or evaluations of their privacy and security measures for data.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://cis-india.org/home-images/AIGovernanceComments.png" alt="null" class="image-inline" title="AI Governance Comments" /&gt;&lt;/p&gt;
&lt;div class="visualClear"&gt;Source: CIS survey of professionals in AI and healthcare, January- April 2024. Medical professionals (n = 133); healthcare institutions (n = 162); technology companies (n = 171)&lt;/div&gt;
&lt;div class="visualClear"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h3&gt;5. Form a sub-group to work with MeitY to suggest specific measures that may be considered under proposed legislation such as the Digital India Act (DIA) to strengthen and harmonise the legal framework, regulatory and technical capacity, and the adjudicatory set-up for the digital industries, to ensure effective grievance redressal and ease of doing business.&lt;/h3&gt;
&lt;p&gt;It would be necessary to provide some clarity on where the Digital India Act process currently stands. While there were public consultations in 2023, there has been no public update on the progress of the Act. The most recent discussion on the Act was in January 2025, when S Krishnan, Secretary, Ministry of Electronics and IT (MeitY), &lt;a href="https://www.financialexpress.com/life/technology-will-not-rush-in-bringing-digital-india-act-meity-secretary-3708673/"&gt;stated&lt;/a&gt; that the ministry was in no hurry to carry forward the draft Digital India Act and a regulatory framework around AI. He also stated that the existing legal frameworks were currently sufficient to handle AI intermediaries. &lt;br /&gt; &lt;br /&gt; We would also like to highlight that during the consultations on the DIA it was proposed that the Act would replace the &lt;a href="https://vidhilegalpolicy.in/blog/explained-the-digital-india-act-2023/"&gt;Information Technology Act, 2000&lt;/a&gt;. It is necessary that the subcommittee provide clarity on this, since if the DIA is enacted, this report’s Section III on gap analysis, especially around the IT Act and cyber security, will need to be revisited.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development'&gt;https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas, Amrita Sengupta and Anubha Sinha</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2025-03-06T06:32:45Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency">
    <title>Towards Algorithmic Transparency</title>
    <link>https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency</link>
    <description>
        &lt;b&gt;This policy brief examines the issue of transparency as a key ethical component in the development, deployment, and use of Artificial Intelligence.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;This brief proposes a framework that seeks to overcome the challenges in preserving transparency when dealing with machine learning algorithms, and suggests solutions such as the incorporation of audits, and ex ante approaches to building interpretable models right from the design stage. Read the full report &lt;a href="https://cis-india.org/internet-governance/algorithmic-transparency-pdf" class="internal-link" title="Algorithmic Transparency PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The Regulatory Practices Lab at CIS aims to produce regulatory policy 
suggestions focused on India, but with global application, in an agile 
and targeted manner and to promote transparency around practices 
affecting digital rights. &lt;br /&gt;The Regulatory Practices Lab is supported by Google and Facebook.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency'&gt;https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Radhika Radhakrishnan, and Amber Sinha</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Regulatory Practices Lab</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Featured</dc:subject>
    
    
        <dc:subject>Algorithms</dc:subject>
    
    
        <dc:subject>internet governance</dc:subject>
    
    
        <dc:subject>Transparency</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-07-15T13:16:44Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/owasp-seasides-conference">
    <title>OWASP Seasides Conference</title>
    <link>https://cis-india.org/internet-governance/news/owasp-seasides-conference</link>
    <description>
        &lt;b&gt;Karan Saini attended the OWASP Seasides security conference held on February 27 and 28, 2019 at Cavelossim, Goa. The event was organized by OWASP Seasides.&lt;/b&gt;
        &lt;p&gt;For conference details &lt;a class="external-link" href="https://www.owaspseasides.com/schedule/workshops"&gt;click here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/owasp-seasides-conference'&gt;https://cis-india.org/internet-governance/news/owasp-seasides-conference&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-03-07T23:53:47Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india">
    <title> AI for Healthcare: Understanding Data Supply Chain and Auditability in India </title>
    <link>https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india</link>
    <description>
        &lt;b&gt;This report aims to understand the prevalence and use of AI auditing practices in the healthcare sector. By mapping the data supply chain underlying AI technologies, the study aims to unpack i) how AI systems are developed and deployed to achieve healthcare outcomes and, ii) how AI audits are perceived and implemented by key stakeholders in the healthcare ecosystem. &lt;/b&gt;
        
&lt;p dir="ltr"&gt;Read our full report &lt;a href="https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india-pdf" class="internal-link" title="AI for Healthcare: Understanding Data Supply Chain and Auditability in India PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p dir="ltr"&gt;The use of artificial intelligence (AI) technologies constitutes a significant development in the Indian healthcare sector, with industry and government actors showing keen interest in designing and deploying these technologies. Even as key stakeholders explore ways to incorporate AI systems into their products and workflows, a growing debate on the accessibility, success, and potential harms of these technologies continues, along with several concerns over their large-scale adoption. A recurring question in India and the world over is whether these technologies serve a wider interest in public health. For example, the discourse on ethical and responsible AI in the context of emerging technologies and their impact on marginalised populations, climate change, and labour practices has been especially contentious.&lt;/p&gt;
&lt;p dir="ltr"&gt;For the purposes of this study, we define AI in healthcare as the use of artificial intelligence and related technologies to support healthcare research and delivery. The use cases include assisted imaging and diagnosis, disease prediction, robotic surgery, automated patient monitoring, medical chatbots, hospital management, drug discovery, and epidemiology. The emergence of AI auditing mechanisms is an essential development in this context, with several stakeholders ranging from big-tech to smaller startups adopting various checks and balances while developing and deploying their products. While auditing as a practice is neither uniform nor widespread within healthcare or other sectors in India, it is one of the few available mechanisms that can act as guardrails in using AI systems.&lt;/p&gt;
&lt;p id="docs-internal-guid-874e64d9-7fff-d16c-ed57-d245c7214bec" dir="ltr"&gt;Our primary research questions are as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;What is the current data supply chain infrastructure for organisations operating in the healthcare ecosystem in India?&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;What auditing practices, if any, are being followed by technology companies and healthcare institutions?&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;What best practices can organisations based in India adopt to improve AI auditability?&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p id="docs-internal-guid-28d92dc2-7fff-c54b-addb-63beee845252" dir="ltr"&gt;This was a mixed methods study, comprising a review of available literature in the field, followed by quantitative and qualitative data collection through surveys and in-depth interviews. The findings from the study offer essential insights into the current use of AI in the healthcare sector, the operationalisation of the data supply chain, and policies and practices related to health data sourcing, collection, management, and use. It also discusses ethical and practical challenges related to privacy, data protection and informed consent, and the emerging role of auditing and other related practices in the field. Some of the key learnings related to the data supply chain and auditing include:&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Technology companies, medical institutions, and medical practitioners rely on an equal mix of proprietary and open sources of health data and there is significant reliance&amp;nbsp; on datasets from the Global North.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Data quality checks are extant, but they are seen as an additional burden; with the removal of personally identifiable information being a priority during processing.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Collaboration between medical practitioners and AI developers remains limited, and feedback between users and developers of these technologies is limited.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;There is a heavy reliance on external vendors to develop AI models, with many models replicated from existing systems in the Global North.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Healthcare professionals are hesitant to integrate AI systems into their workflows, with a significant gap stemming from a lack of training and infrastructure to integrate these systems successfully.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;The understanding and application of audits are not uniform across the sector, with many stakeholders prioritising more mainstream and intersectional concepts such as data privacy and security in their scope.&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p dir="ltr"&gt;Based on these findings, this report offers a set of recommendations addressed to different stakeholders such as healthcare professionals and institutions, AI developers, technology companies, startups, academia, and civil society groups working in health and social welfare. These include:&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Improve data management across the AI data supply chain&lt;span class="Apple-tab-span"&gt; &lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Adopt standardised data-sharing policies&lt;/em&gt;. This would entail building a standardised policy that adopts an intersectional approach to include all stakeholders and areas where data is collected to ensure their participation in the process. This would also require robust feedback loops and better collaboration between the users, developers, and implementers of the policy (medical professionals and institutions), and technologists working in AI and healthcare. &lt;span class="Apple-tab-span"&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Emphasise not just data quantity but also data quality&lt;/em&gt;. Given that the limited quantity and quality of Indian healthcare datasets present significant challenges, institutions engaged in data collection must consider their interoperability to make them available to diverse stakeholders and ensure their security. This would include recruiting additional support staff for digitisation to ensure accuracy and safety and maintain data quality.&lt;span class="Apple-tab-span"&gt; &lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Streamline AI auditing as a form of governance&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Standardise the practice of AI auditing&lt;/em&gt;. A certain level of standardisation in AI auditing would contribute to the growth and contextualisation of these practices in the Indian healthcare sector. Similarly, it would also aid in decision-making among implementing institutions.&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Build organisational knowledge and inter-stakeholder collaboration&lt;/em&gt;. It is imperative to build knowledge and capacity among technical experts, healthcare professionals, and auditors on the technical details of the underlying architecture and socioeconomic realities of public health. Hence, collaboration and feedback are essential to enhance model development and AI auditing.&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Prioritise transparency and public accountability in auditing standards&lt;/em&gt;. Given that most healthcare institutions procure externally developed AI systems, some form of internal or external AI audit would contribute to better public accountability and transparency of these technologies.&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Centre public good in India’s AI industrial policy&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Adopt focused and transparent approaches to investing in and financing AI projects&lt;/em&gt;. An equitable distribution of AI spending and associated benefits is essential to guarantee that these investments and their applications extend beyond private healthcare, and that implementation approaches prioritise the public good. This would involve investing in entire AI life cycles instead of merely focusing on development and promoting transparent public–private partnerships.&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Strengthen regulatory checks and balances for AI governance.&lt;/em&gt;&lt;br /&gt;While an overarching law to regulate AI technologies may still be under debate, existing regulations may be amended to bring AI within their ambit. Furthermore, all regulations must be informed by stakeholder consultations to guarantee that the process is transparent, addresses the rights and concerns of all the parties involved, and prioritises the public good.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india'&gt;https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amrita Sengupta (PI), Shweta Mohandas (Co-PI), (In alphabetical order) Abhineet Nayyar, Chetna VM, Puthiya Purayil Sneha, Yatharth</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Health Tech</dc:subject>
    
    
        <dc:subject>RAW Publications</dc:subject>
    
    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>Featured</dc:subject>
    
    
        <dc:subject>Healthcare</dc:subject>
    
    
        <dc:subject>Homepage</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2024-11-30T08:17:48Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>




</rdf:RDF>
