<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">
<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 21 to 35.</description>
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/responsible-ai-workshop"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/ai-in-healthcare"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/policies-for-the-platform-economy"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier"/>
      <rdf:li rdf:resource="https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/ai-for-good-workshop"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china"/>
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft">
    <title>Artificial Intelligence: a Full-Spectrum Regulatory Challenge [Working Draft]</title>
    <link>https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft</link>
    <description>
        
&lt;p&gt;Today, there are certain misconceptions regarding the regulation of AI. Some corporations would like us to believe that AI is being developed and used in a regulatory vacuum. Others, in civil society organisations, believe that AI is a regulatory circumvention strategy deployed by corporations, and as a result call for onerous regulations targeting corporations. However, some uses of AI by corporations can be completely benign, and some uses of AI by the state can result in the most egregious human rights violations. Therefore, policy makers need to use every regulatory tool in their arsenal to unlock the benefits of AI and mitigate its harms.&lt;/p&gt;
&lt;p&gt;This policy brief proposes a granular, full-spectrum approach to the regulation of AI, depending on who is using AI, who is impacted by that use, and which human rights are affected. Everything from deregulation, to forbearance, to updated regulations, to absolute and blanket prohibitions needs to be considered, depending on the specifics. This approach stands in contrast to approaches based on ethics, omnibus law, homogeneous principles, or human rights alone, which result in inappropriate under-regulation or over-regulation of the sector.&lt;/p&gt;
&lt;p&gt;Find a copy of the working draft &lt;a href="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft-pdf" class="internal-link" title="Artificial Intelligence: A Full-Spectrum Regulatory Challenge (Working Draft) PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft'&gt;https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>sunil</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Regulatory Practices Lab</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-08-04T06:10:13Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/responsible-ai-workshop">
    <title>Responsible AI Workshop</title>
    <link>https://cis-india.org/internet-governance/news/responsible-ai-workshop</link>
    <description>
        &lt;b&gt;Sunil Abraham participated in this meeting organized by Facebook on September 17, 2019 in New Delhi. &lt;/b&gt;
        &lt;p&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/responsible-ai"&gt;Click to view the agenda&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/responsible-ai-workshop'&gt;https://cis-india.org/internet-governance/news/responsible-ai-workshop&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-09-20T14:50:47Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today">
    <title>Talks at National University of Juridical Sciences Today</title>
    <link>https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today</link>
    <description>
        &lt;b&gt;Arindrajit Basu delivered two lectures at the National University of Juridical Sciences on September 18, 2019. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The first one was part of a symposium conducted by the soon-to-be-set-up Intellectual Property and Technology Law Centre. I spoke on "Conceptualising India's Digital Policy Vision". The other speaker today was Mr. Supratim Chakraborty (Partner, Khaitan &amp;amp; Co.). Tomorrow's speakers are Prof. Mahendra Kumar Bhandan and Nikhil Narendran (Partner, Trilegal).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Abstract&lt;/b&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The past year has seen vigorous activity on the domestic data governance policy front in India. Across key issues including intermediary liability, data localisation and e-commerce, the government has rolled out a patchwork of regulatory policies that has resulted in battle lines being drawn by governments, industry and civil society actors both in India and across the globe. The Data Protection Bill is set to be tabled in the next session of Parliament amidst supposed disagreement among policy-makers on key provisions, including data localisation. The draft e-commerce policy and Chapter 4 of the Economic Survey refer to the concepts of ‘community data’ and ‘data as public good’ respectively. Artificial Intelligence is also the new buzzword among policy-making circles and industry players alike.&lt;br /&gt;&lt;br /&gt;The implementation of each of these concepts has important implications for individual privacy, the monetisation of data by foreign tech companies, and the harnessing of, as the e-commerce policy puts it, India’s data for India’s development. Meanwhile, at international forums such as the G20, India has partnered with its BRICS allies to emphasise the notion of ‘data sovereignty’, or the right of each country to govern data within its jurisdiction without external interference.&lt;br /&gt;In his talk, Basu unpacked each of these policies and followed up with a discussion on what these developments meant for Indian citizens and for India’s role in the multilateral global order.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second one was on 'Constitutionalizing Artificial Intelligence' conducted by the Constitutional Law Society. Here, I drew from some preliminary findings from a paper I am working on with Elonnai and Amber.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Abstract&lt;/b&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The use of big data and algorithmic decision-making has been touted the world over as a means of augmenting human capacities, removing bureaucratic fetters and benefiting society. Yet, with concerns arising around bias, fairness and a lack of algorithmic accountability, an entirely new domain of discourse on data justice has emerged, underscoring the idea that algorithms not only have the potential to exacerbate entrenched structural inequality but could also create and modulate new forms of injustice for the vulnerable sections of society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;There is a need for a reflexive turn in the debate on data justice that adequately considers the broader narrative and entrenched inequality in the ecosystem. &lt;/span&gt;&lt;span&gt;Transformative constitutionalism is a new brand of scholarship in comparative constitutional law which celebrates the crucial role of the state and the judiciary in bringing about emancipatory change and rooting out structural inequality.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Though originally conceptualised in the Global South as a counter-model to the individual-rights-driven model of Northern constitutions, transformative constitutionalism has since been identified by scholars in the emancipatory provisions of several Western constitutions, such as Germany’s. India’s constitution is one such example. The origins of constitutional order in India were designed to “bring the alien and powerful machine like that of the state under the control of human will” and to eliminate the inequality of “status, facilities and opportunities.” &lt;br /&gt;&lt;br /&gt;What is the relevance of India's constitutional ethos in the regulation of modern-day data-driven decision-making? How can policy-makers use constitutional tenets to mitigate structural injustice and transform the bearings of 21st-century Indian society?&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today'&gt;https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Industry 4.0</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-09-20T14:45:35Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/ai-in-healthcare">
    <title>AI in Healthcare</title>
    <link>https://cis-india.org/internet-governance/news/ai-in-healthcare</link>
    <description>
        &lt;b&gt;The Center for Information Technology and Public Policy (CITAPP) and the International Institute of Information Technology Bangalore (IIITB) invited Radhika Radhakrishnan for a talk at IIIT-Bangalore on September 13, 2019. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;In her talk, she critically questioned the dominant narrative of “AI for social good” that has been widely adopted by various stakeholders in India (including the private sector, non-profits, and the Indian State) from a feminist standpoint. Specific to healthcare in India, such a narrative has been employed towards solving development challenges (such as a shortage of medical practitioners in remote regions of the country) through the introduction of AI applications targeted towards the sick-poor. Through her research and fieldwork, she analysed the layers of expropriation and experimentation that come into play when AI technologies become a method of using 'diverse' bodies and medical records of the sick-poor as ‘data’ to train proprietary AI algorithms at a low cost in the absence of effective State regulatory mechanisms. She argued that structural challenges (such as lack of incentives for medical practitioners to join public healthcare) get reframed into opportunities to substitute labour (people) by capital (technology) through innovation of “spectacular technologies” such as AI. Throughout the talk, she also highlighted the methodologies she used to conduct this research.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/ai-in-healthcare'&gt;https://cis-india.org/internet-governance/news/ai-in-healthcare&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Industry 4.0</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-09-19T16:15:24Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/policies-for-the-platform-economy">
    <title>Policies for the Platform Economy</title>
    <link>https://cis-india.org/internet-governance/news/policies-for-the-platform-economy</link>
    <description>
        &lt;b&gt;Anubha Sinha and Amber Sinha will be panelists in this event being organized by IT for Change at India Habitat Centre in New Delhi on August 30, 2019.&lt;/b&gt;
        &lt;p&gt;The agenda for the event &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/agenda-for-policies-for-the-platform-economy"&gt;is here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/policies-for-the-platform-economy'&gt;https://cis-india.org/internet-governance/news/policies-for-the-platform-economy&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-08-27T00:19:26Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes">
    <title>Impact of Industrial Revolution 4.0 - IT and Automotive Sector in India by the Dialogue and FES</title>
    <link>https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes</link>
    <description>
        &lt;b&gt;On August 21, 2019, Aayush Rathi attended a report launch event and focus group discussion on the "Impact of Industrial Revolution 4.0 - IT and Automotive Sector in India". Research conducted by the Dialogue in collaboration with the Friedrich-Ebert-Stiftung (FES) was presented.&lt;/b&gt;
        &lt;p class="moz-quote-pre" style="text-align: justify; "&gt;At CIS, we have previously produced research on the future of work in these sectors. Aayush attended the event to understand how other researchers are approaching the subject of the future of work, in terms of the methodological approach, the questions being asked and the policy responses being proposed. In what may be treated as validation of our research design, FES and the Dialogue have addressed similar questions and likewise adopted a combined empirical and desk-based approach.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes'&gt;https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Industry 4.0</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Information Technology</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-08-27T00:13:32Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance">
    <title>Emergence of Chinese Technology: Rising stakes for innovation, competition and governance</title>
    <link>https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance</link>
    <description>
        &lt;b&gt;Omidyar Network in partnership with the Esya Centre organized a private discussion on the theme “Emergence of Chinese technology - rising stakes for innovation, competition and governance” on Monday, 12 August 2019 in New Delhi. Arindrajit Basu attended the event. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The China Ascendant: Soft Power report by Omidyar Network focuses on three prongs of power: digital power, soft power and sharp power. Standards have been a major avenue for the proliferation of Chinese competition. This is combined with knowledge transfer, as 2.8 million Chinese students in the US have largely returned to tech companies in China. China's core strength is still not in basic research, so by 2020 it is aiming for 15 per cent of PhDs to be in basic research. China uses nudges to shape global governance outcomes by targeting the right stakeholders, as opposed to altering the ground rules entirely. Universities in China have focused on how cultural connections can be linked up to negotiating prowess at multilateral fora.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;China takes a whole-of-government approach to technology innovation, and continues to be consumer-focused.&lt;/li&gt;
&lt;li&gt;China does not look at India as an R&amp;amp;D partner, but rather as a market. Stability and unpredictability have been issues. None of India's tech policies were drafted with China in mind.&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance'&gt;https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-08-19T14:03:21Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india">
    <title>Rethinking the intermediary liability regime in India</title>
    <link>https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india</link>
    <description>
        &lt;b&gt;The article consolidates some of our broad thematic concerns with the draft amendments to the intermediary liability rules, published by MeitY last December.&lt;/b&gt;
        &lt;p&gt;The blog post by Torsha Sarkar was &lt;a class="external-link" href="https://cyberbrics.info/rethinking-the-intermediary-liability-regime-in-india/"&gt;published by CyberBRICS&lt;/a&gt; on August 12, 2019.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 style="text-align: justify; "&gt;Introduction&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;In December 2018, the Ministry of Electronics and Information Technology (“MeitY”) released the Intermediary Liability Guidelines (Amendment) Rules (“the Guidelines”), which would significantly alter the intermediary liability regime in the country. While the Guidelines have drawn a considerable amount of attention and criticism, from the perspective of the government, the change has been overdue.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Indian government has been determined to overhaul the pre-existing safe harbour regime since last year. The draft &lt;a href="https://www.medianama.com/wp-content/uploads/Draft-National-E-commerce-Policy.pdf"&gt;version&lt;/a&gt; of the e-commerce policy, which was leaked last year, also hinted at similar plans. As the effects of mass dissemination of disinformation, propaganda and hate speech around the world spill over into offline harms, governments have been increasingly looking to enact interventionist laws that place more responsibility on intermediaries. India has not been an exception.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A major source of such harmful and illegal content in India is the popular communications app WhatsApp, despite the company’s enactment of several anti-spam measures over the past few years. Last year, rumours circulating on WhatsApp prompted a series of lynchings. In May, Reuters &lt;a href="https://in.reuters.com/article/india-election-socialmedia-whatsapp/in-india-election-a-14-software-tool-helps-overcome-whatsapp-controls-idINKCN1SL0PZ" rel="noreferrer noopener" target="_blank"&gt;reported&lt;/a&gt; that clones and software tools were available at minimal cost in the market for politicians and other interested parties to bypass these measures and continue the trend of bulk messaging.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This series of incidents has made it clear that disinformation is a very real problem, and that the current regulatory framework is not enough to address it. The government’s response, accordingly, has been to introduce the Guidelines. This rationale also finds a place in its preliminary &lt;a href="https://www.meity.gov.in/comments-invited-draft-intermediary-rules" rel="noreferrer noopener" target="_blank"&gt;statement of reasons&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While enactment of such interventionist laws has triggered fresh rounds of debate on free speech and censorship, it would be wrong to say that such laws were completely one-sided, or uncalled for.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On one hand, automated amplification and online mass circulation of purposeful disinformation, propaganda, of terrorist attack videos, or of plain graphic content, are all problems that the government would concern itself with. On the other hand, several online companies (including &lt;a href="https://www.blog.google/outreach-initiatives/public-policy/oversight-frameworks-content-sharing-platforms/" rel="noreferrer noopener" target="_blank"&gt;Google&lt;/a&gt;) also seem to be in an uneasy agreement that simple self-regulation of content would not cut it. For better oversight, more engagement with both government and civil society members is needed.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In March this year, Mark Zuckerberg wrote an &lt;a href="https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html?utm_term=.4d177c66782f" rel="noreferrer noopener" target="_blank"&gt;op-ed&lt;/a&gt; for the Washington Post, calling for more government involvement in the process of content regulation on its platform. While it would be interesting to consider how Zuckerberg’s view aligns with those of similarly placed companies, it would nevertheless be correct to say that online intermediaries are under more pressure than ever to keep their platforms clean of content that is ‘illegal, harmful, obscene’. And this list only grows.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;That being said, the criticism from several stakeholders is sharp and clear in instances of such laws being enacted – be it the ambitious &lt;a href="https://www.ivir.nl/publicaties/download/NetzDG_Tworek_Leerssen_April_2019.pdf" rel="noreferrer noopener" target="_blank"&gt;NetzDG&lt;/a&gt;, aimed at combating Nazi propaganda, hate speech and fake news, or the controversial new European Copyright Directive, which has been welcomed by journalists but severely critiqued by online content creators and platforms as detrimental to user-generated content.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the backdrop of such conflicting interests in online content moderation, it is useful to examine the Guidelines released by MeitY. The first portion of this piece looks at certain specific concerns within the rules, while the second pushes the narrative further to ask what an alternative regulatory framework might look like.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Before we jump to the crux of this discussion, one important disclosure must be made about the underlying ideology of this piece. It would be unrealistic to claim that the internet should be absolutely free from regulation. Swathes of content on child sexual abuse, or terrorist propaganda, or even the hordes of death and rape threats faced by women online, are and should be concerns of civil society. While that is certainly a strong driving force for regulation, this concern should not override basic considerations of human rights (including freedom of expression). These ideas are expanded in the upcoming sections.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Broad, thematic concerns with the Rules&lt;/h3&gt;
&lt;h3 style="text-align: justify; "&gt;A uniform mechanism of compliance&lt;/h3&gt;
&lt;h3 style="text-align: justify; "&gt;Timelines&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Rule 3(8) of the Guidelines mandates intermediaries, prompted by &lt;em&gt;a&lt;/em&gt; &lt;em&gt;court order or a government notification&lt;/em&gt;, to take down content relating to unlawful acts within 24 hours of such notification. In case they fail to do so, the safe harbour applicable to them under section 79 of the Information Technology Act (“the Act”) would cease to apply, and they would be liable. Prior to the amendment, this timeframe was 36 hours.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There is a visible lack of research rationalising a 24-hour compliance timeline as optimal for &lt;em&gt;all&lt;/em&gt; intermediaries, irrespective of the kind of services they provide or the sizes and resources available to them. As the Mozilla Foundation has &lt;a href="https://blog.mozilla.org/netpolicy/2018/07/11/sustainable-policy-solutions-for-illegal-content/" rel="noreferrer noopener" target="_blank"&gt;commented&lt;/a&gt;, regulation of illegal content online simply cannot be done in a one-size-fits-all approach, nor can &lt;a href="https://blog.mozilla.org/netpolicy/2019/04/10/uk_online-harms/" rel="noreferrer noopener" target="_blank"&gt;regulation be made&lt;/a&gt; with only the tech incumbents in mind. While platforms like YouTube can comfortably &lt;a href="https://www.bmjv.de/SharedDocs/Pressemitteilungen/DE/2017/03142017_Monitoring_SozialeNetzwerke.html" rel="noreferrer noopener" target="_blank"&gt;remove&lt;/a&gt; criminally prohibited content within a span of 24 hours, this can still place a large burden on smaller companies, which may not have the necessary resources to comply within this timeframe. A few unintended consequences would arise out of this situation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One, sanctions under the Act, which include both organisational ramifications like website blocking (under section 69A of the Act) and individual liability, would affect smaller intermediaries more than they would the bigger ones. A bigger intermediary like Facebook may be able to withstand a large fine for its failure to control, say, hate speech on its platform. That may not be true for a smaller online marketplace, or even a smaller online social media site targeted towards a very specific community. This compliance mechanism, accordingly, may just go on to strengthen the larger companies and eliminate competition from the smaller ones.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Two, intermediaries, in fear of heavy criminal sanctions, would err on the side of caution. This would mean that decisions on whether a piece of content is illegal would be quicker and less nuanced. It would also mean that legitimate speech would be at risk of censorship, and that intermediaries would pay &lt;a href="https://cis-india.org/internet-governance/intermediary-liability-in-india.pdf" rel="noreferrer noopener" target="_blank"&gt;less heed&lt;/a&gt; to the technical requirements or the correct legal procedures required for content takedown.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Utilization of ‘automated technology’&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Another place where the Guidelines assume that all intermediaries operating in India are on the same footing is Rule 3(9). This mandates these entities to proactively monitor for ‘unlawful content’ on their platforms. Aside from the unconstitutionality of this provision, it also assumes that all intermediaries have the requisite resources to actually set up such a tool and operate it successfully. YouTube’s ContentID, which began in 2007, had already seen a whopping &lt;a href="https://www.blog.google/outreach-initiatives/public-policy/protecting-what-we-love-about-internet-our-efforts-stop-online-piracy/" rel="noreferrer noopener" target="_blank"&gt;100 million dollars of investment by 2018&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Funnily enough, ContentID is a tool exclusively dedicated to finding copyright violations of rights-holders, and even then, it has proven to be far from &lt;a href="https://www.plagiarismtoday.com/2019/01/10/youtubes-copyright-insanity/" rel="noreferrer noopener" target="_blank"&gt;infallible&lt;/a&gt;. The Guidelines’ sweeping net of ‘unlawful’ content includes far more categories than mere violations of IP rights, and the framework assumes that intermediaries would be able to set up and run an automated tool that can filter through &lt;em&gt;all&lt;/em&gt; these categories of ‘unlawful content’ in one go.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;The problems of AI&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Aside the implementation-related concerns, there are also technical challenges related with Rule 3(9). Supervised learning systems (like the one envisaged under the Guidelines) use training data sets for pro-active filtering. This means if the system is taught that for ten instances of A being the input, the output would be B, then for the eleventh time, it sees A, it would give the output B. In the lingo of content filtering, the system would be taught, for example, that nudity is bad. The next time the system encounters nudity in a picture, it would automatically flag it as ‘bad’ and violating the community standards.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="https://www.theguardian.com/technology/2016/sep/08/facebook-mark-zuckerberg-napalm-girl-photo-vietnam-war" rel="noreferrer noopener" target="_blank"&gt;Except, that is not how it should work&lt;/a&gt;. For every post that is under the scrutiny of the platform operators, numerous nuances and contextual cues act as mitigating factors, none of which, at this point, would be&lt;a href="https://scholarship.law.nd.edu/cgi/viewcontent.cgi?referer=https://www.google.co.in/&amp;amp;httpsredir=1&amp;amp;article=1704&amp;amp;context=ndlr" rel="noreferrer noopener" target="_blank"&gt;understandable&lt;/a&gt; by a machine.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Additionally, the training data used to feed the system &lt;a href="https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf" rel="noreferrer noopener" target="_blank"&gt;can be biased&lt;/a&gt;. A self-driving car who is fed training data from only one region of the country would learn the customs and driving norms of that particular region, and not the patterns that apply across the intended purpose of driving throughout the country.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Lastly, it is not disputed that bias would be completely eliminated in case the content moderation was undertaken by a human. However, the difference between a human moderator and an automated one, would be that there would be a measure of accountability in the first one. The decision of the human moderator can be disputed, and the moderator would have a chance to explain his reasons for the removal. Artificial intelligence (“AI”) is identified by the algorithmic ‘&lt;a href="http://raley.english.ucsb.edu/wp-content/Engl800/Pasquale-blackbox.pdf" rel="noreferrer noopener" target="_blank"&gt;black box&lt;/a&gt;’ that processes inputs, and generates usable outputs. Implementing workable accountability standards for this system, including figuring out appeal and grievance redressal mechanisms in cases of dispute, are all problems that the regulator must concern itself with.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the absence of any clarity or revision, it seems unlikely that the provision would actually ever see full implementation. Neither would the intermediaries know what kind of ‘automated technology’ they are supposed to use for filtering ‘unlawful content’, nor would there be any incentives for them to actually deploy this system effectively for their platforms.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;What can be done?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;First, more research is needed to understand the effect of compliance timeframes on the accuracy of content takedown. Several jurisdictions are operating now on different timeframes of compliance, and it would be a far more holistic regulation should the government consider the dialogue around each of them and see what it means for India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Second, it might be useful to consider the concept of an independent regulator as an alternative and as a compromise between pure governmental regulation (which is more or less what the system is) or self-regulation (which the Guidelines, albeit problematically, also espouse through Rule 3(9)).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The &lt;a href="https://www.gov.uk/government/consultations/online-harms-white-paper" rel="noreferrer noopener" target="_blank"&gt;UK White Paper on Harms&lt;/a&gt;, a piece of important document in the system of liability overhaul, proposes an arms-length regulator who would be responsible for drafting codes of conduct for online companies and responsible for their enforcement. While the exact merits of the system is still up for debate, the concept of having a separate body to oversee, formulate and also possibly&lt;a href="https://medium.com/adventures-in-consumer-technology/regulating-social-media-a-policy-proposal-a2a25627c210" rel="noreferrer noopener" target="_blank"&gt;arbitrate&lt;/a&gt; disputes regarding content removal, is finding traction in several parallel developments.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One of the Transatlantic Working Group Sessions seem to discuss this idea in terms of having an ‘&lt;a href="https://medium.com/whither-news/proposals-for-reasonable-technology-regulation-and-an-internet-court-58ac99bec420" rel="noreferrer noopener" target="_blank"&gt;internet court&lt;/a&gt;’ for illegal content regulation. This would have the noted advantage of a) formulating norms of online content in a transparent, public fashion, something previously done behind closed doors of either the government or the tech incumbents and b) having specially trained professionals who would be able to dispose of matters in an expeditious manner.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;India is not unfamiliar to the idea of specialized tribunals, or quasi-judicial bodies for dealing with specific challenges. In 2015, for example, the Government of India passed the Commercial Courts Act, by which specific courts were tasked to deal with matters of very large value. This is neither an isolated instance of the government choosing to create new bodies for dealing with a specific problem, nor would it be inimitable in the future.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There is no&lt;a href="https://www.thehindubusinessline.com/opinion/resurrecting-the-marketplace-of-ideas/article26313605.ece" rel="noreferrer noopener" target="_blank"&gt; silver bullet&lt;/a&gt; when it comes to moderation of content on the web. However, in light of these parallel convergence of ideas, the appeal of an independent regulatory system as a sane compromise between complete government control and &lt;em&gt;laissez-faire&lt;/em&gt;autonomy, is worth considering.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india'&gt;https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>torsha</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Intermediary Liability</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-08-16T01:49:47Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward">
    <title>Emerging AI technology in health care in India, health equity and justice: Critical reflections and charting out way forward</title>
    <link>https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward</link>
    <description>
        &lt;b&gt;On July 13, 2019, Radhika Radhakrishnan participated in a roundtable discussion on "Emerging AI technology in health care in India, health equity and justice: Critical reflections and charting out way forward." The event was organized by HEaL (Health, Ethics, and Law Institute of Training, Research and Advocacy) of FMES (Forum for Medical Ethics Society) in collaboration with CPS (Centre for Policy Studies), Indian Institute of Technology-Bombay.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Radhika chaired a session on the ethics of AI in healthcare in India,       and my main submissions included: the medicalization of and       experimentation on women's bodies under a medical-industrial       complex for the design of AI-based healthcare models, and FAT       (Fairness, Accountability, Transparency) concerns with AI. She was also invited to draft some of this content into a       paper submission to the &lt;a href="https://ijme.in/"&gt;Indian Journal of Medical Ethics&lt;/a&gt; which is a peer-reviewed and indexed academic journal run by FMES.&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward'&gt;https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-21T15:47:27Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake">
    <title>Deepfakes: Algorithms at war, trust at stake</title>
    <link>https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake</link>
    <description>
        &lt;b&gt;A case in point is the video that surfaced of an Indian journalist not so long ago.&lt;/b&gt;
        &lt;p&gt;The article by Rajmohan Sudhakar was published in &lt;a class="external-link" href="https://www.deccanherald.com/metrolife/metrolife-on-the-move/deepfakes-algorithms-at-war-trust-at-stake-747042.html"&gt;Deccan Herald&lt;/a&gt; on July 14, 2019. Elonnai Hickok was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Now machines are learning to manipulate imagery. That is a real worry. Deepfakes for instance. They are AI-manipulated videos achieved by machine learning. Products of the humongous volume of images and videos now available online.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The danger is, this imagery could be yours or mine. Imagine artificial intelligence of neural networks creating convincing identities of our real counterparts, and starts posting videos. Absurd.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Society has grappled with spurious and specious content in media over time. Media has been modified for various reasons, usually by those with access to significant resources and influence in the past,” says Elonnai Hickok, COO of the Bengaluru-based Centre for Internet and Society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;From an AI and machine learning perspective, deepfakes could be understood by what is known as GAN -- generative adversarial networks, essentially two algorithms at war. One is a generator, the other a discriminator. They compete with each other based on set inputs, in time bettering the version they together help create. These are behind what are now known as deepfakes of popular figures floating around online. Barack Obama is seen saying in a purported deepfake, “stay woke bitches”, which of course he did not say.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another deepfake has Mark Zuckerberg boasting: “I have total control of billions of people’s stolen data, all their secrets, their lives, their futures.” “Deepfakes are media modified by current technology and techniques. Easy availability of technology and media allows anyone to create, tailor or manipulate media for their own ends. Deepfakes present an opportunity for introspection and research into the contours of freedom of expression as well as societal frameworks for dealing with fake content,” explains Hickok.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One of the horrid instances of a deepfake-like attack was the video that surfaced of an Indian woman journalist not so long ago. Or the child-kidnapping rumours that spread through WhatsApp and the subsequent mob lynchings. However, there’s the view that in post-truth times, deepfakes would be seen with caution in the inherent dilemma over believing what one views online.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“In India, people do not take these so seriously, especially on social media. It is mostly entertainment for many. Now, we are seeing people with diametrically opposing views. They often view content which they like to see. It would rather work as a reinforcer of views than a transformer,” feels political analyst Sandeep Shastri.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Open source software can create basic deepfakes if someone wanted to hurt somebody. The potential scale of danger and damage looms larger for influential figures and nations at war.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“While deep fakes can be used to damage societies, it is important that collectively society takes steps to become sensitised to ways that media can be used to manipulate opinions and choices, and allow people to develop skills that build awareness and context to what they see and believe,” adds Hickok.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A video emerged recently of an ‘Iranian’ boat near an attacked oil tanker in the Persian Gulf. Deepfake or not, the authenticity of the video was questionable. If used wily, it could have triggered a war.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to Hickok, society has to get more resilient to manipulation. “This includes spoken, written, seen as well as heard information. We have to learn to question the basis on which we confirm trust. Multiple forms of verification may help to address spurious media and information,” she says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Deepfakes are no surprise as social media feed into the small and large divisions and differences of multitudes. Emergence of such potentially dangerous AIs isn’t taken quite seriously by the tech czars. In fact, it is a matter of economy for them.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Oscar Schwartz writes in The Guardian that ‘technological solutionism’ in the ‘attention economy’ may not be the real approach. “And herein lies the problem: by formulating deepfakes as a technological problem, we allow social media platforms to promote technological solutions to those problems – cleverly distracting the public from the idea that there may be more fundamental problems with powerful Silicon Valley tech platforms,” Schwartz warns.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“The measures do not fall on the regulators alone. I think, individuals (by introspection and building awareness), society (through education), the legal system (stringent evidentiary requirements and capacity building) industry (differentiating recreational and prejudicial content, tagging content that is manipulated, etc.) and regulators (enabling accountability, oversight, transparency and redress) can all contribute to a more resilient society,” observes Hickok.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In India, viewing a video is still considered close to truth, almost sacred by the vast majority. Necessarily, it would not require a technologically advanced deepfake, especially in the backward rural pockets, to rile up and aggravate biases and prejudices.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Deepfakes can further existing biases and manipulate opinions and choices. They can disrupt trust inherent in societal groups to co-exist and politically, they can breed distrust in leadership and capability. That said, deepfakes can be used for humour and satire. Ultimately, the impact will be shaped by a number of factors including pre-existing biases, individual response, etc.,” Hickok elaborates.&lt;/p&gt;
&lt;p&gt;On a lighter note, deepfakes could be helpful too. We could very well do away with some of our television news presenters.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake'&gt;https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Rajmohan Sudhakar</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-21T15:42:12Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective">
    <title>What is the problem with ‘Ethical AI’? An Indian Perspective</title>
    <link>https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective</link>
    <description>
        &lt;b&gt;On 22 May 2019, the OECD member countries adopted the OECD Council Recommendation on Artificial Intelligence. The Principles, meant to provide an “ethical framework” for governing Artificial Intelligence (AI), were the first set of guidelines signed by multiple governments, including non-OECD members: Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Arindrajit Basu and Pranav M.B. was &lt;a class="external-link" href="https://cyberbrics.info/what-is-the-problem-with-ethical-ai-an-indian-perspective/"&gt;published by cyberBRICS&lt;/a&gt; on July 17, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;This was followed by the &lt;a href="https://g20trade-digital.go.jp/dl/Ministerial_Statement_on_Trade_and_Digital_Economy.pdf" rel="noreferrer noopener" target="_blank"&gt;G20 adopted human-centred AI Principles&lt;/a&gt; on June 9th. These are the latest in a slew of (&lt;a href="https://clinic.cyber.harvard.edu/2019/06/07/introducing-the-principled-artificial-intelligence-project/" rel="noreferrer noopener" target="_blank"&gt;at least 32!&lt;/a&gt;) public, and private ‘Ethical AI’ initiatives that seek to use ethics to guide the development, deployment and use of AI in a variety of use cases. They were conceived as a response to a range of concerns around algorithmic decision-making, including discrimination, privacy, and transparency in the decision-making process.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In India, a noteworthy recent document that attempts to address these concerns is the &lt;a href="https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf" rel="noreferrer noopener" target="_blank"&gt;National Strategy for Artificial Intelligence&lt;/a&gt; published by the National Institution for Transforming India, also called &lt;em&gt;NITI Aayog&lt;/em&gt;, in June 2018. As the NITI Aayog Discussion paper acknowledges, India is the fastest growing economy with the second largest population in the world and has a significant stake in understanding and taking advantage of the AI revolution. For these reasons the goal pursued by the strategy is to establish the National Program on AI, with a view to guiding the research and development in new and emerging technologies, while addressing questions on ethics, privacy and security.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While such initiatives and policy measures are critical to promulgating discourse and focussing awareness on the broad socio-economic impacts of AI, we fear that they are dangerously conflating tenets of existing legal principles and frameworks, such as human rights and constitutional law, with ethical principles – thereby diluting the scope of the former. While we agree that ethics and law can co-exist, ‘Ethical AI’ principles are often drafted in a manner that posits as voluntary positive obligations various actors have taken upon themselves as opposed to legal codes they necessarily have to comply with.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;To have optimal impact, ‘Ethical AI’ should serve as a decision-making framework only in specific instances when human rights and constitutional law do not provide a ready and available answer.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Vague and unactionable&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Conceptually, ‘Ethical AI’ is a vague set of principles that are often difficult to define objectively. In this perspective, academics like Brett Mittelstadt of the Oxford Internet Institute &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3391293" rel="noreferrer noopener" target="_blank"&gt;argues&lt;/a&gt; that unlike in the field of medicine – where ethics has been used to design a professional code, ethics in AI suffers from four core flaws. First, developers lack a common aim or fiduciary duty to a consumer, which in the case of medicine is the health and well-being of the patient. Their primary duty lies to the company or institution that pays their bills, which often prevents them from realizing the extent of the moral obligation they owe to the consumer.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second is a lack of professional history which can help clarify the contours of well-defined norms of ‘good behaviour.’ In medicine, ethical principles can be applied to specific contexts by considering what similarly placed medical practitioners did in analogous past scenarios. Given the relative nascent emergence of AI solutions, similar professional codes are yet to develop.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Third is the absence of workable methods or sustained discourse on how these principles may be translated into practice. Fourth, and we believe most importantly, in addition to ethical codes, medicine is governed by a robust and stringent legal framework and strict legal and accountability mechanisms, which are absent in the case of ‘Ethical AI’. This absence gives both developers and policy-makers large room for manoeuvre.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, such focus on ethics may be a means of avoiding government regulation and the arm of the law. Indeed, due to its inherent flexibility and non-binding nature, ethics can be exploited as a piecemeal red herring solution to the problems posed by AI. Controllers of AI development are often profit-driven private entities, that gain reputational mileage by using the opportunity to extensively deliberate on broad ethical notions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Under the guise of meaningful ‘self-regulation’, several organisations publish internal ‘Ethical AI’ guidelines and principles, and &lt;a href="https://www.newstatesman.com/science-tech/technology/2019/06/how-big-tech-funds-debate-ai-ethics"&gt;fund ethics research&lt;/a&gt; across the globe. In doing so, they occlude the shackles of binding obligation and deflect from attempts at tangible regulation.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Comparing Law to Ethics&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;This is in contrast to the well-defined jurisprudence that human rights and constitutional law offer, which should serve as the edifice of data-driven decision making in any context.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the table below, we try to explain this point by looking at how three core fundamental rights enshrined both in our constitution and human rights instruments across the globe-right to privacy, right to equality/right against discrimination and due process-find themselves captured in three different sets of ‘Ethical AI frameworks.’ One of these inter-governmental &lt;a href="https://www.oecd.org/going-digital/ai/principles/" rel="noreferrer noopener" target="_blank"&gt;(OECD)&lt;/a&gt;, one devised by a private sector actor (‘&lt;a href="https://ai.google/principles/" rel="noreferrer noopener" target="_blank"&gt;Google AI&lt;/a&gt;’) and one by our very own, &lt;a href="https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf" rel="noreferrer noopener" target="_blank"&gt;NITI AAYOG.&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;img src="https://cyberbrics.info/wp-content/uploads/2019/07/image.png" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;With the exception of certain principles,most ‘Ethical AI’ principles are loosely worded as ‘‘seek to avoid’, ‘give opportunity for’, or ‘encourage’. A notable exception is the NITI AAYOG’s approach to protecting privacy in the context of AI. The document explicitly recommends the establishment of a national data protection framework for data protection, sectoral regulations that apply to specific contexts with the consideration of international standards such as GDPR as benchmarks. However, it fails to reference available constitutional standards when it discusses bias or explainability.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Several similar legal rules that have been enshrined in legal provisions -outlined and elucidated through years of case law and academic discourse – can be utilised to underscore and guide AI principles. However, existing AI principles do not adequately articulate how the legal rule can actually be applied to various scenarios by multiple organisations.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We do not need a new “Law of Artificial Intelligence” to regulate this space. Judge Frank Easterbrook’s famous 1996 proclamation on the &lt;a href="https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?referer=&amp;amp;httpsredir=1&amp;amp;article=2147&amp;amp;context=journal_articles"&gt;‘Law of the Horse’&lt;/a&gt; through which he opposed the creation of a niche field of ‘cyberspace law’ comes to mind. He argued that a multitude of legal rules deal with ‘horses’, including the sale of horses, individuals kicked by horses, and with the licensing and racing of horses. Like with cyberspace, any attempt to arrive at a corpus of specialised ‘law of the horse’ would be shallow and ineffective.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Instead of fidgeting around for the next shiny regulatory tool, industry, practitioners, civil society and policy makers need to get back to the drawing board and think about applying the rich corpus of existing jurisprudence to AI governance.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;What is the role for ‘Ethical AI?’&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;What role can ‘ethical AI’ then play in forging robust and equitable governance of Artificial Intelligence? As it does in all other societal avenues, ‘ethical AI’ should serve as a framework for making legitimate algorithmic decisions in instances where law might not have an answer. An example of such a scenario is the &lt;a href="https://globalnews.ca/news/4125382/google-pentagon-ai-project-maven/" rel="noreferrer noopener" target="_blank"&gt;Project Maven saga&lt;/a&gt; – where 3,000 Google employees signed a petition opposing Google’s involvement with a US Department of Defense project by claiming that Google should not be involved in “the business of war.” There is no law-international or domestic that suggests that Project Maven-which was designed to study battlefield imagery using AI, was illegal. However, the debate at Google proceeded on ethical grounds and on the application of the ‘Ethical AI’ principles to this present context.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We realise the importance of social norms and mores in carving out any regulatory space. We also appreciate the role of ethics in framing these norms for responsible behaviour. However, discourse across civil society, academic, industry and government circles all across the globe needs to bring law back into the discussion as a framing device. Not doing so risks diluting the debate and potential progress to a set of broad, unactionable principles that can easily be manipulated for private gain at the cost of public welfare.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective'&gt;https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Arindrajit Basu and Pranav M.B.</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-21T14:57:08Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier">
    <title>Roundtable Discussion on “The Future of AI Policy in India” @ ICRIER</title>
    <link>https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier</link>
    <description>
        &lt;b&gt;Radhika Radhakrishnan attended a Roundtable Discussion on “The Future of AI Policy in India” organized by the Indian Council for Research on International Economic Relations (ICRIER) in New Delhi on July 1, 2019, to arrive at actionable recommendations for the promotion of AI in India.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Radhika's inputs primarily focused on - capacity and skilling for AI adoption in India, sectoral opportunities for the adoption of AI, regulation of explanations for AI, fairness and bias in AI models, and actionable recommendations for government priorites for AI policies in India.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Concept Note&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;India’s Artificial Intelligence moment is truly here and now. At a time when a diverse range of applications based on AI are being developed, pushing its frontier further into uncharted realms of business and society, Indian policy makers are contemplating not just AI’s potential for growth and social transformation, but also its proclivity to create divides and inequality. Our study attempts to understand the impacts of AI and trace the pathways to realizing it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;AI’s transformational potential stems from its ability to lend itself to a diverse range of applications across a range of sectors. One can witness AI based applications in traditional spheres of manufacturing, which are transforming quality control, production lines, and supply chain management, and in services, which are creating personalized product offerings and high-quality customer engagement. AI applications are also common in sectors such as agriculture that have taken a back seat in technological innovations in the post-industrial world. AI also demonstrates potential for impacting developmental challenges by responding to societies’ immediate demand for healthcare, education and expanding access to finance and banking.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The consequences of AI diffusion stem from AI’s pervasiveness across society, its ability to trigger innovation, and its tendencies to undergo transformation and evolution. These are typical characteristics of a class of technologies that can be found across history, the emergence and diffusion of which have enabled the wealth of nations. These are called General Purpose Technologies (GPTs). Technologies such as the steam engine, electricity, computers, semiconductors, and, more recently, the Internet can all be conceived as belonging to the GPT class of technologies. Our study is based on the understanding that the implications of AI can be best understood by viewing AI as a GPT.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Historically, the economic impacts of GPTs have not been immediate but follow after their diffusion across the economy, i.e. over a period of time. There are two reasons that explain this phenomenon: firstly, in the early phases of technology diffusion, an economy diverts part of its resources from productive activities to costly activities aimed at enabling the GPT. For instance, organizations adopting computers must also invest in training employees or hiring computer scientists, and rearrange production activities or organizational structures to accommodate computer-driven workflows, all of which are costly economic activities. Secondly, it is only after the GPT is diffused and widely used in the economy that the statistics measuring GDP start counting and fully measuring the GPT.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Empirical research on GPTs such as AI, including ours, means confronting the challenge of measurement. Estimates of the economic impact of AI are bound to be imprecise because data on AI’s adoption is not available or adequately reflected in the data used to compute economic growth, at least not yet. Measuring the economic impact of AI is also difficult because of the magnitude of indirect effects on productivity that GPTs trigger. It is therefore not uncommon that studies on GPTs, while attempting to estimate their economic impacts, also engage in in-depth case studies and historical analysis of their impacts.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Our findings show unambiguous and positive impacts of AI on firm-level productivity across sectors, although there is variation in the magnitude of positive impacts across sectors. We complement our findings with case studies that cover different firms developing AI based applications across a range of sectors to understand the underlying firm-level capabilities that drive innovations in AI based applications. Our study leads us towards high-level policy challenges facing organizations, civil society and government, which, when addressed, enable the full realization of economic growth triggered by AI.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, our conclusions are a step away from actionable policy recommendations. Given your experience with and within India’s AI based ecosystem, we invite you to deliberate and recommend insights and strategies that can help us arrive at concrete and practicable policy recommendations towards achieving a growth- and welfare-enhancing AI-based ecosystem in India.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Proposed Questions for Deliberation&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span&gt;In which sectors do we observe an immediate opportunity for the adoption of AI? What could be the nature of these applications?&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;In which areas of AI development and application is there an immediate opportunity for governments, industry and academia to collaborate?&lt;/li&gt;
&lt;li&gt;What should be the Government’s top five priorities in the next one year to catalyse the growth of AI in India?&lt;/li&gt;
&lt;li&gt;Which agencies of the Government should be involved in implementing India’s National AI mission, and how?&lt;/li&gt;
&lt;li&gt;What aspects of the Government’s capacity require enhancement to adapt to the challenges of a growing Indian AI based ecosystem?&lt;/li&gt;
&lt;li&gt;What measures can the Government take to regulate for AI safety and ethical use of AI?&lt;/li&gt;
&lt;li&gt;What are the policy measures that the Government can undertake to safeguard against the consequences of AI based inequality?&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier'&gt;https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-10T01:46:36Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond">
    <title>Fostering Strategic Convergence in US-India Tech Relations: 5G and Beyond</title>
    <link>https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond</link>
    <description>
        &lt;b&gt;The 2019 G-20 summit underscores the importance of fostering strategic convergence in U.S.-India tech relations.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Justin Sherman and Arindrajit Basu was &lt;a class="external-link" href="https://thediplomat.com/2019/07/fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond/"&gt;published in the Diplomat&lt;/a&gt; on July 3, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;As world leaders gathered for the G-20 summit in Osaka, Japan this past weekend, a multitude of issues from climate to trade to technology came to the fore. Much of the focus was on U.S.-China interactions at the summit, as the two nations are locked in both a trade war and broader technological and geopolitical competition. Despite the present focus on the U.S. and China, however, it is crucial not to overlook another bilateral relationship of ever-growing importance in the process: the tech relationship between the United States and India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Certainly, the two countries have many disagreements on some technology issues. But this is a geopolitical relationship that is both strategically important for each country, and a vital opportunity for the two largest democracies in the world to collectively combat Chinese-style digital authoritarianism.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Huawei and 5G&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;First, with respect to national security and 5G roll-outs, the U.S. and India are not on the same page. The United States, for several months now, has been on a &lt;a href="https://www.lawfareblog.com/confused-us-messaging-campaign-huawei" target="_blank"&gt;diplomatic messaging tour&lt;/a&gt; of the world to try to convince — with great resistance (some would argue failure) — allies, partners, and potential partners alike to ban Chinese firm Huawei from supplying components of 5G networks. Many officials across Europe, the Middle East, South America, and elsewhere have been reluctant to ban Huawei per the U.S. recommendation, and India is no exception. Indeed, National Security Advisory Board Chairman P.S. Raghavan &lt;a href="https://www.thehindu.com/news/national/on-5g-and-data-india-stands-with-developing-world-not-us-japan-at-g20/article28207169.ece/amp/?__twitter_impression=true" target="_blank"&gt;told&lt;/a&gt; &lt;em&gt;The Hindu&lt;/em&gt; that “5G is becoming a fault line in the technology cold war between world powers” and that India must avoid getting caught in these fault lines.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In large part, U.S. diplomatic messaging here has fallen short due to &lt;a href="https://www.lawfareblog.com/confused-us-messaging-campaign-huawei" target="_blank"&gt;heavy conflations&lt;/a&gt; of national security- and trade-related risks; and Trump only contributed further to this fact with his latest &lt;a href="https://twitter.com/JenniferJJacobs/status/1145072073800183808" target="_blank"&gt;reference&lt;/a&gt; to Huawei, during the G-20, as a potential trade war bargaining chip. The sheer population of India, however, combined with its fast growing technology sectors and &lt;a href="http://www.cmai.asia/digitalindia/" target="_blank"&gt;desire to digitize&lt;/a&gt;, makes the country an important market player when it comes to the 5G revolution. U.S.-India engagement on 5G issues must be managed effectively through robust articulation of each country’s national interests underscored by a clean segregation of trade and security questions in the discussion. This partnership has the potential to wield great influence in the global market, including in ways that could prioritize or deprioritize certain 5G equipment suppliers (like Huawei).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Data Sovereignty and Data Privacy&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Data sovereignty is another hot area in which the U.S.-India tech relationship demands careful negotiation. Over the past year, the Indian government has &lt;a href="https://twitter.com/cis_india/status/1143096429298085889" target="_blank"&gt;introduced a range of policy instruments&lt;/a&gt; which dictate that certain kinds of data must be stored in servers located physically within India — termed “&lt;a href="https://cis-india.org/internet-governance/resources/the-localisation-gambit.pdf" target="_blank"&gt;data localization&lt;/a&gt;.” While there are &lt;a href="https://cis-india.org/internet-governance/resources/the-localisation-gambit.pdf" target="_blank"&gt;a number of policy objectives&lt;/a&gt; this gambit ostensibly seeks to serve, the two which stand out are (1) the presently cumbersome process for Indian law enforcement agencies to access data stored in the U.S. during criminal investigations, and (2) extractive economic models used by U.S. companies operating in India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A range of conflicting developments emerging from the G-20 summit underscore this fact. India, along with the BRICS grouping, &lt;a href="https://mea.gov.in/bilateral-documents.htm?dtl/31506/Joint+Statement+on+BRICS+Leaders+Informal+Meeting+on+the+margins+of+G20+Summit" target="_blank"&gt;focused&lt;/a&gt; on the development dimensions of data governance and re-emphasized the need for &lt;a href="https://www.youtube.com/watch?v=0a8YsZQ0F6k&amp;amp;feature=youtu.be" target="_blank"&gt;data sovereignty&lt;/a&gt; — broadly understood as the sovereign right of nations to govern data in their national interest for the welfare of their citizens. President Trump &lt;a href="https://www.whitehouse.gov/briefings-statements/remarks-president-trump-g20-leaders-special-event-digital-economy-osaka-japan/" target="_blank"&gt;reined in his focus&lt;/a&gt; on the need for cross-border data flows and, in direct opposition to some proposals that have emerged from India, explicitly opposed data localization. While India did not sign the &lt;a href="https://www.international.gc.ca/world-monde/international_relations-relations_internationales/g20/2019-06-29-g20_declaration-declaration_g20.aspx?lang=eng" target="_blank"&gt;Osaka Declaration on the Digital Economy&lt;/a&gt; that promoted cross-border data flows, the importance of cross-border data flows in spurring the global economy did find its way into the &lt;a href="https://g20.org/pdf/documents/en/FINAL_G20_Osaka_Leaders_Declaration.pdf" target="_blank"&gt;Final G-20 Leaders Declaration&lt;/a&gt; — which, of course, both countries signed.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Geopolitically, the importance of India’s data governance stance cannot be overstated as it could pave the way for the approach adopted by other emerging economies — most notably the BRICS countries. Likewise, the U.S. has important thinking to do around such questions as what shape a national data privacy law could take. Even though the two countries’ views on data may be quite different from one another, the seats that India and the U.S. have at the table for &lt;a href="https://www.theatlantic.com/international/archive/2019/06/g20-data/592606/" target="_blank"&gt;global data governance discussions&lt;/a&gt; — alongside others like Japan, China, and the European Union — underscore the value of meaningful interactions and mutual trust and respect on this issue.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Norms for a Democratic Digital Future&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Finally, as the &lt;a href="https://www.un.org/disarmament/ict-security/" target="_blank"&gt;United Nations Group of Governmental Experts&lt;/a&gt; and the &lt;a href="https://www.un.org/disarmament/open-ended-working-group/" target="_blank"&gt;Open-Ended Working Group&lt;/a&gt; meet to resurrect the norm-formulation process for fostering responsible state behavior in cyberspace, India has some homework to do. Even though it has been a member of five out of the six Groups of Governmental Experts set up thus far, India is yet to come out with a public statement delineating its views on the applicability of international law in cyberspace. Further, India has also failed to articulate a cohesive digital strategy — instead relying on a patchwork of hastily rolled out and often ill-conceived regulatory policies, some of which commentators in the West &lt;a href="https://www.nytimes.com/2019/02/14/technology/india-internet-censorship.html" target="_blank"&gt;have hastily labeled&lt;/a&gt; as digital authoritarianism. The U.S., for its part, amidst a &lt;a href="https://www.newamerica.org/cybersecurity-initiative/c2b/c2b-log/four-opportunities-for-states-new-cyber-bureau/" target="_blank"&gt;cutback&lt;/a&gt; to diplomatic cyber engagement (as part of cutbacks to diplomacy writ large), could also up its support of international engagement on these issues. Its recent repeal of net neutrality protections could also be argued as a step back from long-time international &lt;a href="https://d1y8sb8igg2f8e.cloudfront.net/documents/The_Idealized_Internet_vs._Internet_Realities_Version_1.0_2018-07-25_203930.pdf" target="_blank"&gt;norm promotion&lt;/a&gt; around internet openness.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Through a combination of domestic policy gambits and foreign policy maneuvers, both states need to draw lines in the sand that safeguard human rights, international law, and democracy online, while arriving at some balance with each other’s national interests.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A primary example lies with artificial intelligence (AI). AI has found increasing use in digital authoritarianism, as dictators use automated, intelligent systems to boost their surveillance capabilities. The Chinese government has arguably been at the &lt;a href="https://freedomhouse.org/report/freedom-net/freedom-net-2018" target="_blank"&gt;forefront&lt;/a&gt; of this enhanced level of authoritarian rule for the digital age.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In addition to &lt;a href="https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinese-government-outlines-ai-ambitions-through-2020/" target="_blank"&gt;focusing&lt;/a&gt; on AI applications for everything from natural language processing to self-driving cars — through investments, strategies, policy documents, and so on — Beijing has also been &lt;a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html" target="_blank"&gt;deploying&lt;/a&gt; AI in the service of large-scale human-rights abuses. Chinese strategy papers on AI, while similarly emphasizing many commercial or benign applications and raising attention to such issues as algorithmic fairness, concurrently have &lt;a href="https://www.newamerica.org/cybersecurity-initiative/digichina/blog/online-symposium-chinese-thinking-ai-security-comparative-context/" target="_blank"&gt;discussed&lt;/a&gt; using AI for “social governance,” censorship, and surveillance. To combat the rising intersection of AI and digital authoritarianism, the U.S. and India could wield enormous leverage — as the two largest democracies in the world — in governing these technologies in a democratic fashion that counters &lt;a href="https://www.newamerica.org/cybersecurity-initiative/reports/essay-reframing-the-us-china-ai-arms-race/" target="_blank"&gt;dangerous arms-race narratives&lt;/a&gt; and uses of AI for surveillance and repression.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The same goes for paying attention to technology exports and diffusion to human-rights abusers. For instance, companies incorporated in China, among those incorporated elsewhere, have been &lt;a href="https://www.cfr.org/blog/authoritarians-are-exporting-surveillance-tech-and-it-their-vision-internet" target="_blank"&gt;heavily involved&lt;/a&gt; in exports of dual-use surveillance technologies to other countries, including those with questionable or outright poor human-rights records. Although companies incorporated in democracies may engage in such practices as well, most democracies take steps to curtail these practices as much as possible, such as through the multilateral Wassenaar Arrangement — which lays out export controls around conventional weapons and dual-use goods and technologies. The U.S. has long been a party to this agreement, and India &lt;a href="https://economictimes.indiatimes.com/news/defence/wassenaar-arrangement-decides-to-make-india-its-member/articleshow/61975192.cms?from=mdr" target="_blank"&gt;officially joined&lt;/a&gt; in 2018. Arguments persist about the extent to which Beijing is involved in these dual-use surveillance technology exports, but these exports may only increase going forward as companies &lt;a href="https://www.newamerica.org/weekly/edition-254/long-view-digital-authoritarianism/" target="_blank"&gt;increasingly&lt;/a&gt; sell not just internet surveillance tools but also dual-use AI tools. In this way, too, India and the U.S. could play an important role in countering the spread of such capabilities to human-rights abusers and standing against the spread of digital authoritarianism in the process.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The relationship here is, therefore, one that requires careful navigation for its significant geopolitical, economic, and ideological consequences. The future of the technological relationship between the world’s largest democracies — and the extent to which they respect each other’s strategic autonomy while converging on issues of mutual interest — could determine the future of global digital governance.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond'&gt;https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Justin Sherman and Arindrajit Basu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Telecom</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-05T02:19:09Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/ai-for-good-workshop">
    <title>AI for Good Workshop</title>
    <link>https://cis-india.org/internet-governance/news/ai-for-good-workshop</link>
    <description>
        &lt;b&gt;Pranav Manjesh Bidare attended a workshop on AI for Good, organised by Swissnex India and Wadhwani AI, in Bangalore on May 22, 2019.&lt;/b&gt;
        &lt;p&gt;The workshop was a forerunner to the &lt;a class="external-link" href="https://aiforgood.itu.int/"&gt;AI for Good Global Summit&lt;/a&gt;. More recommendations can be made at  &lt;a class="moz-txt-link-freetext" href="https://www.policykitchen.com/group/19/stream"&gt;https://www.policykitchen.com/group/19/stream&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/ai-for-good-workshop'&gt;https://cis-india.org/internet-governance/news/ai-for-good-workshop&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-06-05T14:47:27Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china">
    <title>MWC19 Shanghai AI and Trust in APAC and China</title>
    <link>https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china</link>
    <description>
        &lt;b&gt;Sunil Abraham will be making a presentation at the summit on AI and Trust in APAC and China at MWC19 Shanghai on June 27, 2019. Sunil has been invited as a speaker on the panel ‘Framing AI for Digital Upstarts’.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;MWC Shanghai is a three-day conference and exhibition bringing together over 200 AI business leaders, 65,000 attendees, and 550 companies from across different industries and perspectives to address business and technical concerns in the Intelligent Connectivity era and debate tough problems for today and tomorrow. More &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/mwc19-shanghai-ai-and-trust-in-apac-and-china"&gt;info here&lt;/a&gt;. For event details &lt;a class="external-link" href="https://www.mwcshanghai.com/session/ai-trust-in-apac-and-china/"&gt;see this page&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china'&gt;https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-06-05T07:10:50Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
