<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">
<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 31 to 45.</description>
  
  
  
  
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/future-tech-and-future-law"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/oxford-human-rights-hub-arindrajit-basu-october-23-2018-discrimination-in-the-age-of-artificial-intelligence"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake"/>
        
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad">
    <title>New intermediary guidelines: The good and the bad</title>
    <link>https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad</link>
    <description>
        &lt;b&gt;In pursuance of the government releasing the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, this blogpost offers a quick rundown of some of the changes brought about by the Rules, and how they line up with existing best practices in content moderation, among others.&lt;/b&gt;
        
&lt;p&gt;This article originally appeared in the Down to Earth &lt;a class="external-link" href="https://www.downtoearth.org.in/blog/governance/new-intermediary-guidelines-the-good-and-the-bad-75693"&gt;magazine&lt;/a&gt;. Reposted with permission.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;The Government of India has notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These rules supersede the existing intermediary liability rules under the Information Technology (IT) Act, notified back in 2011.&lt;/p&gt;
&lt;p&gt;These IL rules would have a significant impact on our relationships with internet ‘intermediaries’, i.e. gatekeepers and gateways to the internet, including social media platforms and communication and messaging channels.&lt;/p&gt;
&lt;p&gt;The rules also make a bid to include entities that have not traditionally been considered ‘intermediaries’ within the law, including curated-content platforms such as Netflix and Amazon Prime as well as digital news publications.&lt;/p&gt;
&lt;p&gt;These rules are a significant step up from the draft version of the amendments floated by the Union government two years ago; in this period, the relationship between governments around the world and major intermediaries has changed significantly.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The insistence of these entities in the past that they are not ‘arbiters of truth’, for instance, has not always held water in their own decision-making.&lt;/p&gt;
&lt;p&gt;Both Twitter and Facebook, for instance, have locked former United States president Donald Trump out of their platforms. Twitter has also resisted fully complying with government censorship requests in India, spilling into an interesting policy tussle between the two entities. It is in the context of these changes, therefore, that we must consider the new rules.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What changed for the good?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One of the immediate standouts of these rules is the more granular way in which they approach the problem of intermediary regulation. The previous draft — and in general the entirety of the law — had continued to treat ‘intermediaries’ as a monolithic entity, entirely definable by section 2(w) of the IT Act, which in turn derived much of its legal language from the EU E-commerce Directive of 2000.&lt;/p&gt;
&lt;p&gt;Intermediaries in the directive were treated more like ‘mere conduits’, or dumb, passive carriers that did not play any active role in the content. While that might have been true of the internet when these laws and rules were first enacted, the internet today looks very different.&lt;/p&gt;
&lt;p&gt;Not only is there a diversification of services offered by these intermediaries, there’s also a significant issue of scale, wielded by a few select players, whether through centralisation or through the sheer size of their user bases. A broad, general mandate would, therefore, miss out on many of these nuances, leading to imperfect regulatory outcomes.&lt;/p&gt;
&lt;p&gt;The new rules, therefore, envisage three types of entities:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;There are the ‘intermediaries’ within the traditional, section 2(w) meaning of the IT Act. This is the broad umbrella term for all entities that fall within the ambit of the rules.&lt;/li&gt;&lt;li&gt;There are the ‘social media intermediaries’ (SMIs): entities that enable online interaction between two or more users.&lt;/li&gt;&lt;li&gt;The rules also identify ‘significant social media intermediaries’ (SSMIs): entities with user thresholds as notified by the Central Government.&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;The levels of obligations vary based on this hierarchy of classification. For instance, an SSMI would be held to a much higher standard of transparency and accountability towards its users. It would have to publish six-monthly transparency reports outlining how it dealt with requests for content removal, how it deployed automated tools to filter content, and so on.&lt;/p&gt;
&lt;p&gt;I have previously argued that transparency reports, when done well, are an excellent way of understanding the breadth of government and social media censorship. Legally mandating them is, then, perhaps a step in the right direction.&lt;/p&gt;
&lt;p&gt;Some other requirements under this transparency principle include giving notice to users whose content has been disabled, allowing them to contest such removal, etc.&lt;/p&gt;
&lt;p&gt;One of the other rules from the older draft that had raised a significant amount of concern was the proactive filtering mandate, under which intermediaries were required to filter for essentially all unlawful content. This was problematic on two counts:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;Developments in machine learning technologies are simply not advanced enough to make this possible, which means there would always be a chance that legitimate and legal content would get censored, leading to a general chilling effect on digital expression.&lt;/li&gt;&lt;li&gt;The technical and financial burden this would impose on intermediaries would have impacted competition in the market.&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;The new rules seem to have lessened this burden: first, by reducing the mandate from being mandatory to being on a best-endeavour basis; and second, by reducing the ambit of ‘unlawful content’ to only include content depicting sexual abuse, child sexual abuse material (CSAM) and duplicates of content that has already been disabled or removed.&lt;/p&gt;
&lt;p&gt;This specificity would be useful for better deployment of such technologies, since previous research has shown that it’s considerably easier to train a machine learning tool on a corpus of CSAM or abuse than on more contextual, subjective matters such as hate speech.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What should go?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;That being said, it is concerning that the new rules choose to bring online curated content platforms (OCCPs) within the ambit of the law, through proposals for a three-tiered self-regulatory body and schedules outlining guidelines for the rating system these entities should deploy.&lt;/p&gt;
&lt;p&gt;In the last two years, several attempts have been made by the Internet and Mobile Association of India (IAMAI), an industry body consisting of representatives of these OCCPs, to bring about a self-regulatory code that fills in the supposed regulatory gap in the Indian law.&lt;/p&gt;
&lt;p&gt;It is not known if these stakeholders were consulted before the enactment of these provisions. Some of this framework would also apply to publishers of digital news portals.&lt;/p&gt;
&lt;p&gt;Notably, this entire chapter was also missing from the old draft, and introducing it in the final form of the law without due public consultation is problematic.&lt;/p&gt;
&lt;p&gt;Part III onwards of the rules, which broadly deals with the regulation of these entities, should therefore be put on hold and opened up for a period of public and stakeholder consultation, to adhere to the true spirit of democratic participation.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The author would like to thank Gurshabad Grover for his editorial suggestions.&amp;nbsp;&lt;/em&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad'&gt;https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>TorShark</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>IT Act</dc:subject>
    
    
        <dc:subject>Intermediary Liability</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Censorship</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2021-03-15T13:52:46Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china">
    <title>MWC19 Shanghai AI and Trust in APAC and China</title>
    <link>https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china</link>
    <description>
        &lt;b&gt;Sunil Abraham will be making a presentation at the summit on AI and Trust in APAC and China at MWC19 Shanghai on June 27, 2019. Sunil has been invited as a speaker on the panel ‘Framing AI for Digital Upstarts’.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;MWC Shanghai is a three-day conference and exhibition bringing together over 200 AI business leaders, 65,000 attendees, and 550 companies from across different industries and perspectives to address business and technical concerns in the Intelligent Connectivity era and debate tough problems for today and tomorrow. More &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/mwc19-shanghai-ai-and-trust-in-apac-and-china"&gt;info here&lt;/a&gt;. For event details &lt;a class="external-link" href="https://www.mwcshanghai.com/session/ai-trust-in-apac-and-china/"&gt;see this page&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china'&gt;https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-06-05T07:10:50Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence">
    <title>International Conference on Justice Education: Legal Implications of Artificial Intelligence</title>
    <link>https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence</link>
    <description>
        &lt;b&gt;Arindrajit Basu attended the International Conference on Justice Education, with the theme "Artificial Intelligence and its Legal Implications", at the Institute of Law, Nirma University. The event was organized by Nirma University in Ahmedabad on March 15 - 16, 2019. Arindrajit was a theme speaker for the panel on Legal Implications of Artificial Intelligence and judged the presentations in the same session.&lt;/b&gt;
        &lt;p&gt;Click to &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/icje-conference-schedule"&gt;read the agenda&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence'&gt;https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-03-20T15:52:29Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light">
    <title>Insult to Kannada shows Google AI in a poor light</title>
    <link>https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light</link>
    <description>
        &lt;b&gt;A Google search for ‘the ugliest language in India’ yielded ‘Kannada’ as the answer late last week, causing widespread outrage.&lt;/b&gt;
        &lt;p&gt;The article by Krupa Joseph was &lt;a class="external-link" href="https://www.deccanherald.com/metrolife/metrolife-your-bond-with-bengaluru/insult-to-kannada-shows-google-ai-in-a-poor-light-995307.html"&gt;published in Deccan Herald&lt;/a&gt; on June 8, 2021. Pranesh Prakash and Shweta Mohandas have been quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Google has since apologised, saying the answer does not reflect its views, but questions still remain about why this happened at all, and who drafted the answer.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“When artificial intelligence gets it wrong, things can go really wrong,” says tech entrepreneur Hari Prasad Nadig, who has worked on Kannada in free and open source software.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Usually, you would expect Google to give an answer based on citings from multiple sources, and at least one or two credible sources. Google’s AI should be good enough not to draw answers from opinionated sources,” he says. Google shouldn’t even try to answer prejudiced questions like this in the first place, and the answer shows how flawed it is, he told Metrolife.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Fallible process&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Pranesh Prakash, Centre for Internet and Society, Bengaluru, says the incident exposes the fallibility of the process by which Google selects its “featured snippets”.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“It is not an opinion that Google or its employees or its algorithms have come up with, but rather an existing opinion that Google wrongly amplified,” he says. It demonstrates that the snippets that Google features as ‘facts’ aren’t necessarily based on facts, he says.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Periodic checks&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Shweta Mohandas, researcher with the Centre for Internet and Society, says Google does not create content, but only provides content that is available on the Internet.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Hence, the biases come from the tags that are then used to train the AI. There should be periodic checks on the data fed into the system,” she says. Such blunders can be prevented if the tags and results are audited periodically, and a mechanism is put in place to enable people to report them, she says.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Who was up to mischief?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The answer was created on a financial services website whose owners aren’t revealing their names. Pavanaja UB, CEO, Vishva Kannada Softech, says the answer was attributed to a website called debt consolidations questions.com — but he was unable to find the post anywhere on the site. “This is a website registered in Russia and it offers questions and answers on many topics. But this particular page could not be found. Maybe it was removed following the outrage,” he says. Pavanaja believes this was a deliberate attempt to upset people. “The website lists no information about the owner and gives no contact details. Even if such a question did exist on the page before, how did it get to the top of the Google search results?” he wonders.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He suggests that someone planted the answer and kept searching for it until it reached the top. “But who would take so much effort?” he says.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Furore and after&lt;/h3&gt;
&lt;p&gt;‘Kannada’ came up as an answer to a query in Google about ‘the ugliest language in India’.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Aravind Limbavali, minister for Kannada and Culture, demanded an apology from Google, and threatened legal action against the company “for maligning the image of our beautiful language.”&lt;/p&gt;
&lt;p&gt;Google removed the answer and issued a statement:&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“We know this is not ideal, but we take swift corrective action when we are made aware of an issue and are continually working to improve our algorithms. Naturally, these are not reflective of the opinions of Google, and we apologise for the misunderstanding and hurting any sentiments.”&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light'&gt;https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Krupa Joseph</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2021-06-26T05:25:38Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes">
    <title>Impact of Industrial Revolution 4.0 - IT and Automotive Sector in India by the Dialogue and FES</title>
    <link>https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes</link>
    <description>
        &lt;b&gt;On August 21, 2019, Aayush Rathi attended a report launch event and focus group discussion on the "Impact of Industrial Revolution 4.0 - IT and Automotive Sector in India". Research conducted by the Dialogue in collaboration with the Friedrich-Ebert-Stiftung (FES) was presented.&lt;/b&gt;
        &lt;p class="moz-quote-pre" style="text-align: justify; "&gt;At CIS, we have previously produced research on the future of work in these sectors. Aayush attended the event to understand how other researchers are approaching the subject of the future of work, in terms of the methodological approach, the questions being asked, and the policy responses being proposed. In what may be treated as validation of our research design, FES and the Dialogue have addressed similar questions and likewise adopted a combined empirical and desk-based approach.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes'&gt;https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Industry 4.0</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Information Technology</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-08-27T00:13:32Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/future-tech-and-future-law">
    <title>Future Tech and Future Law</title>
    <link>https://cis-india.org/internet-governance/news/future-tech-and-future-law</link>
    <description>
        &lt;b&gt;The Dept. of IT &amp; BT, Government of Karnataka organised the 21st edition of Bengaluru Tech Summit from November 29, 2018 to December 1, 2018 at Palace Grounds, Bengaluru. Arindrajit Basu was a speaker at the panel on 'Future Tech and Future Law'.&lt;/b&gt;
        &lt;p class="moz-quote-pre" style="text-align: justify; "&gt;The discussion was moderated by Tanvi Ratna. Arindrajit’s co-panelists were Apar Gupta, Jaideep Reddy and Nilesh Trivedi. During his remarks, he focused on our AI research thus far and our suggestions for AI regulation.&lt;/p&gt;
&lt;p class="moz-quote-pre" style="text-align: justify; "&gt;For more details &lt;a class="external-link" href="https://www.bengalurutechsummit.com/"&gt;see this page&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/future-tech-and-future-law'&gt;https://cis-india.org/internet-governance/news/future-tech-and-future-law&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-01-03T01:17:29Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond">
    <title>Fostering Strategic Convergence in US-India Tech Relations: 5G and Beyond</title>
    <link>https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond</link>
    <description>
        &lt;b&gt;The 2019 G-20 summit underscores the importance of fostering strategic convergence in U.S.-India tech relations.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Justin Sherman and Arindrajit Basu was &lt;a class="external-link" href="https://thediplomat.com/2019/07/fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond/"&gt;published in the Diplomat&lt;/a&gt; on July 3, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;As world leaders gathered for the G-20 summit in Osaka, Japan this past weekend, a multitude of issues from climate to trade to technology came to the fore. Much of the focus was on U.S.-China interactions at the summit, as the two nations are locked in both a trade war and a broader technological and geopolitical competition. Despite the present focus on the U.S. and China, however, it is crucial not to overlook another bilateral relationship of ever-growing importance in the process: the tech relationship between the United States and India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Certainly, the two countries have disagreements on some technology issues. But this is a geopolitical relationship that is both strategically important for each country and a vital opportunity for the two largest democracies in the world to collectively combat Chinese-style digital authoritarianism.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Huawei and 5G&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;First, with respect to national security and 5G roll-outs, the U.S. and India are not on the same page. The United States, for several months now, has been on a &lt;a href="https://www.lawfareblog.com/confused-us-messaging-campaign-huawei" target="_blank"&gt;diplomatic messaging tour&lt;/a&gt; of the world to try to convince — with great resistance (some would argue failure) — allies, partners, and potential partners alike to ban Chinese firm Huawei from supplying components of 5G networks. Many officials across Europe, the Middle East, South America, and elsewhere have been reluctant to ban Huawei per the U.S. recommendation, and India is no exception. Indeed, National Security Advisory Board Chairman P.S. Raghavan &lt;a href="https://www.thehindu.com/news/national/on-5g-and-data-india-stands-with-developing-world-not-us-japan-at-g20/article28207169.ece/amp/?__twitter_impression=true" target="_blank"&gt;told&lt;/a&gt; &lt;em&gt;The Hindu&lt;/em&gt; that “5G is becoming a fault line in the technology cold war between world powers” and that India must avoid getting caught in these fault lines.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In large part, U.S. diplomatic messaging here has fallen short due to &lt;a href="https://www.lawfareblog.com/confused-us-messaging-campaign-huawei" target="_blank"&gt;heavy conflations&lt;/a&gt; of national security- and trade-related risks; and Trump only contributed further to this conflation with his latest &lt;a href="https://twitter.com/JenniferJJacobs/status/1145072073800183808" target="_blank"&gt;reference&lt;/a&gt; to Huawei, during the G-20, as a potential trade war bargaining chip. The sheer population of India, however, combined with its fast-growing technology sectors and &lt;a href="http://www.cmai.asia/digitalindia/" target="_blank"&gt;desire to digitize&lt;/a&gt;, makes the country an important market player when it comes to the 5G revolution. U.S.-India engagement on 5G issues must be managed effectively through robust articulation of each country’s national interests, underscored by a clean segregation of trade and security questions in the discussion. This partnership has the potential to wield great influence in the global market, including in ways that could prioritize or deprioritize certain 5G equipment suppliers (like Huawei).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Data Sovereignty and Data Privacy&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Data sovereignty is another hot area in which the U.S.-India tech relationship demands careful negotiation. Over the past year, the Indian government has &lt;a href="https://twitter.com/cis_india/status/1143096429298085889" target="_blank"&gt;introduced a range of policy instruments&lt;/a&gt; which dictate that certain kinds of data must be stored in servers located physically within India — termed “&lt;a href="https://cis-india.org/internet-governance/resources/the-localisation-gambit.pdf" target="_blank"&gt;data localization&lt;/a&gt;.” While there are &lt;a href="https://cis-india.org/internet-governance/resources/the-localisation-gambit.pdf" target="_blank"&gt;a number of policy objectives&lt;/a&gt; this gambit ostensibly seeks to serve, the two which stand out are (1) the presently cumbersome process for Indian law enforcement agencies to access data stored in the U.S. during criminal investigations, and (2) extractive economic models used by U.S. companies operating in India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A range of conflicting developments emerging from the G-20 summit underscore this fact. India, along with the BRICS grouping, &lt;a href="https://mea.gov.in/bilateral-documents.htm?dtl/31506/Joint+Statement+on+BRICS+Leaders+Informal+Meeting+on+the+margins+of+G20+Summit" target="_blank"&gt;focused&lt;/a&gt; on the development dimensions of data governance and re-emphasized the need for &lt;a href="https://www.youtube.com/watch?v=0a8YsZQ0F6k&amp;amp;feature=youtu.be" target="_blank"&gt;data sovereignty&lt;/a&gt; — broadly understood as the sovereign right of nations to govern data in their national interest for the welfare of their citizens. President Trump &lt;a href="https://www.whitehouse.gov/briefings-statements/remarks-president-trump-g20-leaders-special-event-digital-economy-osaka-japan/" target="_blank"&gt;focused&lt;/a&gt; on the need for cross-border data flows and, in direct opposition to some proposals that have emerged from India, explicitly opposed data localization. While India did not sign the &lt;a href="https://www.international.gc.ca/world-monde/international_relations-relations_internationales/g20/2019-06-29-g20_declaration-declaration_g20.aspx?lang=eng" target="_blank"&gt;Osaka Declaration on the Digital Economy&lt;/a&gt; that promoted cross-border data flows, the importance of cross-border data flows in spurring the global economy did find its way into the &lt;a href="https://g20.org/pdf/documents/en/FINAL_G20_Osaka_Leaders_Declaration.pdf" target="_blank"&gt;Final G-20 Leaders Declaration&lt;/a&gt; — which, of course, both countries signed.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Geopolitically, the importance of India’s data governance stance cannot be overstated as it could pave the way for the approach adopted by other emerging economies — most notably the BRICS countries. Likewise, the U.S. has important thinking to do around such questions as what shape a national data privacy law could take. Even though the two countries’ views on data may be quite different from one another, the seats that India and the U.S. have at the table for &lt;a href="https://www.theatlantic.com/international/archive/2019/06/g20-data/592606/" target="_blank"&gt;global data governance discussions&lt;/a&gt; — alongside others like Japan, China, and the European Union — underscore the value of meaningful interactions and mutual trust and respect on this issue.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Norms for a Democratic Digital Future&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Finally, as the &lt;a href="https://www.un.org/disarmament/ict-security/" target="_blank"&gt;United Nations Group of Governmental Experts&lt;/a&gt; and the &lt;a href="https://www.un.org/disarmament/open-ended-working-group/" target="_blank"&gt;Open-Ended Working Group&lt;/a&gt; meet to resurrect the norm-formulation process for fostering responsible state behavior in cyberspace, India has some homework to do.  Even though it has been a member of five out of the six Group of Governmental Experts set up thus far, India is yet to come out with a public statement delineating its views on the applicability of International Law applies in cyberspace. Further, India has also failed to articulate a cohesive digital strategy — instead relying on a patchwork of hastily rolled out and often ill-conceived regulatory policies, some of which commentators in the West &lt;a href="https://www.nytimes.com/2019/02/14/technology/india-internet-censorship.html" target="_blank"&gt;have hastily labeled&lt;/a&gt; as digital authoritarianism. The U.S., for its part, amidst a &lt;a href="https://www.newamerica.org/cybersecurity-initiative/c2b/c2b-log/four-opportunities-for-states-new-cyber-bureau/" target="_blank"&gt;cutback&lt;/a&gt; to diplomatic cyber engagement (as part of cutbacks to diplomacy writ large), could also up its support of international engagement on these issues. Its recent repeal of net neutrality protections could also be argued as a step back from long-time international &lt;a href="https://d1y8sb8igg2f8e.cloudfront.net/documents/The_Idealized_Internet_vs._Internet_Realities_Version_1.0_2018-07-25_203930.pdf" target="_blank"&gt;norm promotion&lt;/a&gt; around internet openness.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Through a combination of domestic policy gambits and foreign policy maneuvers, both states need to draw lines in the sand that safeguard human rights, international law, and democracy online, while arriving at some balance with each other’s national interests.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A primary example lies with artificial intelligence (AI). AI has found increasing use in digital authoritarianism, as dictators use automated, intelligent systems to boost their surveillance capabilities. The Chinese government has arguably been at the &lt;a href="https://freedomhouse.org/report/freedom-net/freedom-net-2018" target="_blank"&gt;forefront&lt;/a&gt; of this enhanced level of authoritarian rule for the digital age.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In addition to &lt;a href="https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinese-government-outlines-ai-ambitions-through-2020/" target="_blank"&gt;focusing&lt;/a&gt; on AI applications for everything from natural language processing to self-driving cars — through investments, strategies, policy documents, and so on — Beijing has also been &lt;a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html" target="_blank"&gt;deploying&lt;/a&gt; AI in the service of large-scale human-rights abuses. Chinese strategy papers on AI, while similarly emphasizing many commercial or benign applications and raising attention to such issues as algorithmic fairness, concurrently have &lt;a href="https://www.newamerica.org/cybersecurity-initiative/digichina/blog/online-symposium-chinese-thinking-ai-security-comparative-context/" target="_blank"&gt;discussed&lt;/a&gt; using AI for “social governance,” censorship, and surveillance. To combat the rising intersection of AI and digital authoritarianism, the U.S. and India could wield enormous leverage — as the two largest democracies in the world — in governing these technologies in a democratic fashion that counters &lt;a href="https://www.newamerica.org/cybersecurity-initiative/reports/essay-reframing-the-us-china-ai-arms-race/" target="_blank"&gt;dangerous arms-race narratives&lt;/a&gt; and uses of AI for surveillance and repression.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The same goes for paying attention to technology exports and diffusion to human-rights abusers. For instance, companies incorporated in China, among those incorporated elsewhere, have been &lt;a href="https://www.cfr.org/blog/authoritarians-are-exporting-surveillance-tech-and-it-their-vision-internet" target="_blank"&gt;heavily involved&lt;/a&gt; in exports of dual-use surveillance technologies to other countries, including those with questionable or outright poor human-rights records. Although companies incorporated in democracies may engage in such practices as well, most democracies take steps to curtail these practices as much as possible, such as through the multilateral Wassenaar Arrangement — which lays out export controls around conventional weapons and dual-use goods and technologies. The U.S. has long been a party to this agreement, and India &lt;a href="https://economictimes.indiatimes.com/news/defence/wassenaar-arrangement-decides-to-make-india-its-member/articleshow/61975192.cms?from=mdr" target="_blank"&gt;officially joined&lt;/a&gt; in 2018. Arguments persist about the extent to which Beijing is involved in these dual-use surveillance technology exports, but these exports may only increase going forward as companies &lt;a href="https://www.newamerica.org/weekly/edition-254/long-view-digital-authoritarianism/" target="_blank"&gt;increasingly&lt;/a&gt; sell not just internet surveillance tools but also dual-use AI tools. In this way, too, India and the U.S. could play an important role in countering the spread of such capabilities to human-rights abusers and standing against the spread of digital authoritarianism in the process.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The relationship here is, therefore, one that requires careful navigation for its significant geopolitical, economic, and ideological consequences. For the future of the technological relationship between the world’s largest democracies—and the extent to which they respect each other’s strategic autonomy while converging on issues of mutual interest—could determine the future of global digital governance.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond'&gt;https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Justin Sherman and Arindrajit Basu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Telecom</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-05T02:19:09Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules">
    <title>Finding Needles in Haystacks - Discussing the Role of Automated Filtering in the New Indian Intermediary Liability Rules </title>
    <link>https://cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules</link>
    <description>
&lt;b&gt;On 25 February 2021, the Government of India notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The new Rules broaden the scope of which entities are considered intermediaries to include curated-content platforms (such as Netflix) as well as digital news publications. This blog post analyzes the rule on automated filtering, in the context of the growing use of automated content moderation.&lt;/b&gt;
        
&lt;p class="p1"&gt;&lt;span class="s1"&gt;This article first &lt;a class="external-link" href="https://www.law.kuleuven.be/citip/blog/finding-needles-in-haystacks/"&gt;appeared&lt;/a&gt; on the KU Leuven's Centre for IT and IP (CITIP) blog. Cross-posted with permission.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;----&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;Mathew Sag in his 2018 &lt;a href="https://scholarship.law.nd.edu/cgi/viewcontent.cgi?article=4761&amp;amp;context=ndlr"&gt;&lt;span class="s2"&gt;paper&lt;/span&gt;&lt;/a&gt; on internet safe harbours discussed how the internet resulted in a shift from the traditional gatekeepers of knowledge (publishing houses) that used to decide what knowledge could be showcased, to a system where everybody who has access to the internet can showcase their work. A “&lt;em&gt;content creator&lt;/em&gt;” today ranges from legacy media companies to any person who has access to a smartphone and an internet connection. In a similar trajectory, with the increase in websites and mobile apps and the functions that they serve, the scope of what is an internet intermediary has widened all over the world.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;&lt;strong&gt;Who is an Intermediary?&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;In India the definition of “&lt;em&gt;intermediary&lt;/em&gt;” is found under Section 2(w) of the &lt;a href="https://www.meity.gov.in/writereaddata/files/itbill2000.pdf"&gt;&lt;span class="s2"&gt;Information Technology (IT) Act 2000&lt;/span&gt;&lt;/a&gt;, which defines an Intermediary as &lt;em&gt;“with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecoms service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-marketplaces and cyber cafes”.&lt;/em&gt; The all-encompassing nature of the definition has allowed the dynamic nature of intermediaries to be included under the definition of the Act, and the Guidelines that have been&amp;nbsp; published periodically (&lt;a href="https://www.meity.gov.in/writereaddata/files/GSR314E_10511%25281%2529_0.pdf"&gt;&lt;span class="s2"&gt;2011&lt;/span&gt;&lt;/a&gt;, &lt;a href="https://www.meity.gov.in/writereaddata/files/Draft_Intermediary_Amendment_24122018.pdf"&gt;&lt;span class="s2"&gt;2018&lt;/span&gt;&lt;/a&gt; and &lt;a href="https://www.meity.gov.in/writereaddata/files/Intermediary_Guidelines_and_Digital_Media_Ethics_Code_Rules-2021.pdf"&gt;&lt;span class="s2"&gt;2021&lt;/span&gt;&lt;/a&gt;). With more websites and social media companies, and even more content creators online today, there is a need to look at ways in which intermediaries can remove illegal content or content that goes against their community guidelines.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;Along with the definition of an intermediary, the IT Act, under Section 79, provides exemptions which grant safe harbours to internet intermediaries, from liability from third-party content, and further empowers the central government to make Rules that act as guidelines for the intermediaries to follow. The Intermediary Liability Rules hence seek to regulate content and lay down safe harbour provisions for intermediaries and internet service providers. To keep up with the changing nature of the internet and internet intermediaries, India relies on the Intermediary Liability Rules to regulate and provide a conducive environment for intermediaries. In view of this provision India has as of now published three versions of the Intermediary Liability (IL) Rules. The first Rules came out in&lt;a href="https://www.meity.gov.in/writereaddata/files/GSR314E_10511%25281%2529_0.pdf"&gt;&lt;span class="s2"&gt; 2011&lt;/span&gt;&lt;/a&gt;, followed by the introduction of draft amendments to the law in&lt;a href="https://www.meity.gov.in/writereaddata/files/Draft_Intermediary_Amendment_24122018.pdf"&gt;&lt;span class="s2"&gt; 2018&lt;/span&gt;&lt;/a&gt; and finally the latest &lt;a href="https://www.meity.gov.in/writereaddata/files/Intermediary_Guidelines_and_Digital_Media_Ethics_Code_Rules-2021.pdf"&gt;&lt;span class="s2"&gt;2021 &lt;/span&gt;&lt;/a&gt;version, which would supersede the earlier Rules of 2011.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;&lt;strong&gt;The Growing Use of Automated Content Moderation&amp;nbsp;&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;With each version of the Rules there seemed to be changes that ensured that they were abreast with the changing face of the internet and the changing nature of both content and content creator. Hence the 2018 version of the Rules showcase a push towards automated content filtering. The text of Rule 3(9) reads as follows: “&lt;em&gt;The Intermediary shall deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content&lt;/em&gt;”.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;Under Rule 3(9), intermediaries were required to deploy automated tools or appropriate mechanisms to proactively identify, remove or disable public access to unlawful content. However, neither the 2018 IL Rules, nor the parent Act (the IT Act) specified which content can be deemed unlawful. The 2018 Rules also failed to establish the specific responsibilities of the intermediaries, instead relying on vague terms like “&lt;em&gt;appropriate mechanisms&lt;/em&gt;” and with “&lt;em&gt;appropriate controls&lt;/em&gt;”. Hence it can be seen that though the Rules mandated the use of automated tools, neither them nor the IT Act provided clear guidelines on what could be removed.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;The lack of clear guidelines and list of content that can be removed had&amp;nbsp; left the decision up to the intermediaries to decide which content, if not actively removed, could cost them their immunity. It has been previously documented that the lack of clear guidelines in the 2011 version of the &lt;a href="https://cis-india.org/internet-governance/chilling-effects-on-free-expression-on-internet"&gt;&lt;span class="s2"&gt;Rules&lt;/span&gt;&lt;/a&gt;, led to intermediaries over complying with take down notices, often taking down content that did not warrant it. The existing tendency to over-comply, combined&amp;nbsp; with automated filtering could have resulted in a number of &lt;a href="https://cis-india.org/internet-governance/how-india-censors-the-web-websci#:~:text=One%2520of%2520the%2520primary%2520ways,certain%2520websites%2520for%2520its%2520users."&gt;&lt;span class="s2"&gt;unwarranted take downs&lt;/span&gt;&lt;/a&gt;.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;While the 2018 Rules mandated the deployment of automated tools, the year 2020, (possibly due to the pandemic induced work from home safety protocols and global lockdowns) saw major social media companies announcing the move towards a fully automated system of content&lt;a href="https://www.medianama.com/2020/03/223-facebook-content-moderation-coronavirus-medianamas-take/"&gt;&lt;span class="s2"&gt; moderation&lt;/span&gt;&lt;/a&gt;. Though the use of automated content removal seems like the right step considering the &lt;a href="https://www.businessinsider.in/tech/news/facebook-content-moderator-who-quit-reportedly-wrote-a-blistering-letter-citing-stress-induced-insomnia-among-other-trauma/articleshow/82075608.cms"&gt;&lt;span class="s2"&gt;trauma &lt;/span&gt;&lt;/a&gt;that human moderators had to go through,&amp;nbsp; the algorithms that are being used now to remove content are relying on the parameters, practices and data from earlier removals made by the human moderators. More recently, in India with the emergence of the second wave of the COVID19&amp;nbsp; wave, the Ministry of Electronics and Information Technology has &lt;a href="https://www.thehindu.com/news/national/govt-asks-social-media-platforms-to-remove-100-covid-19-related-posts/article34406733.ece"&gt;&lt;span class="s2"&gt;asked &lt;/span&gt;&lt;/a&gt;social media platforms to remove “&lt;em&gt;unrelated, old and out of the context images or visuals, communally sensitive posts and misinformation about COVID19 protocols&lt;/em&gt;”.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;&lt;strong&gt;The New IL Rules - A ray of hope?&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p3"&gt;&lt;span class="s3"&gt;The 2021 version of the IL Rules provides a more nuanced approach to the use of automated content filtering compared to the earlier version. Rule 4(4) now requires only “&lt;/span&gt;&lt;span class="s1"&gt;significant social media intermediaries” to use automated tools to identity and take down content pertaining to “child sexual abuse material”, or “depicting rape”, or any information which is identical to a content that has already been removed through a take-down notice. The Rules define a social media intermediary as “&lt;em&gt;intermediary which primarily or solely enables interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services”&lt;/em&gt; .The Rules also go a step further to create another type of intermediary, the&amp;nbsp; significant social media intermediary. A significant social media intermediary is defined as one “&lt;em&gt;having a number of registered users in India above such threshold as notified by the Central Government&lt;/em&gt;''. Hence what can be considered as a social media intermediary that qualifies as a significant one could change at any time.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s4"&gt;Along with adding a new threshold (qualifying as a significant social media intermediary) the Rules, in contrast to the 2018 version, also emphasises the need of such removal to be &lt;/span&gt;&lt;span class="s1"&gt;proportionate to the interests of freedom of speech and expression and privacy of users. The Rules also call for “&lt;em&gt;appropriate human oversight&lt;/em&gt;” as well as a periodic review of the tools used for content moderation. The Rules by using the term “&lt;em&gt;shall endeavor&lt;/em&gt;” aids in reducing the pressure on the intermediary to set up these mechanisms. This also means&amp;nbsp; that the requirement is now on a best effort basis, as opposed to the word “&lt;em&gt;shall&lt;/em&gt;” in the 2018 version of the Rules, which made it mandatory.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;Although the Rules now narrow down the instances where automated content removal can take place, the concerns around over compliance and censorship still loom. One of the reasons for concern is that the Rules still fail to require the intermediaries to set up a mechanism for redress or for appeals to such removal. Additionally, the provision that states that automated systems could remove content that have been previously taken down, creates a cause for worry as the propensity of the intermediaries to over comply and take down content has already been documented. This then brings us back to the previous issue where the social media company’s automated systems were removing legitimate news sources. Though the 2021 Rules tries to clarify certain provisions related to automated filtering, like the addition of the safeguards, the Rules also suffer from vague provisions that could cause issues related to compliance. The use of terms such as “&lt;em&gt;proportionate&lt;/em&gt;”, “&lt;em&gt;having regard to free speech&lt;/em&gt;” etc. fail to lay down definitive directions for the intermediaries (in this case SSMI) to comply with. Additionally, as earlier stated, being qualified&amp;nbsp; as a SSMI can change at any time, either based on the change in the number of users, or the change in the threshold of users, mandated by the government. The absence of human intervention during removal, vague guidelines and fear of losing out on safe harbour provisions, add to the already increasing trend of censorship in social media. 
With the use of automated means and the fast, and almost immediate removal of content would mean that certain content creators might not even be able to post their content &lt;a href="https://www.eff.org/wp/unfiltered-how-youtubes-content-id-discourages-fair-use-and-dictates-what-we-see-online"&gt;&lt;span class="s2"&gt;online.&lt;/span&gt;&lt;span class="s5"&gt; With the use of proactive filtering through automated means the content can be removed almost immediately.&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&lt;span class="s6"&gt; &lt;/span&gt;&lt;span class="s1"&gt;With India’s current trend of new internet users, some of these creators would also be &lt;a href="https://timesofindia.indiatimes.com/business/india-business/for-the-first-time-india-has-more-rural-net-users-than-urban/articleshow/75566025.cms"&gt;&lt;span class="s2"&gt;first time users&lt;/span&gt;&lt;/a&gt; of the internet.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p3"&gt;&lt;span class="s1"&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p3"&gt;&lt;span class="s1"&gt;The need for automated removal of content is understandable, based not only on&amp;nbsp; the sheer volume of content but also&amp;nbsp; the nightmare stories of the toll it takes on human content moderators, who otherwise have to go through hours of disturbing content. Though the Indian Intermediary Liability Guidelines have improved from the earlier versions in terms of moving away from mandating proactive filtering, there still needs to be consideration of how these technologies are used, and the laws should understand the shift in the definition of who a content creator is. There needs to be ways of recourse to unfair removal of content and a means to get an explanation of why the content was removed, via notices to the user. In the case of India, the notices should be in Indian languages as well, so that the people are able to understand them.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p3"&gt;&lt;span class="s1"&gt;In the absence of further clear guidelines, the perils of over-censorship by the intermediaries in order to stay out of trouble could lead to further stifling of not just freedom of speech but also access to information. In addition, the fear of content being taken down or even potential prosecution could mean that people resort to self-censorship, preventing them from exercising their fundamental rights to freedom of speech and expression, as guaranteed by the Indian Constitution. We hope that the next version of the Rules take a more nuanced approach to automated content removal and ensure adequate and specific safeguards to ensure a conducive environment for both intermediaries and content creators.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules'&gt;https://cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas and Torsha Sarkar</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Intermediary Liability</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2021-08-03T07:28:53Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future">
    <title>Farming the Future: Deployment of Artificial Intelligence in the agricultural sector in India</title>
    <link>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future</link>
    <description>
&lt;b&gt;This case study was published as a chapter in the joint UNESCAP-Google publication titled Artificial Intelligence in Public Service Delivery. The chapter in its final form would not have been possible without the efforts and very useful interventions of our colleagues at Digital Asia Hub, Google, and UNESCAP.&lt;/b&gt;
        &lt;p&gt;&lt;img src="https://cis-india.org/home-images/Findings.jpg" alt="Findings" class="image-inline" title="Findings" /&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Although agriculture is a critical sector for India’s economic development, it continues to face many challenges including a lack of &lt;span&gt;modernization of agricultural methods, fragmented landholdings, erratic rainfalls, overuse of groundwater and a lack of access to &lt;/span&gt;&lt;span&gt;information on weather, markets and pricing. As state governments create policies and frameworks to mitigate these challenges, the &lt;/span&gt;&lt;span&gt;role of technology has often come up as a potential driver of positive change.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Farmers in the southern Indian states of Karnataka and Andhra Pradesh are facing significant challenges. For hundreds of years,these farmers have relied on traditional agricultural methods to make sowing and harvesting decisions, but now volatile weather patterns and shifting monsoon seasons are making such ancient wisdom obsolete. Farmers are unable to predict weather patterns or crop yields accurately, making it difficult for them to make informed financial and operational decisions associated with planting and harvesting. Erratic weather patterns particularly affect those farmers who reside in remote areas, cut off from meaningful accessto infrastructure and information. In addition to a lack of vital weather information, farmers may lack information about market conditions and may then sell their crops to intermediaries at below-market prices.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Against this backdrop, the state governments and local partners in southern India teamed up with Microsoft to develop predictive AI services to help smallholder farmers to improve their crop yields and give them greater price control. Since 2016 three applications have been developed and applied for use in these communities, two of which are discussed in this case study: the AI-sowing app and the price forecasting model.&lt;/span&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="external-link" href="https://www.unescap.org/sites/default/files/publications/AI%20Report.pdf"&gt;Click to read&lt;/a&gt; the report here.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future'&gt;https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Elonnai Hickok, Arindrajit Basu, Siddharth Sonkar and Pranav M B</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-16T13:41:02Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines">
    <title>Ethics and Human Rights Guidelines for Big Data for Development Research</title>
    <link>https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines</link>
    <description>
&lt;b&gt;This is a four-part review of guideline documents for ethics and human rights in big data for development research. This research was produced as part of the Big Data for Development network supported by the International Development Research Centre, Canada.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4&gt;Part #1 - Review of Principles of Ethics in Biomedical Science: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/biomedicalscience" class="internal-link" title="CIS_BD4D_Guideline01_MS+AS_BiomedicalScience PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;h4&gt;Part #2 - Review of Principles of Ethics in Computer Science: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/computerscience" class="internal-link" title="CIS_BD4D_Guideline02_RS+AS_ComputerScience PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;h4&gt;Part #3 - Summary of Review of Codes of Ethics for Big Data and AI: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/AIEthicsReview" class="internal-link" title="CIS_BD4D_Guideline03_AS+PT_BigDataAIEthicsReview_SummaryNotes PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;h4&gt;Part #4 - Extended Review of Codes of Ethics for Big Data and AI: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/ExtendedNotes" class="internal-link" title="CIS_BD4D_Guideline04_PT+PB_BigDataAIEthicsReview_ExtendedNotes PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;hr /&gt;
&lt;p&gt;The rapid expansion in the volume, velocity, and variety of data available, together with the development of innovative forms of statistical analytics, is generally referred to as “big data”, though there is no single agreed-upon definition of the term. Big data promises to provide new insights and solutions across a wide range of sectors. Despite enormous optimism about the scope and variety of big data’s potential applications, many remain concerned about its widespread adoption, with some scholars suggesting it could generate as many harms as benefits. The predecessor disciplines of data science, such as computer science, applied mathematics, and statistics, have traditionally stayed outside the scope of ethical frameworks, on the assumption that they do not involve humans as subjects of research. While critical study of big data is still in its infancy, there is a growing belief that there are significant discontinuities between the rapid growth of big data and the ethical frameworks that exist to govern its use. In this set of documents, we examine these frameworks in detail.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines'&gt;https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amber Sinha, Manjri Singh, Rajashri Seal, Pranav Bhaskar Tiwari, Pranav M Bidare</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>BD4D</dc:subject>
    
    
        <dc:subject>RAW Research</dc:subject>
    
    
        <dc:subject>Big Data for Development</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-05-20T07:56:48Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age">
    <title>Ethical Data Design Practices in the AI (Artificial Intelligence) Age</title>
    <link>https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age</link>
    <description>
        &lt;b&gt;Shweta Mohandas was a panelist at a discussion on Ethical Data Design Practices in the AI (Artificial Intelligence) Age, organised by Startup Grind, Bangalore on July 28, 2018 at NUMA Bangalore.&lt;/b&gt;
        &lt;h2&gt;Agenda&lt;/h2&gt;
&lt;p&gt;&lt;b&gt;Ethical Data Design Practices in the AI Age&lt;/b&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;The panel discussion is intended to explore the challenges we face when designing the user experiences of the complex behavioral agents that increasingly run our lives.&lt;/p&gt;
&lt;p dir="ltr"&gt;Discussion centred around how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Understand current thinking by the AI community on ethics and morality in computing and the challenges it presents. &lt;/li&gt;
&lt;li&gt;Explore examples of the ethical choices that products make now and will make in the near future.&lt;/li&gt;
&lt;li&gt;Learn how designers might approach designing experiences that face moral dilemmas.&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age'&gt;https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-08-01T23:14:21Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward">
    <title>Emerging AI technology in health care in India, health equity and justice: Critical reflections and charting out way forward</title>
    <link>https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward</link>
    <description>
        &lt;b&gt;On July 13, 2019, Radhika Radhakrishnan, participated in a roundtable discussion on "Emerging AI technology in health care in India, health equity and justice: Critical reflections and charting out way forward." The event was organized by HEaL (Health, Ethics, and Law Institute of Training, Research and Advocacy) of FMES (Forum for Medical Ethics Society) in collaboration with CPS (Centre for Policy Studies), Indian Institute of Technology-Bombay.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Radhika chaired a session on the ethics of AI in healthcare in India,       and my main submissions included: the medicalization of and       experimentation on women's bodies under a medical-industrial       complex for the design of AI-based healthcare models, and FAT       (Fairness, Accountability, Transparency) concerns with AI. She was also invited to draft some of this content into a       paper submission to the &lt;a href="https://ijme.in/"&gt;Indian Journal of Medical Ethics&lt;/a&gt; which is a peer-reviewed and indexed academic journal run by FMES.&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward'&gt;https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-21T15:47:27Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance">
    <title>Emergence of Chinese Technology: Rising stakes for innovation, competition and governance</title>
    <link>https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance</link>
    <description>
        &lt;b&gt;Omidyar Network in partnership with the Esya Centre organized a private discussion on the theme “Emergence of Chinese technology - rising stakes for innovation, competition and governance” on Monday, 12 August 2019 in New Delhi. Arindrajit Basu attended the event. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;China Ascendant: Soft Power report by ON focuses on three prongs of power-digital power, fore power and sharp power. Standards have been a major avenue for proliferation of Chinese competition.This is combined with knowledge transfer as 2.8 million Chinese students in the US have largely returned to tech companies in China. Core strength is still not in basic research so by 2020, aiming for 15 per cent of PhD.s to be in basic research. China uses nudges in shaping global governance outcomes by targeting the right stakeholders as opposed to altering the ground rules entirely,  Universities in China have focused on how cultural connections can be linked upto negotiating prowess at multilateral fora.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;China takes a whole-of-government approach to technology innovation, and continues to be consumer focused.&lt;/li&gt;
&lt;li&gt;China does not look at India as an R&amp;amp;D partner, but rather as a market. Stability and predictability have been issues. None of India's tech policies were drafted with China in mind.&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance'&gt;https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-08-19T14:03:21Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/oxford-human-rights-hub-arindrajit-basu-october-23-2018-discrimination-in-the-age-of-artificial-intelligence">
    <title>Discrimination in the Age of Artificial Intelligence </title>
    <link>https://cis-india.org/internet-governance/blog/oxford-human-rights-hub-arindrajit-basu-october-23-2018-discrimination-in-the-age-of-artificial-intelligence</link>
    <description>
        &lt;b&gt;The dawn of Artificial Intelligence (AI) has been celebrated by both government and industry across the globe. AI offers the potential to augment many existing bureaucratic processes and improve human capacity, if implemented in accordance with principles of the rule of law and international human rights norms. Unfortunately, AI-powered solutions have often been implemented in ways that have resulted  in the automation, rather than mitigation, of existing societal inequalities.&lt;/b&gt;
        &lt;p&gt;This was originally published by &lt;a class="external-link" href="http://ohrh.law.ox.ac.uk/discrimination-in-the-age-of-artificial-intelligence/"&gt;Oxford Human Rights Hub&lt;/a&gt; on October 23, 2018&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/ArtificialIntelligence.jpg/@@images/3b551d39-e419-442c-8c9d-7916a2d39378.jpeg" alt="Artificial Intelligence" class="image-inline" title="Artificial Intelligence" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Image Credit: Sarla Catt via Flickr, used under a Creative Commons license available at https://creativecommons.org/licenses/by/2.0/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the international human rights law context, AI solutions pose a  threat to norms which prohibit discrimination. International Human  Rights Law &lt;a href="https://books.google.co.in/books/about/International_Human_Rights_Law.html?id=YkcXAgAAQBAJ&amp;amp;redir_esc=y"&gt;recognizes that discrimination&lt;/a&gt; may take place in two possible ways, directly or indirectly. Direct  discrimination occurs when an individual is treated less favourably than  someone else similarly situated on one of the grounds prohibited in  international law, which, as per the &lt;a href="http://www.equalrightstrust.org/ertdocumentbank/Human%20Rights%20Committee,%20General%20Comment%2018.pdf"&gt;Human Rights Committee,&lt;/a&gt; includes race, colour, sex, language, religion, political or other  opinion, national or social origin, property, birth or other status.  Indirect discrimination occurs when a policy, rule or requirement is  ‘outwardly neutral’ but has a disproportionate impact on certain groups  that are meant to be protected by one of the prohibited grounds of  discrimination. A clear example of indirect discrimination recognized by  the European Court of Human Rights arose in the case of &lt;a href="http://www.errc.org/cikk.php?cikk=3559"&gt;&lt;i&gt;DH&amp;amp;Ors v Czech Republic&lt;/i&gt;&lt;/a&gt;.  The ECtHR struck down an apparently neutral set of statutory rules,  which implemented a set of tests designed to evaluate the intellectual  capability of children but which resulted in an excessively high  proportion of minority Roma children scoring poorly and consequently  being sent to special schools, possibly because the tests were blind to  cultural and linguistic differences. This case acts as a useful analogy  for the potential disparate impacts of AI and should serve as useful  precedent for future litigation against AI-driven solutions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Indirect discrimination by AI may occur &lt;a href="https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf"&gt;at two stages&lt;/a&gt;. First is the &lt;b&gt;usage of incomplete or inaccurate training data&lt;/b&gt; that results in the algorithm processing data that may not accurately reflect reality. Cathy O’Neil explains this &lt;a href="https://weaponsofmathdestructionbook.com/"&gt;using a simple example&lt;/a&gt;.  There are two types of crimes-those that are ‘reported’ and others that  are only ‘found’ if a policeman is patrolling the area. The first  category includes serious crimes such as murder or rape while the second  includes petty crimes such as vandalism or possession of illicit drugs  in small quantities. Increased police surveillance in areas in US cities  where Black or Hispanic people reside lead to more crimes being ‘found’  there. Thus, data is likely to suggest that these communities commit a  higher proportion of crimes than they actually do – indirect  discrimination that has been empirically been shown through research  published by &lt;a href="https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say"&gt;Pro Publica&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Discrimination may also occur at the stage of &lt;b&gt;data processing&lt;/b&gt;, which is done through a metaphorical &lt;a href="https://www.sentient.ai/blog/understanding-black-box-artificial-intelligence/"&gt;‘black-box’&lt;/a&gt; that accepts inputs and generates outputs without revealing to the  human developer how the data was processed. This conundrum is compounded  by the fact that the algorithms are often utilised to solve an  amorphous problem-which attempts to break down a complex question into a  simple answer. An example is the development of ‘risk profiles’ of  individuals for the  &lt;a href="http://fortune.com/longform/ai-bias-problem/"&gt;determination of insurance premiums.&lt;/a&gt; Data might show that an accident is more likely to take place in inner  cities due  to more densely packed populations in these areas. Racial  and ethnic minorities tend to reside more in these areas, which means  that algorithms could learn that minorities are more likely to get into  accidents, thereby generating an outcome (‘risk profile’) that  indirectly discriminates on grounds of race or ethnicity.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It would be wrong to ignore discrimination, both direct and indirect,  that occurs as a result of human prejudice. The key difference between  that and discrimination by AI lies in the ability of other individuals  to compel the decision-maker to explain the factors that lead to the  outcome in question and testing its validity against principles of human  rights. The increasing amounts of discretion and, consequently, power  being delegated to autonomous systems mean that principles of  accountability which audit and check indirect discrimination need to be  built into the design of these systems. In the absence of these  principles, we risk surrendering core tenets of human rights law to the  whims of an algorithmically crafted reality.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/oxford-human-rights-hub-arindrajit-basu-october-23-2018-discrimination-in-the-age-of-artificial-intelligence'&gt;https://cis-india.org/internet-governance/blog/oxford-human-rights-hub-arindrajit-basu-october-23-2018-discrimination-in-the-age-of-artificial-intelligence&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Arindrajit Basu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-10-26T14:47:57Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake">
    <title>Deepfakes: Algorithms at war, trust at stake</title>
    <link>https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake</link>
    <description>
        &lt;b&gt;A case in point is the video that surfaced of an Indian journalist not so long ago.&lt;/b&gt;
        &lt;p&gt;The article by Rajmohan Sudhakar was published in &lt;a class="external-link" href="https://www.deccanherald.com/metrolife/metrolife-on-the-move/deepfakes-algorithms-at-war-trust-at-stake-747042.html"&gt;Deccan Herald&lt;/a&gt; on July 14, 2019. Elonnai Hickok was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Now machines are learning to manipulate imagery. That is a real worry. Deepfakes for instance. They are AI-manipulated videos achieved by machine learning. Products of the humongous volume of images and videos now available online.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The danger is, this imagery could be yours or mine. Imagine artificial intelligence of neural networks creating convincing identities of our real counterparts, and starts posting videos. Absurd.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Society has grappled with spurious and specious content in media over time. Media has been modified for various reasons, usually by those with access to significant resources and influence in the past,” says Elonnai Hickok, COO of the Bengaluru-based Centre for Internet and Society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;From an AI and machine learning perspective, deepfakes could be understood by what is known as GAN -- generative adversarial networks, essentially two algorithms at war. One is a generator, the other a discriminator. They compete with each other based on set inputs, in time bettering the version they together help create. These are behind what are now known as deepfakes of popular figures floating around online. Barack Obama is seen saying in a purported deepfake, “stay woke bitches”, which of course he did not say.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another deepfake has Mark Zuckerberg boasting: “I have total control of billions of people’s stolen data, all their secrets, their lives, their futures.” “Deepfakes are media modified by current technology and techniques. Easy availability of technology and media allows anyone to create, tailor or manipulate media for their own ends. Deepfakes present an opportunity for introspection and research into the contours of freedom of expression as well as societal frameworks for dealing with fake content,” explains Hickok.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One of the horrid instances of a deepfake-like attack was the video that surfaced of an Indian woman journalist not so long ago. Or the child-kidnapping rumours that spread through WhatsApp and the subsequent mob lynchings. However, there’s the view that in post-truth times, deepfakes would be seen with caution in the inherent dilemma over believing what one views online.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“In India, people do not take these so seriously, especially on social media. It is mostly entertainment for many. Now, we are seeing people with diametrically opposing views. They often view content which they like to see. It would rather work as a reinforcer of views than a transformer,” feels political analyst Sandeep Shastri.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Open source software can create basic deepfakes if someone wanted to hurt somebody. The potential scale of danger and damage looms larger for influential figures and nations at war.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“While deep fakes can be used to damage societies, it is important that collectively society takes steps to become sensitised to ways that media can be used to manipulate opinions and choices, and allow people to develop skills that build awareness and context to what they see and believe,” adds Hickok.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A video emerged recently of an ‘Iranian’ boat near an attacked oil tanker in the Persian Gulf. Deepfake or not, the authenticity of the video was questionable. If used wily, it could have triggered a war.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to Hickok, society has to get more resilient to manipulation. “This includes spoken, written, seen as well as heard information. We have to learn to question the basis on which we confirm trust. Multiple forms of verification may help to address spurious media and information,” she says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Deepfakes are no surprise as social media feed into the small and large divisions and differences of multitudes. Emergence of such potentially dangerous AIs isn’t taken quite seriously by the tech czars. In fact, it is a matter of economy for them.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Oscar Schwartz writes in The Guardian that ‘technological solutionism’ in the ‘attention economy’ may not be the real approach. “And herein lies the problem: by formulating deepfakes as a technological problem, we allow social media platforms to promote technological solutions to those problems – cleverly distracting the public from the idea that there may be more fundamental problems with powerful Silicon Valley tech platforms,” Schwartz warns.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“The measures do not fall on the regulators alone. I think, individuals (by introspection and building awareness), society (through education), the legal system (stringent evidentiary requirements and capacity building) industry (differentiating recreational and prejudicial content, tagging content that is manipulated, etc.) and regulators (enabling accountability, oversight, transparency and redress) can all contribute to a more resilient society,” observes Hickok.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In India, viewing a video is still considered close to truth, almost sacred by the vast majority. Necessarily, it would not require a technologically advanced deepfake, especially in the backward rural pockets, to rile up and aggravate biases and prejudices.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Deepfakes can further existing biases and manipulate opinions and choices. They can disrupt trust inherent in societal groups to co-exist and politically, they can breed distrust in leadership and capability. That said, deepfakes can be used for humour and satire. Ultimately, the impact will be shaped by a number of factors including pre-existing biases, individual response, etc.,” Hickok elaborates.&lt;/p&gt;
&lt;p&gt;On a lighter note, deepfakes could be helpful too. We could very well do away with some of our television news presenters.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake'&gt;https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Rajmohan Sudhakar</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-21T15:42:12Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
