<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">




    



<channel rdf:about="https://cis-india.org/internet-governance/blog/online-anonymity/search_rss">
  <title>We are anonymous, we are legion</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 161 to 175.</description>
  
  
  
  
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/designing-a-human-rights-impact-assessment-for-icann2019s-policy-development-processes"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/ai-full-spectrum-regulatory-challenge-launch-workshop-reference-files"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/modern-war-institute-september-30-2019-arindrajit-basu-and-karan-saini-setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/the-news-minute-geetika-mantri-september-28-2019-sc-directs-govt-to-further-regulate-social-media"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gayathri-puthran-comparison-of-manila-principles-to-draft-it-intermediary-guidelines-rules"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/about/newsletters/september-2019-newsletter"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/hindu-business-line-varun-aggarwal-september-27-2019-millions-of-kids-in-india-access-the-net-on-their-parents-devices-says-study"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/nlud-journal-of-legal-studies-september-27-2019-gurshabad-grover-torsha-sarkar-rajashri-seal-neil-trivedi-examining-the-constitutionality-of-ban-on-broadcast-of-news-by-private-fm-and-community-radio-stations"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/development-informatics-paper-number-81-aayush-rathi-and-ambika-tandon-capturing-gender-and-class-inequities"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/guardian-september-3-2019-turning-off-the-internet"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/participation-in-the-meeting-of-litd-17-at-bis"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/policy-design-jam"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/cis-joins-the-christchurch-call-advisory-network"/>
        
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival">
    <title>AI for Good</title>
    <link>https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival</link>
    <description>
        &lt;b&gt;CIS organised a workshop titled ‘AI for Good’ at the Unbox Festival in Bangalore from 15th to 17th February, 2019. The workshop was led by Shweta Mohandas and Saumyaa Naidu. In the hour-long workshop, the participants were asked to imagine an AI-based product to bring forward the idea of ‘AI for social good’.&lt;/b&gt;
        &lt;p&gt;The report was edited by Elonnai Hickok.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The workshop was aimed at examining the current narratives around AI and imagining how these may transform with time. It raised questions about how we can build an AI for the future, and traced the implications relating to social impact, policy, gender, design, and privacy.&lt;/p&gt;
&lt;h3&gt;Methodology&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The rationale for conducting this workshop in a design festival was to ensure a diverse mix of participants. The participants in the workshop came from varied educational and professional backgrounds who had different levels of understanding of technology. The workshop began with a discussion on the existing applications of artificial intelligence, and how people interact and engage with it on a daily basis. This was followed by an activity where the participants were provided with a form and were asked to conceptualise their own AI application which could be used for social good. The participants were asked to think about a problem that they wanted the AI application to address and think of ways in which it would solve the problem. They were also asked to mention who will use the application. It prompted participants to provide details of the AI application in terms of the form, colour, gender, visual design, and medium of interaction (voice/ text). This was intended to nudge the participants into thinking about the characteristics of the application, and how it will lend to the overall purpose. The form was structured and designed to enable participants to both describe and draw their ideas. The next section of the form gave them multiple pairs of principles. They were asked to choose one principle from each pair. These were conflicting options such as ‘Openness’ or ‘Proprietary’, and ‘Free Speech’ or ‘Moderated Speech’. The objective of this section was to illustrate how a perceived ideal AI that satisfies all stakeholders can be difficult to achieve, and that the AI developers at times may be faced with a decision between profitability and user rights.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;Participants were asked to keep their responses anonymous. These responses were then collected and discussed with the group. The activity led to the participants engaging in a discussion on the principles mentioned in the form. Questions around where the input data to train the AI would come from, or what type of data the application will collect were discussed. The responses were used to derive implications on gender, privacy, design, and accessibility.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/ConceptualiseAI.jpg" alt="Conceptualise AI" class="image-inline" title="Conceptualise AI" /&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Responses&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/Responses.jpg" alt="" class="image-inline" title="" /&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Analysis&lt;/h3&gt;
&lt;p&gt;Although the responses were varied, they shared a few key similarities and prompted common observations.&lt;/p&gt;
&lt;h3&gt;Participants’ Familiarity with AI&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The participants’ understanding of AI was based on what they read and heard from various sources. While discussing the examples of AI, the participants were familiar with not just the physical manifestation of AI such as robots, but also AI software. However when asked to define an AI the most common explanations were, bots, software, and the use of algorithms to make decisions using large amounts of data. The participants were optimistic of the way AI could be used for social good. However, some of them showed concern about the implications on privacy.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Perception of AI Among Participants&lt;/h3&gt;
&lt;p class="Normal1"&gt;With the workshop, our aim was to have the participants reflect on their perception of AI based on their exposure to the narratives around AI by companies and the government.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The participants were given the brief to imagine an AI that could solve a problem or be used for social good. Most participants considered AI to be a positive tool for social impact. It was seen as a problem solver. The ideas conceptualised by the participants varied from countering fake news, wildlife conservation, resource distribution, and mental health. This brought to focus the range of areas that were seen as pertinent for an AI intervention. Most of the responses dealt with concerns that affect humans directly, the one aimed at wildlife conservation being the only exception.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;span&gt;On being asked, who will use the AI application, it was interesting to note that all the responses considered different stakeholders such as individuals, non profits, governments and private companies to be the end user. However, it was interesting that through the discussion the harms that might be caused by the use of AI by these stakeholders were not brought up. For example, the use of AI for resource distribution did not take into consideration the fact that the government could provide unequal distribution based on the existing biased datasets.&lt;/span&gt; &lt;a name="fr1"&gt;&lt;/a&gt; &lt;span&gt;Several of the AI applications were conceptualised to work without any human intervention. For example, one of the ideas proposed was to use AI as a mental health counsellor which was conceptualised as a chatbot that would learn more about human psychology with each interaction. It was assumed that such a service would be better than a human psychologist who can be emotionally biased. Similarly, while discussing the idea behind the use of AI for preventing the spread of fake news, the participant believed that the indication coming from an AI would have greater impact than one coming from a human. They believed that the AI could provide the correct information and prevent the spread of fake news. &lt;/span&gt;&lt;span&gt;By discussing these cases we were able to highlight that the complete reliance on technology could have severe consequences.&lt;/span&gt;&lt;a name="fr2"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Form and Visual Design of the AI Concepts&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;In most cases, the participants decided the form and visual design of their AI concepts keeping in mind its purpose. For instance, the therapy providing AI mentioned earlier, was envisioned as a textual platform, while a ‘clippy type’ add on AI tool was thought of for detecting fake news. Most participants imagined the AI application to have a software form, while the legal aid AI application was conceptualised to have a human form. This revealed that the participants perceived AI to be both a software and a physical device such as a robot.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Accessibility of the Interfaces&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The purpose of including the type of interface (voice or text) while conceptualising the AI application was to push the participants towards thinking about accessibility features. We aimed to have the participants think about the default use of the interface, both in terms of language and accessibility. The participants though cognizant of the need to have a large number of users, preferred to have only textual input into the interface, not anticipating the accessibility concerns.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The choices between access vs cost, and accessibility vs scalability were also questioned by the participants during the workshop. They enquired about the meaning of the terms as well as discussed the difficulty in having an all inclusive interface. Some of the responses consisted only of text inputs, especially for sensitive issues involving interactions, such as for therapy or helplines. This exercise made the participants think about the end user as well as the ‘AI for all’ narrative. We decided to add these questions that made the participants think about how the default ability, language, and technological capability of the user is taken for granted, and how simple features could help more people interact with the application. This discussion led to the inference that there is a need to think about accessibility by design during the creation of the application and not as an afterthought.&lt;a name="fr3"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Biases Based on Gender&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;We intended for the participants to think about the inherent biases that creep into creating an AI concept. These biases were evident from deciding identifiably male names, to deciding a male voice when the application needed to be assertive, or a female voice and name for when it was dealing with school children. Most of the other participants either did not mention the gender or they said that the AI could be gender neutral or changeable.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;These observations are also revealing of the existing narrative around AI. The popular AI interfaces have been noted to exemplify existing gender stereotypes. For example, the virtual assistants were given female identifiable names and default female voices such as Siri, Alexa, and Cortana. The more advanced AI were given male identifiable names and default male voices such as Watson, Holmes etc.&lt;a name="fr4"&gt;&lt;/a&gt; &lt;span&gt;Although these concerns have been pointed out by several researchers, there needs to be a visible shift towards moving away from existing gender biases.&lt;/span&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Concerns around Privacy&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Though the participants were aware of the privacy implications of data driven technologies, they were unsure of how their own AI concept could deal with questions of privacy. The participants voiced concerns about how they would procure the data to train the AI but were uncertain about their data processing practices. This included how they would store the data, anonymise the data, or prevent third parties from accessing it. For example, during the activity, it was pointed out to the participants that there would be sensitive data collected in applications such as therapy provision, legal aid for victims of abuse, and assistance for people with social anxiety. In these cases, the participants stated that they would ensure that the data was shared responsibly, but did not consider the potential uses or misuses of this shared data.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Choices between Principles&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;This part of the exercise was intended to familiarise the participants with certain ethical and policy questions about AI, as well as to look at the possible choices that AI developers have to make. Along with discussing the broader questions around the form and interface of AI, we wanted the participants to also look at making decisions about the way the AI would function. The intent behind this component of the exercise was to encourage the participants to question the practices of AI companies, as well as understand the implications of choices while creating an AI. As the language in this section was based on law and policy, we spent some time describing the terms to the participants. Even as some of the options presented by us were not exhaustive or absolute extremes, we placed this section to demonstrate the complexity in creating an AI that is beneficial for all. We intended for the participants to understand that an AI that is profitable to the company, free for people, accessible, privacy respecting, and open source, though desirable may be in competition with other interests such as profitability and scalability.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The participants were urged to think about how decisions regarding who can use the service, how much transparency and privacy the company will provide, are also part of building an AI. Taking an example from the responses, we talked about how having a closed proprietary software in case of AI applications such as providing legal aid to victims of abuse would deter the creation of similar applications. However, after the terms were explained, the participants mostly chose openness over proprietary software, and access over paid services.&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Conclusion&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The aim of this exercise was to understand the popular perception of AI. The participants had varied understanding of AI, but were familiar with the term. They also knew of the popular products that claim to use AI. Since the exercise was designed for people as an introduction to AI policy, we intended to keep questions around data practices out of the concept form. Eventually, with this exercise, we, along with the participants, were able to look at how popular media sells AI as an effective and cheaper solution to social issues. The exercise also allowed the participants to understand certain biases with gender, language, and ability. It also shed light on how questions of access and user rights should be placed before the creation of a technological solution. New technologies such as AI are being featured as problem solvers by companies, the media and governments. However, there is a need to also think about how these technologies can be exclusionary, misused, or how they amplify existing socio economic inequities.&lt;/p&gt;
&lt;hr /&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;span&gt;[1]. &lt;/span&gt;&lt;a class="external-link" href="https://www.bizjournals.com/sanfrancisco/news/2019/08/26/maximizing-the-potential-of-ai-starts-with-trust.html"&gt;https://www.bizjournals.com/sanfrancisco/news/2019/08/26/maximizing-the-potential-of-ai-starts-with-trust.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[2]. &lt;a class="external-link" href="https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/"&gt;https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[3]. &lt;a class="external-link" href="https://www.vox.com/the-goods/2018/11/29/18118469/instagram-accessibility-automatic-alt-text-object-recognition"&gt;https://www.vox.com/the-goods/2018/11/29/18118469/instagram-accessibility-automatic-alt-text-object-recognition&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[4]. &lt;a class="external-link" href="https://www.theguardian.com/pwc-partner-zone/2019/mar/26/why-are-virtual-assistants-always-female-gender-bias-in-ai-must-be-remedied"&gt;https://www.theguardian.com/pwc-partner-zone/2019/mar/26/why-are-virtual-assistants-always-female-gender-bias-in-ai-must-be-remedied&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival'&gt;https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas and Saumyaa Naidu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-13T05:32:28Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/designing-a-human-rights-impact-assessment-for-icann2019s-policy-development-processes">
    <title>Designing a Human Rights Impact Assessment for ICANN’s Policy Development Processes</title>
    <link>https://cis-india.org/internet-governance/blog/designing-a-human-rights-impact-assessment-for-icann2019s-policy-development-processes</link>
    <description>
        &lt;b&gt;As co-chairs of the Cross Community Working Party on Human Rights (CCWP-HR) at the Internet Corporation for Assigned Names and Numbers (ICANN), Akriti Bopanna and Collin Kurre carried out a Human Rights Impact Assessment of ICANN's processes. It was the first time such an experiment had been conducted, and it was unique in being a multi-stakeholder attempt.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;This report outlines the iterative research-and-design process carried  out between November 2017 and July 2019, focusing on successes and  lessons learned in anticipation of the ICANN Board’s long-awaited  approval of the Work Stream 2 recommendations on Accountability. The  process, findings, and recommendations will be presented by Akriti and  Austin at CCWP-HR’s joint session with the Government Advisory Council  at ICANN66 in Montreal during 2nd-8th November.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Click to download the &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/designing-a-human-rights-impact-assessment-for-icann2019s-policy-development-processes"&gt;full research paper here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/designing-a-human-rights-impact-assessment-for-icann2019s-policy-development-processes'&gt;https://cis-india.org/internet-governance/blog/designing-a-human-rights-impact-assessment-for-icann2019s-policy-development-processes&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Collin Kurre, Akriti Bopanna and Austin Ruckstuhl</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-10-03T14:43:28Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/ai-full-spectrum-regulatory-challenge-launch-workshop-reference-files">
    <title>AI: Full Spectrum Regulatory Challenge Launch Workshop [Reference Files]</title>
    <link>https://cis-india.org/internet-governance/ai-full-spectrum-regulatory-challenge-launch-workshop-reference-files</link>
    <description>
        &lt;b&gt;These are the files released at the AI Full Spectrum Regulatory Challenge Launch Event, organised by CIS and CCG-NLUD on September 27, 2019. At the event, Sunil Abraham discussed the draft policy brief linked below, which is an output of the Regulatory Practices Lab at CIS.&lt;/b&gt;
        
&lt;p&gt;The event poster can be found &lt;a href="https://cis-india.org/internet-governance/ai-reg-paper-event-files/ai-rpl-poster-06" class="internal-link" title="AI RPL Poster"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The infographic in the policy brief can be found &lt;a href="https://cis-india.org/internet-governance/ai-reg-paper-event-files/ai-full-spectrum-regulatory-challenge-twitter" class="internal-link" title="AI Full Spectrum Regulatory Challenge Infographic"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The working draft that was released at the workshop can be found &lt;a href="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft-pdf" class="internal-link" title="Artificial Intelligence: A Full-Spectrum Regulatory Challenge (Working Draft) PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/ai-full-spectrum-regulatory-challenge-launch-workshop-reference-files'&gt;https://cis-india.org/internet-governance/ai-full-spectrum-regulatory-challenge-launch-workshop-reference-files&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>pranav</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Regulatory Practices Lab</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2020-08-04T06:08:48Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft">
    <title>Artificial Intelligence: a Full-Spectrum Regulatory Challenge [Working Draft]</title>
    <link>https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft</link>
    <description>
        
&lt;p&gt;Today, there are certain misconceptions regarding the regulation of AI. Some corporations would like us to believe that AI is being developed and used in a regulatory vacuum. Others, in civil society organisations, believe that AI is a regulatory circumvention strategy deployed by corporations, and as a result call for onerous regulations targeting corporations. However, some uses of AI by corporations can be completely benign, and some uses of AI by the state can result in the most egregious human rights violations. Therefore, policymakers need to use every regulatory tool in their arsenal to unlock the benefits of AI and mitigate its harms.&lt;/p&gt;
&lt;p&gt;This policy brief proposes a granular, full-spectrum approach to the regulation of AI, depending on who is using AI, who is impacted by that use, and which human rights are affected. Everything from deregulation, to forbearance, to updated regulations, to absolute and blanket prohibitions needs to be considered depending on the specifics. This approach stands in contrast to approaches based on ethics, omnibus law, homogeneous principles, and human rights, which would result in inappropriate under-regulation or over-regulation of the sector.&lt;/p&gt;
&lt;p&gt;Find a copy of the working draft &lt;a href="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft-pdf" class="internal-link" title="Artificial Intelligence: A Full-Spectrum Regulatory Challenge (Working Draft) PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft'&gt;https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>sunil</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Regulatory Practices Lab</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-08-04T06:10:13Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/modern-war-institute-september-30-2019-arindrajit-basu-and-karan-saini-setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying">
    <title>Setting International Norms of Cyber Conflict is Hard, But that Doesn't Mean that We Should Stop Trying</title>
    <link>https://cis-india.org/internet-governance/blog/modern-war-institute-september-30-2019-arindrajit-basu-and-karan-saini-setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying</link>
    <description>
        &lt;b&gt;Last month, cyber-defense analyst and geostrategist Pukhraj Singh penned a stinging epitaph, published by MWI, for global norms-formulation processes that are attempting to foster cyber stability and regulate cyber conflict—specifically, the Tallinn Manual.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Arindrajit Basu and Karan Saini was published by &lt;a class="external-link" href="https://mwi.usma.edu/setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying/"&gt;Modern War Institute&lt;/a&gt; on September 30, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;His words are important, and should be taken seriously by the legal and technical communities that are attempting to feed into the present global governance ecosystem. However, many of his arguments seem to suffer from an unjustified and dismissive skepticism of any form of global regulation in this space.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He believes that the unique features of cyberspace render governance through the application of international law close to impossible. Given the range of developments that are in the pipeline in the global cyber norms proliferation process, this is an excessively defeatist attitude toward modern international relations. It also unwittingly encourages the continued weaponization of cyberspace by fomenting a “no holds barred” battlespace, to the detriment of the trust that individuals can place in the security and stability of the ecosystem.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;“The Fundamentals of Computer Science”&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Singh argues that the “fundamentals of computer science” render rules of international humanitarian law (IHL)—which serve as the governing framework during armed conflict in other domains—inapplicable, and that lawyers and policymakers have gotten cyber horribly wrong. Singh theorizes that in the case of the United States having pre-positioned espionage malware in Russian military networks, that malware could have been “repurposed or even reinterpreted as an act of aggression.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The possibility of a fabricated act of espionage being used as justification for an escalated response exists within the realm of analogous espionage, too. A reconnaissance operation that has been compromised can also be repurposed midway into a full-blown armed attack, or could be reinterpreted as justification for an escalatory response. However, &lt;a href="https://opil.ouplaw.com/view/10.1093/law:epil/9780199231690/law-9780199231690-e401"&gt;i&lt;/a&gt;&lt;a href="https://opil.ouplaw.com/view/10.1093/law:epil/9780199231690/law-9780199231690-e401"&gt;nternational &lt;/a&gt;&lt;a href="https://opil.ouplaw.com/view/10.1093/law:epil/9780199231690/law-9780199231690-e401"&gt;l&lt;/a&gt;&lt;a href="https://opil.ouplaw.com/view/10.1093/law:epil/9780199231690/law-9780199231690-e401"&gt;aw states&lt;/a&gt; that self-defense can only be exercised when the “necessity of self-defense is instant, overwhelming, leaving no choice of means, and no moment of deliberation.” In order to legitimize any action taken under the guise of self-defense, the threat would have to be imminent and the response both necessary and proportionate. There is nothing inherently unique in the nature of cyber conflict that would render the traditional law of self-defense moot.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Further, the presumption that cyber operations are ambiguous and often uncontrollable, as Singh suggests, is flawed. An exploit that is considered “deployment-ready” is the result of an attacker’s attempts at fine-tuning variables—until it is determined that the particular vulnerability can be exploited in a manner that is considered to be reasonably reliable. An exploit may have to be worked upon for quite some time for it to behave exactly how the attacker intends it to. While it is true that there still may be unidentified factors that can potentially alter the behavior of a well-developed exploit, a skilled operator or malware author would nonetheless have a reasonable amount of certainty that an exploit code’s execution will result in the realization of only a certain possible set of predefined outcomes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It is true that a number of remote exploits that target systems and networks &lt;a href="https://media.blackhat.com/bh-us-10/whitepapers/Meer/BlackHat-USA-2010-Meer-History-of-Memory-Corruption-Attacks-wp.pdf"&gt;may make use of&lt;/a&gt; unreliable vulnerabilities, where outcomes &lt;a href="https://googleprojectzero.blogspot.com/2015/06/what-is-good-memory-corruption.html"&gt;may not be fully apparent&lt;/a&gt; prior to execution—and sometimes even afterward. However, for most deployment-ready exploits, this would simply not be the case. In fact, the example of the infamous Stuxnet malware, which Singh uses in his article, helps buttress our point.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Singh questions whether India should have interpreted the &lt;a href="https://www.indiatoday.in/india/north/story/stuxnet-cyber-war-critical-infrastructure-of-india-ntro-115273-2012-09-05"&gt;widespread infection of systems&lt;/a&gt; within the region—which also happened to affect certain critical infrastructure—as an armed attack. This question can be cursorily dismissed since we now know that Stuxnet did not cause any deliberate damage to Indian computing infrastructure. A &lt;a href="https://www.reuters.com/article/us-usa-cyberweapons-specialreport/special-report-u-s-cyberwar-strategy-stokes-fear-of-blowback-idUSBRE9490EL20130510"&gt;2013 report by journalist Joseph Menn&lt;/a&gt; correctly states that &lt;span style="text-decoration: underline;"&gt;“the only place deliberately affected [by Stuxnet] was an Iranian nuclear facility.”&lt;/span&gt; Therefore, for India to claim mere infection of systems located within the bounds of its territory as having been an armed attack, it would have to concretely demonstrate that the operators of Stuxnet caused “grave harm”—as described in IHL—purely by way of having infected those machines, through execution of malicious instructions programmed in the malware’s payload.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;At the same time, it should not be dismissed that the act of the Stuxnet malware infecting a machine could very well be interpreted by a state as constituting an armed attack. However, given the current state of advancement in malware decompilation and reverse-engineering studies, the process of deducing the instructions that a particular malicious program seeks to execute can in most cases be performed in a reasonably reliable manner. Thus, for a state to make such a claim, it would have to prove that the malware did indeed cause grave harm meeting the “scale and effects” threshold laid down in &lt;em&gt;Nicaragua v. United States&lt;/em&gt;—whether the harm was caused by operator interaction or by preprogrammed instructions—along with sufficient reasoning and evidence for attributing it to a state.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;An analysis of the Stuxnet code made it apparent that its operators were seeking out machines that had the Siemens STEP 7 or SIMATIC WinCC software installed. The authors of the malware quite clearly had prior knowledge that the nuclear centrifuges they intended to target made use of a particular type of programmable logic controller, which the STEP 7 and WinCC software interacted with. On the basis of this prior knowledge, the authors of Stuxnet &lt;a href="https://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/w32_stuxnet_dossier.pdf"&gt;made design choices&lt;/a&gt; by which, upon infection, target machines would report to the Stuxnet command-and-control server whether or not the infected system had the STEP 7 or WinCC software installed, along with identifiers such as operating system version, IP address, workstation name, and domain name. This allowed the operators of Stuxnet to easily identify and distinguish the machines they would ultimately attack in fulfilling their objectives. In effect, this gave them some amount of control over the scale of damage they would deliberately cause.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It has been &lt;a href="https://www.cnet.com/news/stuxnet-delivered-to-iranian-nuclear-plant-on-thumb-drive/"&gt;theorized&lt;/a&gt; that the malware reached the nuclear facility in Iran through a flash drive. It may be true that widespread and unnecessary propagation of the worm—which could be described as it “going out of control”—was not something the operators had intended (as it would attract unwanted attention and raise alarm bells across the board). It has nonetheless been several years since Stuxnet was in action, and there have been no documented cases of Stuxnet having caused &lt;em&gt;grave harm&lt;/em&gt; to Indian (or other) computers. For all practical purposes, it could be said that the risk of collateral damage was minimized, as the operators were able to control the execution of the damaging components of the malware to a degree that could be interpreted as having complied with IHL—thereby making it a &lt;em&gt;calculated&lt;/em&gt; cyberattack, with &lt;em&gt;controllable&lt;/em&gt; effects.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, if the adverse effects of the operation were to be indiscriminate (i.e., machines were tangibly damaged immediately upon being infected), and could not be controlled by the operator within reasonable bounds, then the rules of IHL would render the operation illegal—a red line that, among other declarations, the &lt;a href="https://www.justsecurity.org/66194/frances-major-statement-on-international-law-and-cyber-an-assessment/"&gt;recent French statement&lt;/a&gt; on the application of international law to cyberspace recognizes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;“Bizarre and Regressive”: The Westphalian Precept of Territoriality&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Singh’s next grievance is with the precept of territoriality and sovereignty in cyberspace. However, the reasoning he provides decrying this concept is unclear at best. The International Group of Experts authoring the Tallinn Manual argued that “cyber activities occur on territory and involve objects, or are conducted by persons or entities, over which States may exercise their sovereign prerogatives.” They continued to note that even though cyber operations can transcend territorial domains, they are conducted by “individuals and entities subject to the jurisdiction of one or more state.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Contrary to Singh’s assertions, our reasoning is entirely in line with the “defend forward” and “persistent engagement” strategies adopted by US defense experts. In fact, Gen. Paul Nakasone, commander of US Cyber Command—&lt;a href="https://www.schneier.com/blog/archives/2019/02/gen_nakasone_on.html"&gt;whose interview&lt;/a&gt; Singh cites to explain these strategies—explicitly states in that interview that “we must ‘defend forward’ in cyberspace as we do in the physical domains. . . . [Naval and air forces] patrol the seas and skies to ensure that they are positioned to defend our country before our borders are crossed. The same logic applies in cyberspace.” This is a recognition of the Westphalian precept of territoriality in cyberspace—which includes the right to take pre-emptive measures against adversaries before the people and objects within a nation’s sovereign borders are negatively impacted.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Below-the-Threshold Operations&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Singh also argues that most cyber operations would not reach the threshold of an armed attack required to invoke IHL. He concludes, therefore, that applying the rules of IHL “bestows another garb of impunity upon rogue cyber attacks.” However, as discussed above, the application of IHL does not require a certain threshold of intensity, but the mere application of armed force that is attributable to a state.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Therefore, laying down “red lines” by, for example, applying the &lt;a href="https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule1"&gt;principle of distinction&lt;/a&gt;, which seeks to minimize damage to civilian life and property, actually works toward setting legal rules that seek to prevent the negative civilian fallout of cyber conflict. There appears to be no reason why any cyberattack by a state should harm civilians without the state using all means possible to avoid this harm. If there is an ongoing armed conflict, this entails compliance with the IHL principles of &lt;a href="https://gsdrc.org/topic-guides/international-legal-frameworks-for-humanitarian-action/concepts/overview-of-international-humanitarian-law/"&gt;necessity and proportionality&lt;/a&gt;, ensuring that any collateral damage ensuing as a result of an operation is proportionate to the military advantage being sought.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Moreover, we agree that certain information operations may not cause any damage in terms of injury to human life or property. But IHL is not the only framework for governing cyber conflict. Ongoing cyber norms proliferation efforts are attempting to move beyond the rigid application of international law to account for the unique challenges of cyberspace. Despite the flaws in the process thus far, individuals from a variety of backgrounds and disciplines must engage meaningfully and shape effective regulation in this space. Singh’s “garb of impunity” exists when there is a lack of restrictions on collateral damage caused by cyber operations, to the detriment of civilian life and property alike.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Obstacles in Developing Customary International Law&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Singh’s third argument concerns the fetters limiting the development of customary international law in the cyber domain. This is a valid concern. Until recently, most states involved in cyber operations have adopted a stance of silence and ambiguity with regard to their legal position on the applicability of international law in cyberspace or their position on the Tallinn Manual.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This is due to &lt;a href="https://www.cambridge.org/core/journals/american-journal-of-international-law/article/rule-book-on-the-shelf-tallinn-manual-20-on-cyberoperations-and-subsequent-state-practice/54FBA2B30081B53353B5D2F06F778C14"&gt;multiple reasons&lt;/a&gt;: First, states are not certain if the rules of the Tallinn Manual protect their long-term interests of gaining covert operational advantages in the cyber domain, which acts as a disincentive for strongly endorsing the rules laid out therein. Second, even those states keen on applying and adhering to the manual may not be able to do so in the absence of technical and effective processes that censure other states that do not comply. Given this ambiguity, states have demonstrated a preference to engage in cyber operations and counteroperations that are below the threshold—in other words, those that do not bring IHL into play. However, &lt;a href="https://www.cambridge.org/core/journals/american-journal-of-international-law/article/rule-book-on-the-shelf-tallinn-manual-20-on-cyberoperations-and-subsequent-state-practice/54FBA2B30081B53353B5D2F06F778C14"&gt;as others have convincingly argued&lt;/a&gt;, it is incorrect to assume that the current trend of silence and ambiguity will continue.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Recent developments indicate that the variety of normative processes and actors alike may render the Tallinn Manual more relevant as a focal point in the discussions. &lt;a href="https://www.gov.uk/government/speeches/cyber-and-international-law-in-the-21st-century"&gt;The UK&lt;/a&gt;, &lt;a href="https://www.lawfareblog.com/frances-cyberdefense-strategic-review-and-international-law"&gt;France&lt;/a&gt;, &lt;a href="https://www.lawfareblog.com/germanys-position-international-law-cyberspace"&gt;Germany&lt;/a&gt;, &lt;a href="https://www.justsecurity.org/64490/estonia-speaks-out-on-key-rules-for-cyberspace/"&gt;Estonia&lt;/a&gt;, &lt;a href="https://www.justsecurity.org/wp-content/uploads/2017/06/Cuban-Expert-Declaration.pdf"&gt;Cuba&lt;/a&gt; (backed by China and Russia), and the &lt;a href="https://www.justsecurity.org/wp-content/uploads/2016/11/Brian-J.-Egan-International-Law-and-Stability-in-Cyberspace-Berkeley-Nov-2016.pdf"&gt;United States&lt;/a&gt; have all engaged in public posturing in advocacy of their respective positions regarding the applicability of international law in cyberspace, in varying degrees of detail—which is essentially customary international law in the making. The statements made by a number of delegations at the recently concluded &lt;a href="https://twitter.com/RungRage/status/1176732729615908864"&gt;first substantive session&lt;/a&gt; of the United Nations’ Open-Ended Working Group covered a broad range of issues, from capacity building to the application of international law—a first step toward fostering consensus among the variety of global actors.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Positive Conflict and the Future of Cyber Norms&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The final argument—a theme that runs from the beginning of Singh’s article—is a stark criticism of Western-centric cyber policy processes. Despite attempts to foster inclusivity, efforts like those that produced the Tallinn Manual are still driven largely from and by the United States in an attempt to, as Singh describes it, keep “cyber offense fully potentiated.” This is an unfortunate reality, but one that is not limited solely to the cyber domain. For example, in an &lt;a href="https://people.duke.edu/~pfeaver/dunlap.pdf"&gt;excellent paper&lt;/a&gt; written in 2001, retired US Air Force Maj. Gen. Charles Dunlap explained “that ‘lawfare,’ that is, the use of law as a weapon of war, is the newest feature of 21st century combat.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We are presented therefore with two options: either sit back and witness the hegemonization of policy discourse by a limited number of powerful states, or actively seek to contest these assumptions by undertaking adversarial work across standards-setting bodies, multilateral and multi-stakeholder norms-setting forums, as well as academic and strategic settings. In &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2916171"&gt;a recent paper&lt;/a&gt;, international law scholar Monica Hakimi argues that international law can serve as a fulcrum for facilitating positive conflict in the short run between a variety of actors across industry, civil society, and military and civilian government entities, which can lead to the projection of shared governance endeavors in the long run. Despite its several flaws, the Tallinn Manual can serve as this type of fulcrum.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In writing a premature eulogy of efforts to bring to realization a set of norms in cyberspace, Singh dismisses the fact that, historically, &lt;a href="https://cis-india.org/internet-governance/files/gcsc-research-advisory-group.pdf"&gt;global governance regimes&lt;/a&gt; have taken considerable time and effort to come into being, emerging only after an arduous process of continuous prodding and probing. This process necessitates that any existing assumptions—and the bases on which they are constructed—are challenged regularly, so that we can enumerate and ultimately arrive at an agreeable definition of what works and what does not. Rejecting these processes in their entirety foments a global theater of uncertainty, with no benchmarks for cooperation that stakeholders in this domain can reasonably rely on.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/modern-war-institute-september-30-2019-arindrajit-basu-and-karan-saini-setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying'&gt;https://cis-india.org/internet-governance/blog/modern-war-institute-september-30-2019-arindrajit-basu-and-karan-saini-setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Arindrajit Basu and Karan Saini</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-10-14T15:04:03Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/the-news-minute-geetika-mantri-september-28-2019-sc-directs-govt-to-further-regulate-social-media">
    <title>SC directs govt to further regulate social media: Is it necessary? Experts weigh in</title>
    <link>https://cis-india.org/internet-governance/news/the-news-minute-geetika-mantri-september-28-2019-sc-directs-govt-to-further-regulate-social-media</link>
    <description>
        &lt;b&gt;With the SC's directive to the Indian government for further regulation of social media, TNM asked experts what were the challenges associated with the same.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Geetika Mantri was published in the &lt;a class="external-link" href="https://www.thenewsminute.com/article/sc-directs-govt-further-regulate-social-media-it-necessary-experts-weigh-109662"&gt;News Minute&lt;/a&gt; on September 28, 2019. Pranesh Prakash was quoted.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;The Supreme Court recently &lt;a href="https://www.hindustantimes.com/india-news/strike-a-balance-says-supreme-court-to-centre-seeks-status-report-in-3-weeks-on-framing-of-social-media-regulations/story-djEnQ62Uue407iCMPZcagK.html" target="_blank"&gt;expressed&lt;/a&gt; the need to regulate social media to curb fake news, defamation and trolling. It also asked the Union government to come up with guidelines to prevent misuse of social media while protecting users’ privacy in three weeks’ time.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The apex court made these statements while hearing a transfer petition by Facebook which has asked for petitions on regulation of social media filed in Madras, Bombay and Madhya Pradesh High Courts on similar issues to be transferred to the SC so that the scope can be expanded.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In India, social media platforms already come under the purview of the Information Technology (IT) Act, the ‘intermediaries guidelines’ that were notified under the IT Act in 2011 and the Indian Penal Code.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;With the SC's directive to the Indian government for further regulation of social media, TNM asked experts what were the challenges associated with the same.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Existing regulations and misuse&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Executive Director of the Internet Freedom Foundation (which is also an intervenor in the above case in SC) and lawyer Apar Gupta points out that under existing laws, social media channels are already required to take down content if they are directed to do so by a court or law enforcement.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There are also reporting mechanisms on these platforms, where they exercise discretion to ascertain whether a reported post is violating community guidelines and needs to be taken down. These, however, have been reported to be arbitrary – many posts on body positivity and menstruation, for instance, have been taken down in the past while other explicit imagery continues to be allowed.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“But it’s necessary to have minimum legal standards that need to be fulfilled to compel such take-downs on social media. If platforms had to take down posts based on individual complaints, it could result in many frivolous take-downs. Free speech should be the norm, and removal of content, the exception,” Apar argues.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;IT consultant Kiran Chandra says that many of the existing regulations themselves are “dangerously close to censorship and may have a chilling effect on freedom of speech, which is why cases are being fought on those in courts.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Even under existing regulations, there is scope for misuse - which has also been &lt;a href="https://www.scoopwhoop.com/jailed-for-40-days-the-story-of-up-teen-who-was-booked-for-sedition-for-his-social-media-posts/" target="_blank"&gt;documented&lt;/a&gt; in the past - to curb dissent.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“One of the key problems of a lot of regulatory measures is the vagueness of language which is exploited by state agencies to behave in a repressive way,” Kiran says. “Any regulation has to be clear and concrete so that there is no scope for overreach."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Much of fake news is driven by politics&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Fake news isn't exactly new, but its proliferation and extent have expanded manifold with social media.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Srinivas Kodali, an independent security researcher, says that it is not as though governments do not know where a good portion of fake news is coming from. “Most political parties have IT cells that dedicatedly work on creating and spreading fake news. But what is the Election Commission or anyone else doing to stop that?” he questions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Kiran points out that this machinery exists with a view to gain electoral dividends. “There can be no countering fake news without taking on these structures and the political forces behind them,” he says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He adds that social media giants also need to take responsibility. “Currently, considering the role social media companies play in the society, they are doing almost nothing [about fake news]. In fact, virality - and a lot of fake news tends to be viral - is the basis of the business model of many social media companies, including Facebook, and WhatsApp, which it owns. At the very least, these companies need to dedicate far more resources, and must provide more transparency into their functioning if any dent has to be made in countering fake news.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Kiran also says that there is a need to support websites that bust fake news, and make people more aware of the need to verify news.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Defamation and online harassment&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Experts say that when it comes to the SC’s observation that there should be redressal mechanisms for someone who has been ‘defamed’ on social media, the recourse is pretty clear-cut.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Pranesh Prakash, a fellow at the Centre for Internet and Society, says, “If it concerns defamation, it is very likely that the victim knows where the defamatory post has come from. Even if it is not an original message, the defamation law does not require you to find out the origin of such a message. Anyone who has put it, forwarded it, is liable.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;That being said, it is the social media giants that need to pick up the slack when it comes to dealing with targeted harassment and online bullying.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It has been reported &lt;a href="https://www.thenewsminute.com/article/fb-does-little-curb-hate-speech-against-muslims-dalits-minorities-study-103475" target="_blank"&gt;earlier&lt;/a&gt; that Facebook, due to its lack of understanding of the Indian context as well as its diversity, often fails to effectively remove hate speech from the platform in India. Facebook's community guidelines are unavailable in several Indian languages too.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Kiran says that while there already exist legal provisions for dealing with offensive speech, the problem is that they are either misused or underused. “Critics of the government get hit with these cases unreasonably while many who engage in hate speech and abuse are followed by the most powerful people in the country. Here again, social media firms need to massively increase the resources they spend on weeding out such content.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Privacy and surveillance concerns&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Any conversation on additional regulation of social media brings up concerns about privacy and surveillance.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Apar says if regulators want easy access to user information for curbing misuse, spread of fake news and the like, it would require online platforms to modify their products to increase surveillance - to have exact details about who said what, when and about whom. “This is why it’s important for legal standards and conditions for accessing user information to be followed. Government also needs to become more accountable on what information on users they are demanding from social media companies.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Kiran cautions that “any bid at regulating expression online has to be proportional and concrete with adequate redressal mechanisms and without any blanket provisions.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“We need strong data protection and privacy laws which restrict the scope of these companies and reduce their footprint online,” he adds, referring, for instance to Facebook's monopoly - the company also owns Instagram. “Similarly, the role they play in elections and political processes as a whole, needs to be checked.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Srinivas points out that ultimately, social media is a reflection of what is happening in the society: “If there is no rule of law offline, it won’t be there online.”&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/the-news-minute-geetika-mantri-september-28-2019-sc-directs-govt-to-further-regulate-social-media'&gt;https://cis-india.org/internet-governance/news/the-news-minute-geetika-mantri-september-28-2019-sc-directs-govt-to-further-regulate-social-media&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Geetika Mantri</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-09-30T14:28:10Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gayathri-puthran-comparison-of-manila-principles-to-draft-it-intermediary-guidelines-rules">
    <title>Comparison of the Manila Principles to Draft of The Information Technology [Intermediary Guidelines(Amendment) Rules], 2018</title>
    <link>https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gayathri-puthran-comparison-of-manila-principles-to-draft-it-intermediary-guidelines-rules</link>
    <description>
        &lt;b&gt;This paper looks at the Manila Principles intermediary liability framework in comparison to the amended draft Information Technology [Intermediaries Guidelines (Amendment)] Rules, 2018 introduced by the Ministry of Electronics and Information Technology (MeitY) in December, 2018. &lt;/b&gt;
        
&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p style="text-align: justify;"&gt;In December 2018, the Ministry of Electronics and Information Technology (MeitY) introduced amendments to the draft Information Technology [Intermediaries Guidelines (Amendment)] Rules, 2018 [“the 2018 Rules”]. The proposed changes ranged from asking intermediaries to proactively filter content using automated technology to prohibiting the promotion of substances such as cigarettes and alcohol. In &lt;a class="external-link" href="https://cis-india.org/internet-governance/resources/Intermediary Liability Rules 2018.pdf"&gt;CIS's submission&lt;/a&gt; to the Government, we highlighted our various concerns with the proposed rules. Building on the same, this paper aims to assess how the new draft rules measure up to the best practices on intermediary liability as prescribed in the Manila Principles. These principles were formulated in 2015 by a coalition of civil society groups and experts, including CIS, in order to establish best practices to guide policies pertaining to intermediary liability.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Depending on their function, intermediaries have a varying hand in hosting activism and discourse that are integral to a citizen’s right to freedom of speech and expression. The Manila Principles are an attempt at articulating best practices that lead to the development of intermediary liability regimes which respect human rights.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Consequently, the paper examines the draft rules to assess their compatibility with the Manila Principles. It provides recommendations such that, where needed, the rules are aligned with the aforementioned principles. The assessment is based on the insight into the rationale of the Manila Principles provided in its Background Paper.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Disclosure&lt;/strong&gt;: CIS is a recipient of research grants from Facebook India.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify;"&gt;Click to &lt;a class="external-link" href="https://cis-india.org/internet-governance/files/draft-rules-and-manila-principles-1"&gt;download&lt;/a&gt; the research paper which was edited by Elonnai Hickok and reviewed by Torsha Sarkar.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gayathri-puthran-comparison-of-manila-principles-to-draft-it-intermediary-guidelines-rules'&gt;https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gayathri-puthran-comparison-of-manila-principles-to-draft-it-intermediary-guidelines-rules&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Akriti Bopanna and Gayatri Puthran</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2020-06-01T07:48:17Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/about/newsletters/september-2019-newsletter">
    <title>September 2019 Newsletter</title>
    <link>https://cis-india.org/about/newsletters/september-2019-newsletter</link>
    <description>
        &lt;b&gt;The newsletter for the month of September 2019.&lt;/b&gt;
        
&lt;table class="grid listing"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;Highlights for September 2019&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;ul&gt;
&lt;li style="text-align: justify;"&gt;Centre for Internet &amp;amp; Society's &lt;a class="external-link" href="https://cis-india.org/internet-governance/news/cis-joins-the-christchurch-call-advisory-network"&gt;application for membership of the Christchurch Call Advisory Network&lt;/a&gt; has been accepted! As a part of this network, we, along with other civil society groups based out of various jurisdictions, would be providing inputs on making the Call a robust, human rights-centred initiative.&lt;/li&gt;&lt;/ul&gt;
&lt;ul&gt;
&lt;li style="text-align: justify;"&gt;A book by Amber Sinha titled '&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/rupa-publications-amber-sinha-the-networked-public"&gt;The Networked Public: How Social Media is Changing Democracy&lt;/a&gt;' was published by Rupa Publications. The book looks at how networks exert unchecked power in subverting political discourse and polarizing the public in India. Towards that, it investigates the history of misinformation and the biases that make the public susceptible to it, how digital platforms and their governance impacts the public’s behaviour in them, as well as the changing face of political targeting in a data-driven ecosystem.&lt;/li&gt;
&lt;li style="text-align: justify;"&gt;Akriti Bopanna and Gayatri Puthran co-authored &lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gayathri-puthran-comparison-of-manila-principles-to-draft-it-intermediary-guidelines-rules"&gt;a research paper&lt;/a&gt; which compares the Manila Principles to Draft of The Information Technology [Intermediary Guidelines(Amendment) Rules], 2018, introduced by the Ministry of Electronics and Information Technology (MeitY) in December, 2018.&lt;/li&gt;
&lt;li style="text-align: justify;"&gt;Gurshabad Grover and Torsha Sarkar along with Rajashri Seal and Neil Trivedi &lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/nlud-journal-of-legal-studies-september-27-2019-gurshabad-grover-torsha-sarkar-rajashri-seal-neil-trivedi-examining-the-constitutionality-of-ban-on-broadcast-of-news-by-private-fm-and-community-radio-stations"&gt;co-authored a paper&lt;/a&gt; that examines the constitutionality of the government prohibition on the broadcast of news against private and community FM channels. The authors also mapped chronologically the history of the development of community and private radio channels in India.&lt;/li&gt;
&lt;li style="text-align: justify;"&gt;Ambika Tandon and Aayush Rathi &lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/development-informatics-paper-number-81-aayush-rathi-and-ambika-tandon-capturing-gender-and-class-inequities"&gt;generated empirical evidence about the CCTV programme well underway in Delhi&lt;/a&gt;. The case study was published by Centre for Development Informatics, Global Development Institute, SEED, in the Development Informatics working paper series housed at the University of Manchester.&lt;/li&gt;
&lt;li style="text-align: justify;"&gt;Shruti Trikanand and Amber Sinha published a blog post titled &lt;a class="external-link" href="https://cis-india.org/internet-governance/digital-identity/shruti-trikanand-and-amber-sinha-september-13-2019-core-concepts-processes"&gt;Core Concepts and Processes&lt;/a&gt; by which the authors hope to arrive at a shared vocabulary to discuss and critically analyse digital identity systems, both within our team and in engagements with other stakeholders.&amp;nbsp;Pooja Saxena and Akash Sheshadri contributed to the project.&lt;/li&gt;
&lt;li style="text-align: justify;"&gt;The Global Commission on the Stability of Cyberspace released a public consultation process that sought to solicit comments and obtain feedback on the definition of “Stability of Cyberspace”, as developed by the Global Commission on the Stability of Cyberspace (GCSC). &lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/arindrajit-basu-and-elonnai-hickok-september-9-2019-submission-to-global-commission-on-stability-of-cyberspace"&gt;CIS gave detailed commentary on the definitions and suggested a new definition of cyber stability&lt;/a&gt;.&lt;/li&gt;
&lt;li style="text-align: justify;"&gt;CIS is &lt;a class="external-link" href="https://cis-india.org/raw/digital-domestic-work-india-announcement"&gt;undertaking a study on digital mediation of domestic and care work in India&lt;/a&gt;, as part of and supported by the Feminist Internet Research Network led by the Association for Progressive Communications (APC), funded by the International Development Research Centre (IDRC). The study is exploring the ways in which structural inequalities, such as those of gender and class, are being reproduced or challenged by digital platforms. The project sites are Delhi and Bangalore, where we are conducting interviews with workers, companies, and unions. In Bangalore, we are collaborating with Stree Jagruti Samiti to collect qualitative data from different stakeholders. The outputs of the research will include a report, policy brief, and other communication materials in English, Hindi, and Kannada. This study is being led by Ambika Tandon and Aayush Rathi, along with Sumandro Chattapadhyay.&lt;/li&gt;
&lt;li style="text-align: justify;"&gt;CIS-A2K has put up a call for &lt;a class="external-link" href="https://cis-india.org/a2k/blogs/call-for-joining-the-free-knowledge-movement-wikipedia-wikimedia"&gt;joining the Free Knowledge movement&amp;nbsp;#Wikipedia #Wikimedia&lt;/a&gt;.&amp;nbsp;Are you an individual or do you represent any organisation, institution, groups or enterprises? You can actually help the ‘Free Knowledge’ movement by donating photos, media, content or archives.&lt;/li&gt;&lt;/ul&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;CIS and the News&lt;/h3&gt;
&lt;p&gt;The following articles and research papers were authored by CIS secretariat during the month:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/ambika-tandon-and-aayush-rathi-gender-it-september-1-2019-doing-standpoint-theory"&gt;Doing Standpoint Theory&lt;/a&gt; (Ambika Tandon and Aayush Rathi; Gender IT.org; September 1, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/telecom/blog/business-standard-september-4-2019-shyam-ponappa-traffic-rules-mindset-and-on-time-payments"&gt;Traffic Rules, Mindset and On-Time Payments&lt;/a&gt; (Shyam Ponappa; Business Standard; September 4, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/raw/indian-express-nishant-shah-september-15-2019-kashmirs-digital-blackout-marks-a-period-darker-than-the-dark-side-of-the-moon"&gt;Kashmir’s digital blackout marks a period darker than the dark side of the moon&lt;/a&gt; (Nishant Shah; Indian Express; September 15, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/rupa-publications-amber-sinha-the-networked-public"&gt;The Networked Public: How Social Media Changed Democracy&lt;/a&gt; (Amber Sinha; Rupa Publications; September 19, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/development-informatics-paper-number-81-aayush-rathi-and-ambika-tandon-capturing-gender-and-class-inequities"&gt;Capturing Gender and Class Inequities: The CCTVisation of Delhi&lt;/a&gt; (Aayush Rathi and Ambika Tandon; Centre for Development Informatics, Global Development Institute; September 27, 2019).&lt;/li&gt;&lt;/ul&gt;
&lt;h3&gt;CIS in the News&lt;/h3&gt;
&lt;p&gt;CIS secretariat was consulted for the following articles published during the month in various publications:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/the-news-minute-september-3-2019-manasa-rao-why-having-more-cctv-cameras-does-not-translate-to-crime-prevention"&gt;Why having more CCTV cameras does not translate to crime prevention &lt;/a&gt;(Manasa Rao; The News Minute; September 3, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/deccan-herald-roshan-nair-september-4-2019-android-10-out-big-on-privacy"&gt;Android 10 out, big on ‘privacy’&lt;/a&gt; (Roshan H. Nair; Deccan Herald; September 4, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/l-actualite-magazine-isabelle-gregoire-september-11-2019-internet-pour-toutes"&gt;Internet pour toutes&lt;/a&gt; (Isabelle Grégoire; L'Actualite; September 11, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/vivek-narayanan-and-r-sivaraman-the-hindu-september-18-2019-chennai-residents-rue-fuzzy-cctv-surveillance"&gt;Chennai residents rue fuzzy CCTV surveillance&lt;/a&gt; (Vivek Narayanan and R. Srinivasan; The Hindu; September 18, 2019).&lt;/li&gt;&lt;/ul&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/hindu-business-line-varun-aggarwal-september-27-2019-millions-of-kids-in-india-access-the-net-on-their-parents-devices-says-study"&gt;Millions of kids in India access the Net on their parents’ devices, says study&lt;/a&gt; (Varun Aggarwal; Hindu Businessline; September 27, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/the-news-minute-geetika-mantri-september-28-2019-sc-directs-govt-to-further-regulate-social-media"&gt;SC directs govt to further regulate social media: Is it necessary? Experts weigh in&lt;/a&gt; (Geetika Mantri; The News Minute; September 28, 2019).&lt;/li&gt;&lt;/ul&gt;
&lt;h2&gt;&lt;a class="external-link" href="https://cis-india.org/a2k"&gt;Access to Knowledge&lt;/a&gt;&lt;/h2&gt;
&lt;p style="text-align: justify;"&gt;Access to Knowledge is a campaign to promote the fundamental principles of justice, freedom, and economic development. It deals with issues like copyrights, patents and trademarks, which are an important part of the digital landscape.&lt;/p&gt;
&lt;h3 style="text-align: justify;"&gt;Wikipedia&lt;/h3&gt;
&lt;p style="text-align: justify;"&gt;Under a grant from the Wikimedia Foundation, we are doing a project for the growth of Indic language communities and projects by designing community collaborations and partnerships that recruit and cultivate new editors and explore innovative approaches to building projects.&lt;/p&gt;
&lt;p&gt;&lt;span style="text-align: justify;"&gt;&lt;strong&gt;News&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/news/bhuvana-meenakshi-elected-mozilla-rep-for-july-2019-1"&gt;Bhuvana Meenakshi elected Mozilla Rep for July 2019&lt;/a&gt; (Bhuvana Meenakshi was selected as a Rep of the Month (July 2019) by Mozilla for her active contributions).&lt;/li&gt;&lt;/ul&gt;
&lt;h3 style="text-align: justify;"&gt;Openness&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Participation in Events&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/openness/news/fosscon-india-2019-1"&gt;FOSSCON India 2019&lt;/a&gt; (Organized by KLS Gogte Institute of Technology; Belgaum; August 29 - 31, 2019). Bhuvana Meenakshi gave a talk on "The revolution of WebXR".&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/openness/devfest19"&gt;DevFest'19&lt;/a&gt; (Organized&amp;nbsp;by Google Developers Groups; Coimbatore; September 14, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/openness/news/react-india-2019"&gt;React India 2019&lt;/a&gt; (Organized by React India; Goa; September 26 - 28, 2019).&amp;nbsp;Bhuvana Meenakshi was a speaker.&lt;/li&gt;&lt;/ul&gt;
&lt;h2&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance"&gt;Internet Governance&lt;/a&gt;&lt;/h2&gt;
&lt;p style="text-align: justify;"&gt;The Tunis Agenda of the second World Summit on the Information Society has defined internet governance as the development and application by governments, the private sector and civil society, in their respective roles of shared principles, norms, rules, decision making procedures and programmes that shape the evolution and use of the Internet. As part of internet governance work we work on policy issues relating to freedom of expression primarily focusing on the Information Technology Act and issues of liability of intermediaries for unlawful speech and simultaneously ensuring that the right to privacy is safeguarded as well.&lt;/p&gt;
&lt;h3&gt;Freedom of Speech &amp;amp; Expression&lt;/h3&gt;
&lt;p style="text-align: justify;"&gt;Under a grant from the MacArthur Foundation, CIS is doing research on the restrictions placed on freedom of expression online by the Indian government and contribute studies, reports and policy briefs to feed into the ongoing debates at the national as well as international level. As part of the project we bring you the following outputs:&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Research Papers&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/nlud-journal-of-legal-studies-september-27-2019-gurshabad-grover-torsha-sarkar-rajashri-seal-neil-trivedi-examining-the-constitutionality-of-ban-on-broadcast-of-news-by-private-fm-and-community-radio-stations"&gt;Examining the Constitutionality of the Ban on Broadcast of News by Private FM and Community Radio Stations&lt;/a&gt; (Gurshabad Grover, Torsha Sarkar, Rajashri Seal and Neil Trivedi; September 27, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gayathri-puthran-comparison-of-manila-principles-to-draft-it-intermediary-guidelines-rules"&gt;Comparison of the Manila Principles to Draft of The Information Technology&lt;/a&gt; [Intermediary Guidelines(Amendment) Rules], 2018 (Akriti Bopanna and Gayatri Puthran; September 30, 2019).&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;News&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/guardian-september-3-2019-turning-off-the-internet"&gt;Turning off the internet: Chips with Everything podcast&lt;/a&gt; (Gurshabad Grover and Ambika Tandon recorded an episode with the Guardian's podcast on digital culture, called Chips with Everything).&lt;/li&gt;&lt;/ul&gt;
&lt;h3&gt;&lt;/h3&gt;
&lt;h3&gt;Privacy&lt;/h3&gt;
&lt;p style="text-align: justify;"&gt;Under a grant from Privacy International and IDRC we are doing a project on surveillance. CIS is researching the history of privacy in India and how it shapes the contemporary debates around technology mediated identity projects like Aadhar. As part of our ongoing research, we bring you the following outputs:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Submission&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li style="text-align: justify;"&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/arindrajit-basu-and-elonnai-hickok-september-9-2019-submission-to-global-commission-on-stability-of-cyberspace"&gt;Submission to Global Commission on Stability of Cyberspace on the definition of Cyber Stability&lt;/a&gt; (Arindrajit Basu and Elonnai Hickok; September 11, 2019). &lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Participation in Events&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/policy-design-jam"&gt;Policy Design Jam &lt;/a&gt;(Organized by  Whatsapp and ISPP; Qutub Institutional Area, New Delhi; September 16, 2019). Pallavi Bedi, Akash Sheshadri and Anubha Sinha attended the event.&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today"&gt;Conceptualising India's Digital Policy Vision&lt;/a&gt; (Organized by National University of Juridical Sciences; National University of Juridical Sciences; Kolkata; September 18, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://www.partnershiponai.org/apm/"&gt;All Partners Meeting&lt;/a&gt; (Organized by Partnership on AI; London; September 26 - 27, 2019). Elonnai Hickok reprsented CIS as the co-chair for the Labour and Economy Expert Group.&lt;/li&gt;&lt;/ul&gt;
&lt;h3&gt;Digital Identity&lt;/h3&gt;
&lt;p style="text-align: justify;"&gt;Omidyar Network is investing in establishment of a three-region research alliance — to be co-led by the Institute for Technology &amp;amp; Society (ITS), Brazil, the Centre for Intellectual Property and Information Technology Law (CIPIT) , Kenya, and CIS. As part of this Alliance, CIS is examining the policy objectives of digital identity projects, how technological policy choices can be thought through to meet the objectives, and how legitimate uses of a digital identity framework may be evaluated.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Featured Research&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/digital-identity/shruti-trikanand-and-amber-sinha-september-13-2019-core-concepts-processes"&gt;Core Concepts and Processes&lt;/a&gt; (Shruti Trikanand and Amber Sinha; September 13, 2019).&amp;nbsp;&lt;em&gt;Research by Shruti Trikanad and Amber Sinha. Conceptualization by Pooja Saxena and Amber Sinha. Illustrations by Akash Sheshadri and Pooja Saxena&lt;/em&gt;.&lt;/li&gt;&lt;/ul&gt;
&lt;h3&gt;Artificial Intelligence&lt;/h3&gt;
&lt;p style="text-align: justify;"&gt;With origins dating back to the 1950s Artificial Intelligence (AI) is not necessarily new. However, interest in AI has been rekindled over the recent years due to advancements of technology and its applications to real-world scenarios. We conduct research on the existing legal and regulatory parameters:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Participation in Events&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li style="text-align: justify;"&gt;
&lt;div id="_mcePaste"&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/ai-in-healthcare"&gt;AI in Healthcare&lt;/a&gt; (Organized by Center for Information Technology and Public Policy and International Institute of Information Technology; Bangalore).&amp;nbsp;Radhika Radhakrishnan gave a talk.&lt;/div&gt;
&lt;/li&gt;
&lt;li style="text-align: justify;"&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/responsible-ai-workshop"&gt;Responsible AI Workshop&lt;/a&gt; (Organized by Facebook; September 17, 2019; New Delhi). Sunil Abraham participated in the meeting.&lt;/li&gt;
&lt;li style="text-align: justify;"&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today"&gt;Constitutionalizing Artificial Intelligence&lt;/a&gt; (Organized by &lt;span style="text-align: justify;"&gt;Constitutional Law Society; National University of Juridical Sciences; Kolkata). Arindrajit Basu delivered a lecture.&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;h3&gt;&lt;/h3&gt;
&lt;h3&gt;&lt;a style="text-align: justify;" class="external-link" href="https://cis-india.org/raw"&gt;Researchers@Work&lt;/a&gt;&lt;/h3&gt;
&lt;p style="text-align: justify;"&gt;The researchers@work programme at CIS produces and supports pioneering and sustained trans-disciplinary research on key thematics at the intersections of internet and society; organise and incubate networks of and fora for researchers and practitioners studying and making internet in India; and contribute to development of critical digital pedagogy, research methodology, and creative practice.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Announcement&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/raw/digital-domestic-work-india-announcement"&gt;Digital mediation of domestic and care work in India: Project Announcement&lt;/a&gt; (Ambika Tandon and Aayush Rathi; October 1, 2019).&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Essays on #List — Selected Abstracts&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li style="text-align: justify;"&gt;In response to a recent call for essays that social, economic, cultural, political, infrastructural, or aesthetic dimensions of the #List, we received 11 abstracts. Out of these, &lt;a class="external-link" href="https://cis-india.org/raw/essays-on-list-selected-abstracts"&gt;we have selected 4 pieces to be published&lt;/a&gt; as part of a series titled #List on the r@w blog.&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Blog Entries&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://medium.com/rawblog/hookingup-bbd0f06a8851"&gt;#HookingUp&lt;/a&gt; (Akhil Kang, Christina Thomas Dhanraj, Dhrubo Jyoti, and Gowthaman Ranganathan; August 1, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/raw/dtil-2019-call"&gt;Call for Contributions and Reflections: Your experiences in Decolonizing the Internet’s Languages!&lt;/a&gt; (P.P. Sneha; August 7, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/raw/simiran-lalvani-worker-kinship-food-delivery-mumbai"&gt;Simiran Lalvani - Workers’ fictive kinship relations in Mumbai app-based food delivery&lt;/a&gt; (Sumandro Chattapadhyay; August 16, 2019).&lt;/li&gt;&lt;/ul&gt;
&lt;h2&gt;&lt;a class="external-link" href="https://cis-india.org/telecom"&gt;Telecom&lt;/a&gt;&lt;/h2&gt;
&lt;p style="text-align: justify;"&gt;The growth in telecommunications in India has been impressive. While the potential for growth and returns exist, a range of issues need to be addressed for this potential to be realized. One aspect is more extensive rural coverage and the second aspect is a countrywide access to broadband which is low at about eight million subscriptions. Both require effective and efficient use of networks and resources, including spectrum.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Monthly Blog&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/telecom/blog/business-standard-september-4-2019-shyam-ponappa-traffic-rules-mindset-and-on-time-payments"&gt;Traffic Rules, Mindset and On-Time Payments&lt;/a&gt; (Shyam Ponappa; September 4, 2019).&lt;/li&gt;&lt;/ul&gt;
&lt;hr /&gt;
&lt;h3&gt;&lt;a class="external-link" href="http://cis-india.org/"&gt;About CIS&lt;/a&gt;&lt;/h3&gt;
&lt;p style="text-align: justify;"&gt;CIS is a non-profit organisation that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with disabilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, open access, open educational resources, and open video), internet governance, telecommunication reform, digital privacy, and cyber-security. The academic research at CIS seeks to understand the reconfigurations of social and cultural processes and structures as mediated through the internet and digital media technologies.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;strong&gt;Follow CIS on:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Twitter:&lt;a href="http://twitter.com/cis_india"&gt; http://twitter.com/cis_india&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Twitter - Access to Knowledge:&amp;nbsp;&lt;a href="https://twitter.com/CISA2K"&gt;https://twitter.com/CISA2K&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Twitter - Information Policy:&amp;nbsp;&lt;a href="https://twitter.com/CIS_InfoPolicy"&gt;https://twitter.com/CIS_InfoPolicy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Facebook - Access to Knowledge:&lt;a href="https://www.facebook.com/cisa2k"&gt; https://www.facebook.com/cisa2k&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;E-Mail - Access to Knowledge:&amp;nbsp;a2k@cis-india.org&lt;/li&gt;
&lt;li&gt;E-Mail - Researchers at Work:&amp;nbsp;raw@cis-india.org&lt;/li&gt;
&lt;li&gt;List - Researchers at Work:&amp;nbsp;&lt;a href="https://lists.ghserv.net/mailman/listinfo/researchers"&gt;https://lists.ghserv.net/mailman/listinfo/researchers&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Support CIS:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Please help us defend consumer and citizen rights on the Internet! Write a cheque in favour of 'The Centre for Internet and Society' and mail it to us at No. 194, 2nd 'C' Cross, Domlur, 2nd Stage, Bengaluru - 560071.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Collaborate with CIS:&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;We invite researchers, practitioners, artists, and theoreticians, both organisationally and as individuals, to engage with us on topics related internet and society, and improve our collective understanding of this field. To discuss such possibilities, please write to Sunil Abraham, Executive Director, at&amp;nbsp;sunil@cis-india.org&amp;nbsp;(for policy research), or Sumandro Chattapadhyay, Research Director, at&amp;nbsp;sumandro@cis-india.org&amp;nbsp;(for academic research), with an indication of the form and the content of the collaboration you might be interested in. To discuss collaborations on Indic language Wikipedia projects, write to Tanveer Hasan, Programme Officer, at&amp;nbsp;tanveer@cis-india.org.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;em&gt;CIS is grateful to its primary donor the Kusuma Trust founded by Anurag Dikshit and Soma Pujari, philanthropists of Indian origin for its core funding and support for most of its projects. CIS is also grateful to its other donors, Wikimedia Foundation, Ford Foundation, Privacy International, UK, Hans Foundation, MacArthur Foundation, and IDRC for funding its various projects&lt;/em&gt;.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/about/newsletters/september-2019-newsletter'&gt;https://cis-india.org/about/newsletters/september-2019-newsletter&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Access to Knowledge</dc:subject>
    

   <dc:date>2019-12-06T04:53:12Z</dc:date>
   <dc:type>Page</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/hindu-business-line-varun-aggarwal-september-27-2019-millions-of-kids-in-india-access-the-net-on-their-parents-devices-says-study">
    <title>Millions of kids in India access the Net on their parents’ devices, says study</title>
    <link>https://cis-india.org/internet-governance/news/hindu-business-line-varun-aggarwal-september-27-2019-millions-of-kids-in-india-access-the-net-on-their-parents-devices-says-study</link>
    <description>
        &lt;b&gt;Experts raise concern over exposing kids to predators, phishing and bullying.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Varun Aggarwal published in &lt;a class="external-link" href="https://www.thehindubusinessline.com/news/variety/millions-of-kids-in-india-access-the-neton-their-parents-devices-says-study/article29530768.ece"&gt;Hindu Businessline&lt;/a&gt; quotes Sunil Abraham.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;Youtube’s recent fine of $170 million in the US for illegally collecting personal information of children without parental consent should ring alarm bells back in India. Similar violations may be going unnoticed here as millions of kids use Internet on their parents’ devices.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A new study conducted by the Internet and Mobile Association of India (IAMAI) states 66 million Internet users in the country are in the age bracket of 5 to 11 years and they are viewing it on the devices of family members.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In today's age when adults are finding it hard to understand the extent of physical, mental, financial risk they are exposing themselves to, kids need special treatment as they are more vulnerable and not capable of making decisions for themselves.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This could be exposing young children to predators, bullying, phishing, or even malware attacks, experts feel.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“India does not have clear laws equivalent to COPPA (Children's Online Privacy Protection Act) but adhoc executive rulings, court cases and discussions. We don't have formal ways of ensuring responsible behaviour,” said Mishi Choudhary, technology lawyer and online civil liberties activist.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Social networks Tiktok was recently pulled up by the Indian government for allowing ‘inappropriate content’ being available on the platform. A &lt;em&gt;BusinessLine&lt;/em&gt; investigation later revealed that Tiktok was not alone. Many other social media platforms had similar, if not more inappropriate, content easily accessible without any restrictions or age verification.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Moreover, no serious efforts have been taken by either the government or the social media platforms to ensure that kids are not exposed to ‘inappropriate content’ or if they are collecting any private information about the kids.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Several apps have children-directed content, targeted ads are regularly served on these platforms to kids younger than 13 years of age. There needs to be clear requirement for verifiable parental consent before collecting personal information and clear information about parental control,” Choudhary said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Parents and kids are equally required to be reminded that online actions have consequences.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Internet today is a dangerous place for children. Parents should ensure that all access is supervised till the child in well into their teens and demonstrate safe practices online,” said Sunil Abraham, Executive Director at the Centre for Internet and Society.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/hindu-business-line-varun-aggarwal-september-27-2019-millions-of-kids-in-india-access-the-net-on-their-parents-devices-says-study'&gt;https://cis-india.org/internet-governance/news/hindu-business-line-varun-aggarwal-september-27-2019-millions-of-kids-in-india-access-the-net-on-their-parents-devices-says-study&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Varun Aggarwal</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-09-28T10:01:04Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/nlud-journal-of-legal-studies-september-27-2019-gurshabad-grover-torsha-sarkar-rajashri-seal-neil-trivedi-examining-the-constitutionality-of-ban-on-broadcast-of-news-by-private-fm-and-community-radio-stations">
    <title>Examining the Constitutionality of the Ban on Broadcast of News by Private FM and Community Radio Stations</title>
    <link>https://cis-india.org/internet-governance/blog/nlud-journal-of-legal-studies-september-27-2019-gurshabad-grover-torsha-sarkar-rajashri-seal-neil-trivedi-examining-the-constitutionality-of-ban-on-broadcast-of-news-by-private-fm-and-community-radio-stations</link>
    <description>
        &lt;b&gt;Gurshabad Grover and Torsha Sarkar, along with Rajashri Seal and Neil Trivedi, co-authored a paper examining the constitutionality of the government's prohibition on the broadcast of news by private FM and community radio stations.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;In the article, the authors also mapped chronologically the history of           the development of community and private radio channels in           India. As part of the legal analysis, the authors examined the           prohibition on the touchstones of existing Indian           jurisprudence on media freedom and speech rights. Finally, they also utilized some key points made by the Additional Solicitor           General in the Shreya Singhal case, to propose an alternative           regulatory framework that would address both the interests of           the radio channels and the government.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In 1995, the Supreme Court declared airwaves to be public property in the seminal case of The Secretary, Ministry of Information and Broadcasting v Cricket Association of Bengal, and created the stepping stones for liberalization of broadcasting media from government monopoly. Despite this, community radio and private FM channels, in their nearly two decades of existence, have been unable to broadcast their own news content because of the Government’s persisting prohibition on the same.In this paper, we document the historical developments surrounding the issue, and analyse the constitutional validity of this prohibition on the touchstone of the existing jurisprudence on free speech and media freedom. Additionally, we also propose an alternative regulatory framework which would assuage the government’s apprehensions regarding radicalisation through radio spaces, as well as ensure that the autonomy of these stations is not curtailed.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Click to download the full paper by NLUD Journal of           Legal Studies &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/ban-of-news-on-radio.pdf"&gt;here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/nlud-journal-of-legal-studies-september-27-2019-gurshabad-grover-torsha-sarkar-rajashri-seal-neil-trivedi-examining-the-constitutionality-of-ban-on-broadcast-of-news-by-private-fm-and-community-radio-stations'&gt;https://cis-india.org/internet-governance/blog/nlud-journal-of-legal-studies-september-27-2019-gurshabad-grover-torsha-sarkar-rajashri-seal-neil-trivedi-examining-the-constitutionality-of-ban-on-broadcast-of-news-by-private-fm-and-community-radio-stations&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Gurshabad Grover, Torsha Sarkar, Rajashri Seal and Neil Trivedi</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-09-27T16:36:46Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/development-informatics-paper-number-81-aayush-rathi-and-ambika-tandon-capturing-gender-and-class-inequities">
    <title>Capturing Gender and Class Inequities: The CCTVisation of Delhi</title>
    <link>https://cis-india.org/internet-governance/blog/development-informatics-paper-number-81-aayush-rathi-and-ambika-tandon-capturing-gender-and-class-inequities</link>
    <description>
        &lt;b&gt;Ambika Tandon and Aayush Rathi generated empirical evidence on the CCTV programme currently underway in Delhi. The case study was published by the Centre for Development Informatics, Global Development Institute, SEED, in the Development Informatics working paper series housed at the University of Manchester.&lt;/b&gt;
        &lt;h3 style="text-align: justify; "&gt;Abstract&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Cityscapes across the global South, following historical trends in the North, are increasingly being littered by closed-circuit television (CCTV) cameras. In this paper, we study the wholesale implementation of CCTV in New Delhi, a city notorious for incredibly high rates of crime against women. The push for CCTV, then, became one of many approaches explored by the state in making the city safer for women.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In this paper, we deconstruct this narrative of greater surveillance equating to greater safety by using empirical evidence to understand the subjective experience of surveilling and being surveilled. By focussing on gender and utilising work from feminist thought, we find that the experience of surveillance is intersectionally mediated along the axes of class and gender.The gaze of CCTV is cast upon those already marginalised to arrive at normative encumbrances placed by private, neoliberal interests on the urban public space. The politicisation of CCTV has happened in this context, and continues unabated in the absence of any concerted policy apparatus regulating it. We frame our findings utilising an analytical data justice framework put forth by Heeks and Shekhar (2019). This comprehensively sets out a social justice agenda that situates CCTV within the socio-political contexts that are intertwined in the development and implementation of the technology itself.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Click to download the &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/development-informatics"&gt;full research paper&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/development-informatics-paper-number-81-aayush-rathi-and-ambika-tandon-capturing-gender-and-class-inequities'&gt;https://cis-india.org/internet-governance/blog/development-informatics-paper-number-81-aayush-rathi-and-ambika-tandon-capturing-gender-and-class-inequities&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Aayush Rathi and Ambika Tandon</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2019-09-27T15:24:10Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/guardian-september-3-2019-turning-off-the-internet">
    <title>Turning off the internet: Chips with Everything podcast</title>
    <link>https://cis-india.org/internet-governance/news/guardian-september-3-2019-turning-off-the-internet</link>
    <description>
        &lt;b&gt;Gurshabad Grover and Ambika Tandon recorded an episode with the Guardian's podcast on digital culture, called Chips with Everything.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The episode     was on internet shutdowns in India, and can be found &lt;a class="external-link" href="https://www.theguardian.com/technology/audio/2019/sep/02/turning-off-the-internet-chips-with-everything-podcast"&gt;here&lt;/a&gt;. &lt;span&gt;Ambika spoke about a book CIS published in collaboration with 101 Reporters last year on personal narratives of experiencing shutdowns, which can be &lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/internet-shutdown-stories"&gt;found here&lt;/a&gt;. Gurshabad talked about the legal grounds through which shutdowns are imposed, possible routes of countering them, and the status of shutdowns in international law.&lt;/span&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/guardian-september-3-2019-turning-off-the-internet'&gt;https://cis-india.org/internet-governance/news/guardian-september-3-2019-turning-off-the-internet&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-09-26T02:09:18Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/participation-in-the-meeting-of-litd-17-at-bis">
    <title>Participation in the meeting of LITD 17 at BIS</title>
    <link>https://cis-india.org/internet-governance/news/participation-in-the-meeting-of-litd-17-at-bis</link>
    <description>
        &lt;b&gt;On September 25, 2019, Gurshabad Grover along with Elonnai Hickok and Karan Saini attended the meeting of the Information Systems Security &amp; Privacy Sectional Committee (LITD17) of the Bureau of Indian Standards (BIS).&lt;/b&gt;
        &lt;p&gt;Some agenda points:&lt;/p&gt;
&lt;div id="_mcePaste" style="text-align: justify; "&gt;
&lt;ul&gt;
&lt;li&gt;Elonnai, Karan and Gurshabad had submitted comments on two standards related to infomration security of biometrics systems: (i) ISO/IEC 24745: 2011 &lt;span&gt;Information Technology – Security techniques – Biometric information protection; (ii) Doc. No. LITD 17 (3595) ISO/IEC 19792: 2009 Information &lt;/span&gt;&lt;span&gt;Technology – Security techniques – Security evaluation of biometrics. Gurshabad Grover is now serving in a panel with BIS and MeitY representatives to discuss &lt;/span&gt;&lt;span&gt;how the standards compare to UIDAI's standards and governing regulations.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Gurshabad &lt;/span&gt;updated the committee with my plan of participation at the ISO/IEC JTC 1 SC 27 meetings (which were held earlier this month in Paris).&lt;/li&gt;
&lt;li&gt;Gurshabad will be joining a panel to discuss and further develop a draft mobile phone security standard for India.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/participation-in-the-meeting-of-litd-17-at-bis'&gt;https://cis-india.org/internet-governance/news/participation-in-the-meeting-of-litd-17-at-bis&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Information Technology</dc:subject>
    

   <dc:date>2019-11-02T06:30:29Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/policy-design-jam">
    <title>Policy Design Jam</title>
    <link>https://cis-india.org/internet-governance/news/policy-design-jam</link>
    <description>
        &lt;b&gt;Pallavi Bedi, Akash Sheshadri and Anubha Sinha attended the event organized by WhatsApp and ISPP on 16 September 2019 at the Indian School of Public Policy campus, Qutub Institutional Area, Delhi.&lt;/b&gt;
        &lt;h3&gt;Session Schedule&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;2:00 pm - 3:00 pm - Registration&lt;/li&gt;
&lt;li&gt;3:05 pm - 4:00 pm - Experiential design exercises&lt;/li&gt;
&lt;li&gt;4:00 pm - 4:15 pm - Break&lt;/li&gt;
&lt;li&gt;4:15 pm - 5:00 pm - Design Thinking for Policy Insights from Global Design Jams&lt;/li&gt;
&lt;li&gt;5:00 pm - 5:20 pm - Q &amp;amp; A&lt;/li&gt;
&lt;li&gt;5:20 pm - 6:00 pm - High tea&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/policy-design-jam'&gt;https://cis-india.org/internet-governance/news/policy-design-jam&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2019-09-25T14:30:33Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/cis-joins-the-christchurch-call-advisory-network">
    <title>CIS joins the Christchurch Call Advisory Network</title>
    <link>https://cis-india.org/internet-governance/news/cis-joins-the-christchurch-call-advisory-network</link>
    <description>
        &lt;b&gt;Centre for Internet &amp; Society's  application for membership of the Christchurch Call Advisory Network has been accepted! As a part of this network, we, along with other civil society groups based out of various jurisdictions, would be providing inputs on making the Call a robust, human rights-centred initiative. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The Christchurch Call Advisory Network membership has been drawn from interested civil society groups, who represent a range of perspectives, including human rights, freedom of expression, digital rights, counter-radicalization, victim support and public policy. Many of the Advisory Network members have been engaged on the Christchurch Call since its launch and are committed to continuing to share their expertise.&lt;/p&gt;
&lt;h3&gt;The Christchurch Call Advisory Network&lt;/h3&gt;
&lt;ul class="org-list"&gt;
&lt;li&gt;Access Now&lt;/li&gt;
&lt;li&gt;Africa Digital Policy Project&lt;/li&gt;
&lt;li&gt;Annenberg Public Policy Center, University of Pennsylvania&lt;/li&gt;
&lt;li&gt;Article 19&lt;/li&gt;
&lt;li&gt;Association for Progressive Communications&lt;/li&gt;
&lt;li&gt;Brookings Institution&lt;/li&gt;
&lt;li&gt;Center for Humane Technology&lt;/li&gt;
&lt;li&gt;Centre for Internet and Society, India&lt;/li&gt;
&lt;li&gt;Chicago Project on Security and Threats, University of Chicago&lt;/li&gt;
&lt;li&gt;Committee to Protect Journalists&lt;/li&gt;
&lt;li&gt;Council on American-Islamic Relations (CAIR)&lt;/li&gt;
&lt;li&gt;Dangerous Speech Project&lt;/li&gt;
&lt;li&gt;Data &amp;amp; Society&lt;/li&gt;
&lt;li&gt;Electronic Frontier Foundation&lt;/li&gt;
&lt;li&gt;French National Bar Council&lt;/li&gt;
&lt;li&gt;Global Disinformation Index&lt;/li&gt;
&lt;li&gt;Global Forum for Media Development (GFMD)&lt;/li&gt;
&lt;li&gt;Global Partners Digital&lt;/li&gt;
&lt;li&gt;Global Network Initiative&lt;/li&gt;
&lt;li&gt;Hedayah Center&lt;/li&gt;
&lt;li&gt;Human Rights Centre, UC, Berkeley School of Law&lt;/li&gt;
&lt;li&gt;ICT for Peace Foundation&lt;/li&gt;
&lt;li&gt;Institute for Strategic Dialogue&lt;/li&gt;
&lt;li&gt;International Cyber Policy Centre (Australian Strategic Policy Institute)&lt;/li&gt;
&lt;li&gt;Internet Governance Project, Georgia Tech&lt;/li&gt;
&lt;li&gt;Internet NZ&lt;/li&gt;
&lt;li&gt;Internet Sans Frontières&lt;/li&gt;
&lt;li&gt;Islamic Women's Council of New Zealand&lt;/li&gt;
&lt;li&gt;Life After Hate&lt;/li&gt;
&lt;li&gt;Netsafe&lt;/li&gt;
&lt;li&gt;New America's Open Technology Institute (New America Foundation)&lt;/li&gt;
&lt;li&gt;NZ Council for Civil Liberties&lt;/li&gt;
&lt;li&gt;Reporters Without Borders (RSF)&lt;/li&gt;
&lt;li&gt;Social Media Governance Initiative, Yale Law School&lt;/li&gt;
&lt;li&gt;Syrian Archive&lt;/li&gt;
&lt;li&gt;Tech Against Terrorism&lt;/li&gt;
&lt;li&gt;The International Muslim Association of New Zealand&lt;/li&gt;
&lt;li&gt;The Internet Society&lt;/li&gt;
&lt;li&gt;Tony Blair Institute for Global Change&lt;/li&gt;
&lt;li&gt;Wellington Abrahamic Council of Jews, Christians, and Muslims (NZ)&lt;/li&gt;
&lt;li&gt;WITNESS&lt;/li&gt;
&lt;li&gt;Women’s Organisation of the Waikato Muslim Association&lt;/li&gt;
&lt;li&gt;&lt;small&gt;Elina Noor (Visiting Fellow, Institute of Strategic and International Studies Malaysia)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;small&gt;Matthew Shears (Internet and telecommunications policy consultant)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/cis-joins-the-christchurch-call-advisory-network'&gt;https://cis-india.org/internet-governance/news/cis-joins-the-christchurch-call-advisory-network&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-09-25T13:57:49Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
