<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">
<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 11 to 25.</description>
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/appropriate-use-of-digital-identity-alliance-announcement"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/events/rankathon-on-digital-rights-delhi-jan-08-2017"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/social-media-monitoring"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/privacy-after-big-data-workshop-report"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/deep-packet-inspection-how-it-works-and-its-impact-on-privacy"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/events/discussion-on-ranking-digital-rights-in-india-delhi-jan-07-2017"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/new-media-personalisation-and-the-role-of-algorithms"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/epw-amber-sinha-may-18-2018-for-indias-data-protection-regime-to-be-efficient-policymakers-should-treat-privacy-as-a-social-good"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/comments-on-information-technology-security-of-prepaid-payment-instruments-rules-2017"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/the200b-200bfundamental200b-200bright200b-200bto200b-200bprivacy-200b-200bpart200b-200biii-scope"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/counter-comments-on-trais-consultation-paper-on-privacy-security-and-ownership-of-data-in-telecom-sector"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/asian-age-amber-sinha-december-3-2017-"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/aadhaar-bill-fails-to-incorporate-suggestions-by-the-standing-committee"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/economic-times-july-23-2017-amber-sinha-aadhar-privacy-is-not-a-unidimensional-concept"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/cis-statement-on-right-to-privacy-judgment"/>
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/blog/appropriate-use-of-digital-identity-alliance-announcement">
    <title>Announcement of a Three-Region Research Alliance on the Appropriate Use of Digital Identity</title>
    <link>https://cis-india.org/internet-governance/blog/appropriate-use-of-digital-identity-alliance-announcement</link>
    <description>
        &lt;b&gt;Omidyar Network has recently announced its decision to invest in the establishment of a three-region research alliance — to be co-led by the Institute for Technology &amp; Society (ITS), Brazil, the Centre for Intellectual Property and Information Technology Law (CIPIT), Kenya, and the CIS, India — on the Appropriate Use of Digital Identity. As part of this Alliance, we at the CIS will look at the policy objectives of digital identity projects, how technological policy choices can be thought through to meet the objectives, and how legitimate uses of a digital identity framework may be evaluated.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;As governments across the globe are implementing new, digital foundational identification systems or modernizing existing ID programs, there is a dire need for greater research and discussion about appropriate design choices for a digital identity framework. There is significant momentum on digital ID, especially after the adoption of UN Sustainable Development Goal 16.9, which calls for legal identity for all by 2030. Given the importance of this subject, its implications for the development agenda, as well as its impact on civil, social and economic rights, there is a need for more focused research that can enable policymakers to make better decisions, guide civil society in different jurisdictions to comment on and raise questions about digital identity schemes, and provide actionable material to the industry to create identity solutions that are privacy enhancing and inclusive.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4&gt;Excerpt from the &lt;a href="https://www.omidyar.com/blog/appropriate-use-digital-identity-why-we-invested-three-region-research%C2%A0alliance" target="_blank"&gt;blog post by Subhashish Bhadra&lt;/a&gt; announcing this new research alliance&lt;/h4&gt;
&lt;p&gt;...In the absence of any widely-accepted thinking on this issue, we run the risk of digital identity systems suffering from mission creep, that is, being made mandatory or being used for an ever-expanding set of services. We believe this creates several risks. First, people may be excluded from services if they do not have a digital identity or because it malfunctions. Second, this approach creates a wider digital footprint that can be used to create a profile of an individual, sometimes without consent. This can increase privacy risk. Third, this approach increases the power of institutions versus individuals and can be used as a rationale to intentionally deny services, especially to vulnerable or persecuted groups.&lt;/p&gt;
&lt;p&gt;Three exceptional research groups have undertaken the effort of answering this complex and important question. Over the next six months, these think tanks will conduct independent research, as well as involve experts from across the globe. Based in South America, Africa, and Asia, these institutions represent the collective wisdom and experiences of three very distinct geographies in emerging markets. While drawing on their local context, this research effort is globally oriented. The think tanks will create a set of recommendations and tools that can be used by stakeholders to engage with digital identity systems in any part of the world...&lt;/p&gt;
&lt;p&gt;This research will use a collaborative and iterative process. The researchers will put out some ideas every few weeks, with the objective of seeking thoughts, questions, and feedback from various stakeholders. They will participate in several digital rights and identity events across the globe over the next several months. They will also organize webinars to seek input from and present their interim findings to interested communities from across the globe. Each of these provides an opportunity for you to provide your thoughts and help this research program provide an independent, rigorous, transparent, and holistic answer to the question of when it’s appropriate for digital identity to be used. We need a diversity of viewpoints and collaborative dissent to help solve the most pressing issues of our times.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/appropriate-use-of-digital-identity-alliance-announcement'&gt;https://cis-india.org/internet-governance/blog/appropriate-use-of-digital-identity-alliance-announcement&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Digital ID</dc:subject>
        <dc:subject>Internet Governance</dc:subject>
        <dc:subject>Appropriate Use of Digital ID</dc:subject>
        <dc:subject>Featured</dc:subject>
        <dc:subject>Digital Identity</dc:subject>
        <dc:subject>Homepage</dc:subject>
    

   <dc:date>2019-05-13T09:06:23Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/events/rankathon-on-digital-rights-delhi-jan-08-2017">
    <title>Rankathon on Digital Rights (Delhi, January 08)</title>
    <link>https://cis-india.org/internet-governance/events/rankathon-on-digital-rights-delhi-jan-08-2017</link>
    <description>
        &lt;b&gt;Please join us on Sunday, January 08, at the CIS office in Hauz Khas, Delhi, for a rankathon to visualise and contribute to the findings of the Ranking Digital Rights study, and to critique the underlying methodology. The event will begin at 10:00 in the morning and participants can focus on one or more of three kinds of tasks: 1) visualising the CIS and Ranking Digital Rights data, 2) evaluating additional companies using the RDR methodology, and 3) evaluating the RDR methodology and its suitability for independent use.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4&gt;Download: &lt;a href="https://github.com/cis-india/website/raw/master/docs/CIS_RDRIndia-Rankathon_08012017_Invitation.pdf"&gt;Invitation&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;hr /&gt;
&lt;p&gt;The &lt;a href="https://rankingdigitalrights.org/"&gt;Ranking Digital Rights Corporate Responsibility Index&lt;/a&gt; is a project hosted by the Open Technology Institute at New America Foundation that aims to rank Information and Communications Technology (ICT) companies with respect to their Governance, Freedom of Expression, and Privacy practices. The inaugural Corporate Accountability Index, released in November 2015, evaluated 16 companies based on the project’s methodology, which included 31 indicators in total.&lt;/p&gt;
&lt;p&gt;Towards developing an understanding of how Indian ICT companies are recognising and upholding the digital rights of their users, and to raise public awareness about the same, the Centre for Internet and Society (CIS), with the support of &lt;a href="https://privacyinternational.org/"&gt;Privacy International&lt;/a&gt;, has studied 8 Indian ICT companies, using the same methodology as the 2015 Corporate Accountability Index, to gain greater insight into company practices and initiate public dialogues.&lt;/p&gt;
&lt;p&gt;Please join us on Sunday, January 08, at the CIS office in Hauz Khas, Delhi, for a rankathon to visualise and contribute to the findings of the Ranking Digital Rights study, and to critique the underlying methodology. The event will begin at 10:00 in the morning and participants can focus on one or more of three kinds of tasks:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;
&lt;p&gt;visualising the CIS and Ranking Digital Rights data,&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;evaluating additional companies using the RDR methodology, and&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;evaluating the RDR methodology and its suitability for independent use.&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;The event is open to all but the venue has limited space. The participants are requested to RSVP by sending an email to &lt;a href="mailto:nisha@cis-india.org?subject=RSVP: Rankathon on Digital Rights"&gt;nisha@cis-india.org&lt;/a&gt;. The final date for registering for the event is &lt;strong&gt;January 04&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;All visualisations and other outputs produced at the event will be published under open licenses. All participants are expected to bring their own laptop or any other items needed for their work. CIS will offer data, help with understanding how the Ranking Digital Rights methodology works, refreshments, and any other support as needed.&lt;/p&gt;
&lt;p&gt;We are also organising a discussion event on Saturday, January 07, at the India Islamic Cultural Centre, Delhi, to present our findings on digital rights practices of 8 Indian ICT companies, followed by an open structured discussion on the methodology of the Ranking Digital Rights study. Please find more details about this &lt;a href="http://cis-india.org/internet-governance/events/discussion-on-ranking-digital-rights-in-india-delhi-jan-07-2017"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We look forward to your participation and contribution to the discussion. Please support us by sharing this invitation with your colleagues and networks.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/events/rankathon-on-digital-rights-delhi-jan-08-2017'&gt;https://cis-india.org/internet-governance/events/rankathon-on-digital-rights-delhi-jan-08-2017&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
        <dc:subject>Privacy</dc:subject>
        <dc:subject>Internet Governance</dc:subject>
        <dc:subject>Digital Rights</dc:subject>
    

   <dc:date>2016-12-29T07:10:09Z</dc:date>
   <dc:type>Event</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/social-media-monitoring">
    <title>Social Media Monitoring</title>
    <link>https://cis-india.org/internet-governance/blog/social-media-monitoring</link>
    <description>
        &lt;b&gt;We see a trend of social media and communication monitoring and surveillance initiatives in India which have the potential to create a chilling effect on free speech online and raise questions about the privacy of individuals. In this paper, Amber Sinha looks at social media monitoring as a tool for surveillance, the current state of social media surveillance in India, and evaluates how the existing regulatory framework in India may deal with such practices in future.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4&gt;Social Media Monitoring: &lt;a href="http://cis-india.org/internet-governance/files/social-media-monitoring/at_download/file"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;hr /&gt;
&lt;h3&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;In 2014, the Government of India launched the much-lauded and popular citizen outreach website called MyGov.in. A press release by the government announced that it had roped in global consulting firm PwC to assist in the data mining exercise to process and filter key points emerging from debates on MyGov.in. While this was a welcome move, the release also mentioned that the government intended to monitor social media sites in order to gauge popular opinion. Further, earlier this year, the government set up the National Media Analytics Centre (NMAC) to monitor blogs, media channels, news outlets and social media platforms. The tracking software used by NMAC will generate tags to classify posts and comments on social media into negative, positive and neutral categories, paying special attention to “belligerent” comments, and will also look at past patterns of posts. A project called NETRA, which would intercept and analyse internet traffic using pre-defined filters, was reported in the media a few years ago. Alongside, we see other initiatives which intend to use social media data for predictive policing purposes, such as CCTNS and Social Media Labs.&lt;/p&gt;
&lt;p&gt;Thus, we see a trend of social media and communication monitoring and surveillance initiatives announced by the government which have the potential to create a chilling effect on free speech online and raise questions about the privacy of individuals. Various commentators have raised concerns about the legal validity of such programmes and whether they violate the fundamental rights to privacy and free expression, and the existing surveillance laws in India. The lack of legislation governing these programmes often translates into an absence of transparency and due procedure. Further, a lot of personal communication now exists in the public domain, which renders traditional principles governing the interception and monitoring of personal communications futile. In the last few years, the blogosphere and social media websites in India have also changed and become platforms for greater dissemination of political content, often accompanied by significant vitriol, ‘trolling’ and abuse. Thus, we see greater policing of public or semi-public spaces online. In this paper, we look at social media monitoring as a tool for surveillance, the current state of social media surveillance in India, and evaluate how the existing regulatory framework in India may deal with such practices in future.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/social-media-monitoring'&gt;https://cis-india.org/internet-governance/blog/social-media-monitoring&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Social Media</dc:subject>
        <dc:subject>Internet Governance</dc:subject>
        <dc:subject>Surveillance</dc:subject>
    

   <dc:date>2017-01-16T14:23:13Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/privacy-after-big-data-workshop-report">
    <title>Privacy after Big Data - Workshop Report</title>
    <link>https://cis-india.org/internet-governance/blog/privacy-after-big-data-workshop-report</link>
    <description>
        &lt;b&gt;The Centre for Internet and Society (CIS) and the Sarai programme, CSDS, organised a workshop on 'Privacy after Big Data: What Changes? What should Change?' on Saturday, November 12, 2016 at Centre for the Study of Developing Societies in New Delhi. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;This workshop aimed to build a dialogue around some of the key government-led big data initiatives in India and elsewhere that are contributing significant new challenges and concerns to the ongoing debates on the right to privacy. It was an open event.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In this age of big data, discussions about privacy are intertwined with the use of technology and the data deluge. Though big data possesses enormous value for driving innovation and contributing to productivity and efficiency, privacy concerns have gained significance in the dialogue around the regulated use of data and the means by which individual privacy might be compromised, through means such as surveillance, or protected. The tremendous opportunities big data creates span varied sectors, from financial technology, governance, education, health and welfare schemes to smart cities, to name a few. With the UID project re-animating the Right to Privacy debate in India, and the financial technology ecosystem growing rapidly, striking a balance between the benefits of big data and privacy concerns is a critical policy question that demands public dialogue and research to inform an evidence-based decision. Also, with the advent of potential big data initiatives like the ambitious Smart Cities Mission under the Digital India Scheme, which would rely on harvesting large data sets and the use of analytics in city subsystems to make public utilities and services efficient, the tasks of ensuring data security on one hand and protecting individual privacy on the other become harder.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This workshop sought to discuss some of the emerging problems due to the advent of big data and possible ways to address these problems. The workshop began with Amber Sinha of CIS and Sandeep Mertia of Sarai introducing the topic of big data and its implications for privacy. Both speakers sought to define big data, briefly traced the evolution of the term, and raised questions about how we understand it. Dr. Usha Ramanathan spoke on the right to privacy in the context of the ongoing Aadhaar case and Vipul Kharbanda introduced the concept of Habeas Data as a possible solution to the privacy problems posed by big data. Amelia Andersdotter discussed national centralised digital ID systems and their evolution in Europe, often operating at a cross-functional scale, and highlighted their implications for discussions on data protection, welfare governance, and exclusion from public and private services. Srikanth Lakshmanan spoke of the issues with technology and privacy, and possible technological solutions. Dr. Anupam Saraph discussed the rise of digital banking and Aadhaar-based payments and their potential use for corrupt practices. Astha Kapoor of Microsave spoke about her experience of implementing digital money solutions in rural India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Post lunch, Dr. Anja Kovacs and Mathew Rice spoke on the rise of mass communication surveillance across the world, and the evolving challenges of regulating surveillance by government agencies. Mathew also spoke of privacy movements by citizens and civil society in various regions. In the final speaking session, Apar Gupta and Kritika Bhardwaj traced the history of jurisprudence on the right to privacy and the existing regulations and procedures. In the final session, the participants discussed various possible solutions to privacy threats from big data and identity projects, including better regulation, new approaches such as harms-based regulation and privacy risk assessments, and conceiving privacy as a horizontal right. The workshop ended with a vote of thanks from the organizers.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The agenda for the event can be accessed &lt;a href="https://github.com/cis-india/website/raw/master/docs/CIS-Sarai_PrivacyAfterBigData_ConceptAgenda.pdf"&gt;here&lt;/a&gt;, and the transcript is available &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/privacy-after-big-data/"&gt;here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/privacy-after-big-data-workshop-report'&gt;https://cis-india.org/internet-governance/blog/privacy-after-big-data-workshop-report&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-01-27T01:09:17Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/deep-packet-inspection-how-it-works-and-its-impact-on-privacy">
    <title>Deep Packet Inspection: How it Works and its Impact on Privacy</title>
    <link>https://cis-india.org/internet-governance/blog/deep-packet-inspection-how-it-works-and-its-impact-on-privacy</link>
    <description>
        &lt;b&gt;In the last few years, there has been extensive debate and discussion around network neutrality in India. The online campaign in favor of Network Neutrality was led by Savetheinternet.in in India. The campaign was a spectacular success and facilitated the sending of over a million emails supporting the cause of network neutrality, eventually leading to a ban on differential pricing. Following in the footsteps of the Shreya Singhal judgement, the fact that the issue of net neutrality has managed to attract wide public attention is an encouraging sign for a free and open Internet in India. Since the debate has focused largely on zero rating, other kinds of network practices impacting network neutrality, and their impact on other values, have yet to be comprehensively explored in the Indian context. In this article, the author focuses on network management in general, and deep packet inspection in particular, and how it impacts the privacy of users.&lt;/b&gt;
        &lt;h3 style="text-align: justify; "&gt;&lt;a name="_ek69t4linon1"&gt;&lt;/a&gt; Background&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;In the last few years, there has been extensive debate and discussion around network neutrality in India. The online campaign in favor of Network Neutrality was led by Savetheinternet.in in India. The campaign, captured in detail by an article in Mint,&lt;a href="#_ftn1" name="_ftnref1"&gt;&lt;sup&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; was a spectacular success and facilitated the sending of over a million emails supporting the cause of network neutrality, eventually leading to a ban on differential pricing. Following in the footsteps of the Shreya Singhal judgement, the fact that the issue of net neutrality has managed to attract wide public attention is an encouraging sign for a free and open Internet in India. Since the debate has focused largely on zero rating, other kinds of network practices impacting network neutrality, and their impact on other values, have yet to be comprehensively explored in the Indian context. In this article, I focus on network management in general, and deep packet inspection in particular, and how it impacts the privacy of users.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;a name="_ft3wpj7p1jf1"&gt;&lt;/a&gt; The Architecture of the Internet&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The Internet exists as a network acting as an intermediary between providers of content and its users.&lt;a href="#_ftn2" name="_ftnref2"&gt;&lt;sup&gt;&lt;sup&gt;[2]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Traditionally, the network did not distinguish between those who provided content and those who were recipients of this service; in fact, often the users also functioned as content providers. The architectural design of the Internet mandated that all content be broken down into data packets which were transmitted through nodes in the network transparently from the source machine to the destination machine.&lt;a href="#_ftn3" name="_ftnref3"&gt;&lt;sup&gt;&lt;sup&gt;[3]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; As discussed in detail later, as per the OSI model, the network consists of 7 layers. We will go into each of these layers in detail below; however, it is important to understand that at the base is the physical layer of cables and wires, while at the top is the application layer, which contains all the functions that people want to perform on the Internet and the content associated with it. The layers in the middle can be characterised as the protocol layers for the purpose of this discussion. What makes the architecture of the Internet remarkable is that these layers are completely independent of each other, and in most cases, indifferent to the other layers. The protocol layer is what impacts net neutrality. It is this layer which provides the standards for the manner in which data must flow through the network. The idea was for it to be as simple and feature-free as possible, such that it is only concerned with transmitting data as fast as possible ('best efforts principle') while innovations are pushed to the layers above or below it.&lt;a href="#_ftn4" name="_ftnref4"&gt;&lt;sup&gt;&lt;sup&gt;[4]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This aspect of the Internet's architectural design, which mandates that network features are implemented at the end points only (destination and source machine), i.e. at the application level, is called the 'end to end principle'.&lt;a href="#_ftn5" name="_ftnref5"&gt;&lt;sup&gt;&lt;sup&gt;[5]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; This means that the intermediate nodes do not differentiate between the data packets in any way based on source, application or any other feature and are only concerned with transmitting data as fast as possible, thus creating what has been described as a 'dumb' or neutral network.&lt;a href="#_ftn6" name="_ftnref6"&gt;&lt;sup&gt;&lt;sup&gt;[6]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; This feature of the Internet architecture was also considered essential to what Jonathan Zittrain has termed the 'generative' model of the Internet.&lt;a href="#_ftn7" name="_ftnref7"&gt;&lt;sup&gt;&lt;sup&gt;[7]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Since the Internet Protocol remains a simple layer incapable of discrimination of any form, no additional criteria could be established for what kind of application would access the Internet. Thus, the network remained truly open and ensured that the Internet does not privilege or become the preserve of a class of applications, nor does it differentiate between the different kinds of technologies that comprise the physical layer below.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While the above model speaks of a dumb network not differentiating between the data packets that travel through it, in truth, network operators engage in various kinds of practices that prioritize, throttle or discount certain kinds of data packets. In her thesis essay at the Oxford Internet Institute, Alissa Cooper&lt;a href="#_ftn8" name="_ftnref8"&gt;&lt;sup&gt;&lt;sup&gt;[8]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; states that traffic management involves three different sets of criteria: a) the subset of traffic to be managed, identified by criteria based on source, destination, application or users; b) the trigger for the traffic management measure, which could be based upon time of day, a usage threshold or a specific network condition; and c) the traffic treatment put into practice when the trigger is met. The traffic treatment can be of three kinds. The first is Blocking, in which traffic is prevented from being delivered. The second is Prioritization, under which identified traffic is sent sooner or later; this is usually done in cases of congestion, when one kind of traffic needs to be prioritized. The third kind of treatment is Rate limiting, where identified traffic is limited to a defined sending rate.&lt;a href="#_ftn9" name="_ftnref9"&gt;&lt;sup&gt;&lt;sup&gt;[9]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; The dumb network does not interfere with an application's operation, nor is it sensitive to the needs of an application, and in this way it treats all information sent over it as equal. In such a network, the content of the packets is not examined, and Internet providers act according to the destination of the data as opposed to any other factor. However, in order to perform traffic management in various circumstances, Deep Packet Inspection technology, which does look at the content of data packets, is commonly used by service providers.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;a name="_r7ojhgh467u5"&gt;&lt;/a&gt; Deep Packet Inspection&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Deep packet inspection (DPI) enables the examination of the content of data packets being sent over the Internet. Christopher Parsons explains the header and the payload of a data packet with respect to the OSI model. In order to understand this better, it is more useful to speak of the network in terms of the seven layers in the OSI model as opposed to the three layers discussed above.&lt;a href="#_ftn10" name="_ftnref10"&gt;&lt;sup&gt;&lt;sup&gt;[10]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Under the OSI model, the top layer, the Application Layer, is in contact with the software making a data request. For instance, if the activity in question is accessing a webpage, the web-browser makes a request to access a page which is then passed on to the lower layers. The next layer is the Presentation Layer, which deals with the format in which the data is presented. This layer performs encryption and compression of the data. In the above example, this would involve asking for the HTML file. Next comes the Session Layer, which initiates, manages and ends communication between the sender and receiver. In the above example, this would involve transmitting and regulating the data of the webpage including its text, images or any other media. These three layers are part of the 'payload' of the data packet.&lt;a href="#_ftn11" name="_ftnref11"&gt;&lt;sup&gt;&lt;sup&gt;[11]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The next four layers are part of the 'header' of the data packet. It begins with the Transport Layer which collects data from the Payload and creates a 	connection between the point of origin and the point of receipt, and assembles the packets in the correct order. In terms of accessing a webpage, this 	involves connecting the requesting computer system with the server hosting the data, and ensuring the data packets are put together in an arrangement which 	is cohesive when they are received. The next layer is the Data Link Layer. This layer formats the data packets in such a way that that they are compatible 	with the medium being used for their transmission. The final layer is the Physical Layer which determines the actual media used for transmitting the 	packets.&lt;a href="#_ftn12" name="_ftnref12"&gt;&lt;sup&gt;&lt;sup&gt;[12]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The transmission of the data packet occurs between the client and server, and packet inspect occurs through some equipment placed between the client and 	the server. There are various ways in which packet inspection has been classified and the level of depth that the inspection needs to qualify in order to 	be categorized as Deep Packet Inspection. We rely on Parson's classification system in this article. According to him, there are three broad categories of 	packet inspection - shallow, medium and deep.&lt;a href="#_ftn13" name="_ftnref13"&gt;&lt;sup&gt;&lt;sup&gt;[13]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Shallow packet inspection involves the inspection of the only the header, and usually checking it against a blacklist. The focus in this form of inspection 	is on the source and destination (IP address and packet;s port number). This form of inspection primarily deals with the Data Link Layer and Network Layer 	information of the packet. Shallow Packet Inspection is used by firewalls.&lt;a href="#_ftn14" name="_ftnref14"&gt;&lt;sup&gt;&lt;sup&gt;[14]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Medium Packet Inspection involves equipment existing between computers running the applications and the ISP or Internet gateways. They use application 	proxies where the header information is inspected against their loaded parse-list and used to look at a specific flows. These kinds of inspections 	technologies are used to look for specific kinds of traffic flows and take pre-defined actions upon identifying it. In this case, the header and a small 	part of the payload is also being examined.&lt;a href="#_ftn15" name="_ftnref15"&gt;&lt;sup&gt;&lt;sup&gt;[15]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Finally, Deep Packet Inspection (DPI) enables networks to examine the origin, destination as well the content of data packets (header and payload). These 	technologies look for protocol non-compliance, spam, harmful code or any specific kinds of data that the network wants to monitor. The feature of the DPI 	technology that makes it an important subject of study is the different uses it can be put to. The use cases vary from real time analysis of the packets to 	interception, storage and analysis of contents of a packets.&lt;a href="#_ftn16" name="_ftnref16"&gt;&lt;sup&gt;&lt;sup&gt;[16]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;a name="_pi28w1745j15"&gt;&lt;/a&gt; The different purposes of DPI&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Network Management and QoS&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The primary justification for DPI presented is network management, and as a means to guarantee and ensure a certain minimum level of QoS (Quality of 	Service). Quality of Service (QoS) as a value conflicting with the objectives of Network Neutrality, has emerged as a significant discussion point in this 	topic. Much like network neutrality, QoS is also a term thrown around in vague, general and non-definitive references. The factors that come into play in 	QoS are network imposed delay, jitter, bandwidth and reliability. Delay, as the name suggests, is the time taken for a packet to be passed by the sender to the receiver. Higher levels of delay are characterized by more data packets held 'in transit' in the network.	&lt;a href="#_ftn17" name="_ftnref17"&gt;&lt;sup&gt;&lt;sup&gt;[17]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; A paper by Paul Ferguson and Geoff Huston described the TCP as a 'self clocking' 	protocol.&lt;a href="#_ftn18" name="_ftnref18"&gt;&lt;sup&gt;&lt;sup&gt;[18]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; This enables the transmission rate of the sender to be adjusted as per 	the rate of reception by the receiver. As the delay and consequent stress on the protocol increases, this feedback ability begins to lose its sensitivity. 	This becomes most problematic in cases of VoIP and video applications. The idea of QoS generally entails consistent service quality with low delay, low 	jitter and high reliability through a system of preferential treatment provided to some traffic on a criteria formulated around the need of such traffic to 	have greater latency sensitivity and low delay and jitter. This is where Deep Packet Inspection comes into play. In 1991, Cisco pioneered the use of a new 	kind of router that could inspect data packets flowing through the network. DPI is able to look inside the packets and its content, enabling it to classify 	packets according to a formulated policy. 
DPI, which began as a security tool, is powerful because it allows ISPs to limit or block specific applications, or to improve the performance of applications in telephony, streaming and real-time gaming. Very few scholars take an all-or-nothing approach to network neutrality and QoS, and the debate often comes down to which forms of differentiation are reasonable for service providers to practice.&lt;a href="#_ftn19" name="_ftnref19"&gt;&lt;sup&gt;&lt;sup&gt;[19]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
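&lt;p style="text-align: justify; "&gt;The delay and jitter metrics discussed above can be illustrated with a toy calculation, treating delay as the mean transit time and jitter as the average variation between consecutive per-packet delays. The delay figures are made up for the illustration.&lt;/p&gt;

```python
# Toy illustration of two QoS metrics: mean one-way delay, and jitter
# measured as the average absolute difference between consecutive delays.
delays_ms = [40, 42, 55, 41, 90]  # invented per-packet one-way delays (ms)

mean_delay = sum(delays_ms) / len(delays_ms)
jitter = sum(abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])) / (len(delays_ms) - 1)

print(round(mean_delay, 1))  # 53.6
print(round(jitter, 1))      # 19.5
```

A latency-sensitive application such as VoIP suffers more from high jitter (the 90 ms outlier above) than from a uniformly higher but stable delay, which is why QoS schemes single out such traffic for preferential treatment.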
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Security&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Deep Packet inspection was initially intended as a measure to manage the network and protect it from transmitting malicious programs . As mentioned above, Shallow Packet Inspection was used to secure LANs and keep out certain kinds of unwanted traffic.	&lt;a href="#_ftn20" name="_ftnref20"&gt;&lt;sup&gt;&lt;sup&gt;[20]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Similarly, DPI is used for identical purposes, where it is felt useful to 	enhance security and complete a 'deeper' inspection that also examines the payload along with the header information.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Surveillance&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The third purpose of DPI is what concerns privacy theorists the most. The fact that DPI technologies enable the network operators to have access to the actual content of the data packets puts them a position of great power as well as making them susceptible to significant pressure from the state.	&lt;a href="#_ftn21" name="_ftnref21"&gt;&lt;sup&gt;&lt;sup&gt;[21]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; For instance, in US, the ISPs are required to conform to the provisions of the 	Communications Assistance for Law Enforcement Act (CALEA) which means they need to have some surveillance capacities designed into their systems. What is 	more disturbing for privacy theorists compared to the use of DPI for surveillance under legislation like CALEA, are the other alleged uses by organisation 	like the National Security Agency through back end access to the information via the ISPs. Aside from the US government, there have been various reports of use of DPI by governments in countries like China,&lt;a href="#_ftn22" name="_ftnref22"&gt;&lt;sup&gt;&lt;sup&gt;[22]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Malaysia&lt;a href="#_ftn23" name="_ftnref23"&gt;&lt;sup&gt;&lt;sup&gt;[23]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and Singapore.	&lt;a href="#_ftn24" name="_ftnref24"&gt;&lt;sup&gt;&lt;sup&gt;[24]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Behavioral targeting&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;DPI also enables very granular tracking of the online activities of Internet users. This information is invaluable for the purposes of behavioral targeting 	of content and advertising. Traditionally, this has been done through cookies and other tracking software. DPI allows new way to do this, so far exercised 	only through web-based tools to ISPs and their advertising partners. DPI will enable the ISPs to monitor contents of data packets and use this to create profiles of users which can later be employed for purposes such as targeted advertising.	&lt;a href="#_ftn25" name="_ftnref25"&gt;&lt;sup&gt;&lt;sup&gt;[25]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;a name="_gn60r7ifwcge"&gt;&lt;/a&gt; Impact on Privacy&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Each of the above use-cases has significant implications for the privacy of Internet users as the technology in question involves access, tracking or 	retention of their online communication and usage activity.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Alyssa Cooper compares DPI with other technologies carrying out content inspection such as caching services and individual users employing firewalls or packet sniffers. She argues that one of the most distinguishing feature of DPI is the potential for "mission-creep."	&lt;a href="#_ftn26" name="_ftnref26"&gt;&lt;sup&gt;&lt;sup&gt;[26]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Kevin Werbach writes that while networks may deploy DPI for implementation under 	CALEA or traffic peer-to-peer shaping, once deployed DPI techniques can be used for completely different purposes such as pattern matching of intercepted 	content and storage of raw data or conclusions drawn from the data.&lt;a href="#_ftn27" name="_ftnref27"&gt;&lt;sup&gt;&lt;sup&gt;[27]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; This scope of 	mission creep is even more problematic as it is completely invisible. As opposed to other technologies which rely on cookies or other web-based services, 	the inspection occurs not at the end points, but somewhere in the middle of the network, often without leaving any traces on the user's system, thus 	rendering them virtually undiscoverable.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Much like other forms of surveillance, DPI threatens the sense that the web is a space where people can engage freely with a wide range of people and 	services. For such a space to continue to exist, it is important for people to feel secure about their communication and transaction on medium. This notion 	of trust is severely harmed by a sense that users are being surveilled and their communication intercepted. This has obvious chilling effect on free speech 	and could also impact electronic commerce.&lt;a href="#_ftn28" name="_ftnref28"&gt;&lt;sup&gt;&lt;sup&gt;[28]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Allyssa Cooper also points out another way in which DPI differs from other content tracking technologies. As the DPI is deployed by the ISPs, it creates a 	greater barrier to opting out and choosing another service. There are only limited options available to individuals as far as ISPs are concerned. 	Christopher Parsons does a review of ISPs using DPI technology in UK, US and Canada and offers that various ISPs do provide in their terms of services that 	they use DPI for network management purposes. However, this information is often not as easily accessible as the terms and conditions of online services. 	A;so, As opposed to online services, where it is relatively easier to migrate to another service, due to both presence of more options and the ease of 	migration, it is a much longer and more difficult process to change one's ISP.&lt;a href="#_ftn29" name="_ftnref29"&gt;&lt;sup&gt;&lt;sup&gt;[29]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;a name="_n5w8euzb4xhb"&gt;&lt;/a&gt; Measures to mitigate risk&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Currently, there are no existing regulatory frameworks in India which deal govern DPI technology in any way. The International Telecommunications Union 	(ITU) prescribes a standard for DPI&lt;a href="#_ftn30" name="_ftnref30"&gt;&lt;sup&gt;&lt;sup&gt;[30]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; however, the standard does not engage with 	any questions of privacy and requires all DPI technologies to be capable of identifying payload data, and prescribing classification rules for specific 	applications, thus, conflicting with notions of application agnosticism in network management. More importantly, the requirements to identify, decrypt and 	analyse tunneled and encrypted data threaten the reasonable expectation of privacy when sending and receiving encrypted communication. In this final 	section, I look at some possible principles and practices that may be evolved in order to mitigate privacy risks caused due to DPI technology.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Limiting 'depth' and breadth&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It has been argued that inherently what DPI technology intends to do is matching of patterns in the inspected content against a pre-defined list which is 	relevant to the purpose how which DPI is employed. Much like data minimization principles applicable to data controllers and data processors, it is 	possible for network operators to minimize the depth of the inspection (restrict it to header information only or limited payload information) so as to 	serve the purpose at hand. For instance, in cases where the ISP is looking to identify peer-to-peer traffic, there are protocols which declare their names 	in the application header itself. Similarly, a network operators looking to generate usage data about email traffic can do so simply by looking at port 	number and checking them against common email ports.&lt;a href="#_ftn31" name="_ftnref31"&gt;&lt;sup&gt;&lt;sup&gt;[31]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; However, this mitigation 	strategy may not work well for other use-cases such as blocking malicious software or prohibited content or monitoring for the sake of behavioral 	advertising.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While depth referred to the degree of inspection within data packets, breadth refers to the volume of packets being inspected. Alyssa Cooper argues that 	for many DPI use cases, it may be possible to rely on pattern matching on only the first few data packets in a flow, in order to arrive at sufficient data 	to take appropriate response. Cooper uses the same example about peer-to-peer traffic. In some cases, the protocol name may appear on the header file of 	only the first packet of a flow between two peers. In such circumstances, the network operators need not look beyond the header files of the first packet 	in a flow, and can apply the network management rule to the entire flow.&lt;a href="#_ftn32" name="_ftnref32"&gt;&lt;sup&gt;&lt;sup&gt;[32]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Data retention&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Aside from the depth and breadth of inspection, another important question whether and for along is there a need for data retention. All use cases may not 	require any kind of data retention and even in case where DPI is used for behavioral advertising, only the conclusions drawn may be retained instead of 	retaining the payload data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Transparency&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One of the issues is that DPI technology is developed and deployed outside the purview of standard organizations like ISO. Hence, there has been a lack of 	open, transparent standards development process in which participants have deliberated the impact of the technology. It is important for DPI to undergo 	these process which are inclusive, in that there is participation by non-engineering stakeholders to highlight the public policy issues such as privacy. Further, aside from the technology, the practices by networks need to be more transparent.	&lt;a href="#_ftn33" name="_ftnref33"&gt;&lt;sup&gt;&lt;sup&gt;[33]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Disclosure of the presence of DPI, the level of detail being inspected or retained and the purpose for deployment of DPI can be done. Some ISPs provide some of these details in their terms of service and website notices.	&lt;a href="#_ftn34" name="_ftnref34"&gt;&lt;sup&gt;&lt;sup&gt;[34]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; However, as opposed to web-based services, users have limited interaction with 	their ISP. It would be useful for ISPs to enable greater engagement with their users and make their practices more transparent.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Conclusion&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The very nature of of the DPI technology renders some aspects of recognized privacy principles like notice and consent obsolete. The current privacy frameworks under FIPP&lt;a href="#_ftn35" name="_ftnref35"&gt;&lt;sup&gt;&lt;sup&gt;[35]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and OECD	&lt;a href="#_ftn36" name="_ftnref36"&gt;&lt;sup&gt;&lt;sup&gt;[36]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; rely on the idea of empowering the individual by providing them with knowledge 	and this knowledge enables them to make informed choices. However, for this liberal conception of privacy to function meaningfully, it is necessary that 	there are real and genuine choices presented to the alternatives. While some principles like data minimisation, necessity and proportionality and purpose 	limitation can be instrumental in ensuring that DPI technology is used only for legitimate purposes, however, without effective opt-out mechanisms and 	limited capacity of individual to assess the risks, the efficacy of privacy principles may be far from satisfactory.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The ongoing Aadhaar case and a host of surveillance projects like CMS, NATGRID, NETRA&lt;a href="#_ftn37" name="_ftnref37"&gt;&lt;sup&gt;&lt;sup&gt;[37]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and NMAC	&lt;a href="#_ftn38" name="_ftnref38"&gt;&lt;sup&gt;&lt;sup&gt;[38]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; have raised concerns about the state conducting mass-surveillance, particularly 	of online content. In this regard, it is all the more important to recognise the potential of Deep Packet Inspection technologies for impact on privacy 	rights of individuals. Earlier, the Centre for Internet and Society had filed Right to Information applications with the Department of Telecommunications, Government of India regarding the use of DPI, and the government had responded that there was no direction/reference to the ISPs to employ DPI technology.	&lt;a href="#_ftn39" name="_ftnref39"&gt;&lt;sup&gt;&lt;sup&gt;[39]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Similarly, MTNL also responded to the RTI Applications and denied using the 	technology.&lt;a href="#_ftn40" name="_ftnref40"&gt;&lt;sup&gt;&lt;sup&gt;[40]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; It is notable though, that they did not respond to the questions 	about the traffic management policies they follow. Thus, so far there has been little clarity on actual usage of DPI technology by the ISPs.&lt;/p&gt;
&lt;div style="text-align: justify; "&gt;
&lt;hr /&gt;
&lt;div id="ftn1"&gt;
&lt;p&gt;&lt;a href="#_ftnref1" name="_ftn1"&gt;&lt;sup&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Ashish Mishra, "India's Net Neutrality Crusaders", available at 			&lt;a href="http://mintonsunday.livemint.com/news/indias-net-neutrality-crusaders/2.3.2289565628.html"&gt; http://mintonsunday.livemint.com/news/indias-net-neutrality-crusaders/2.3.2289565628.html &lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn2"&gt;
&lt;p&gt;&lt;a href="#_ftnref2" name="_ftn2"&gt;&lt;sup&gt;&lt;sup&gt;[2]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://www.livinginternet.com/i/iw_arch.htm"&gt;http://www.livinginternet.com/i/iw_arch.htm&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn3"&gt;
&lt;p&gt;&lt;a href="#_ftnref3" name="_ftn3"&gt;&lt;sup&gt;&lt;sup&gt;[3]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Vinton Cerf and Robert Kahn, "A protocol for packet network intercommunication", available at 			&lt;a href="https://www.semanticscholar.org/paper/A-protocol-for-packet-network-intercommunication-Cerf-Kahn/7b2fdcdfeb5ad8a4adf688eb02ce18b2c38fed7a"&gt; https://www.semanticscholar.org/paper/A-protocol-for-packet-network-intercommunication-Cerf-Kahn/7b2fdcdfeb5ad8a4adf688eb02ce18b2c38fed7a &lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn4"&gt;
&lt;p&gt;&lt;a href="#_ftnref4" name="_ftn4"&gt;&lt;sup&gt;&lt;sup&gt;[4]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Paul Ganley and Ben Algove, "Network Neutrality-A User's Guide", available at			&lt;a href="http://wiki.commres.org/pds/NetworkNeutrality/NetNeutrality.pdf"&gt;http://wiki.commres.org/pds/NetworkNeutrality/NetNeutrality.pdf&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn5"&gt;
&lt;p&gt;&lt;a href="#_ftnref5" name="_ftn5"&gt;&lt;sup&gt;&lt;sup&gt;[5]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; J H Saltzer, D D Clark and D P Reed, "End-to-End arguments in System Design", available at			&lt;a href="http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf"&gt;http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn6"&gt;
&lt;p&gt;&lt;a href="#_ftnref6" name="_ftn6"&gt;&lt;sup&gt;&lt;sup&gt;[6]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Supra&lt;/i&gt; Note 4.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn7"&gt;
&lt;p&gt;&lt;a href="#_ftnref7" name="_ftn7"&gt;&lt;sup&gt;&lt;sup&gt;[7]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Jonathan Zittrain, The future of Internet - and how to stop it, (Yale University Press and Penguin UK, 2008) available at 			&lt;a href="https://dash.harvard.edu/bitstream/handle/1/4455262/Zittrain_Future%20of%20the%20Internet.pdf?sequence=1"&gt; https://dash.harvard.edu/bitstream/handle/1/4455262/Zittrain_Future%20of%20the%20Internet.pdf?sequence=1 &lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn8"&gt;
&lt;p&gt;&lt;a href="#_ftnref8" name="_ftn8"&gt;&lt;sup&gt;&lt;sup&gt;[8]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Alissa Cooper, How Regulation and Competition Influence Discrimination in Broadband Traffic Management: A Comparative Study of Net Neutrality in 			the United States and the United Kingdom available at 			&lt;a href="http://ora.ox.ac.uk/objects/uuid:757d85af-ec4d-4d8a-86ab-4dec86dab568"&gt; http://ora.ox.ac.uk/objects/uuid:757d85af-ec4d-4d8a-86ab-4dec86dab568 &lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn9"&gt;
&lt;p&gt;&lt;a href="#_ftnref9" name="_ftn9"&gt;&lt;sup&gt;&lt;sup&gt;[9]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Id&lt;/i&gt; .&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn10"&gt;
&lt;p&gt;&lt;a href="#_ftnref10" name="_ftn10"&gt;&lt;sup&gt;&lt;sup&gt;[10]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Christopher Parsons, "The Politics of Deep Packet Inspection: What Drives Surveillance by Internet Service Providers?", available at 			&lt;a href="https://www.christopher-parsons.com/the-politics-of-deep-packet-inspection-what-drives-surveillance-by-internet-service-providers/"&gt; https://www.christopher-parsons.com/the-politics-of-deep-packet-inspection-what-drives-surveillance-by-internet-service-providers/ &lt;/a&gt; at 15.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn11"&gt;
&lt;p&gt;&lt;a href="#_ftnref11" name="_ftn11"&gt;&lt;sup&gt;&lt;sup&gt;[11]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Ibid&lt;/i&gt; at 16.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn12"&gt;
&lt;p&gt;&lt;a href="#_ftnref12" name="_ftn12"&gt;&lt;sup&gt;&lt;sup&gt;[12]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Id&lt;/i&gt; .&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn13"&gt;
&lt;p&gt;&lt;a href="#_ftnref13" name="_ftn13"&gt;&lt;sup&gt;&lt;sup&gt;[13]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Ibid&lt;/i&gt; at 19.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn14"&gt;
&lt;p&gt;&lt;a href="#_ftnref14" name="_ftn14"&gt;&lt;sup&gt;&lt;sup&gt;[14]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Id&lt;/i&gt; .&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn15"&gt;
&lt;p&gt;&lt;a href="#_ftnref15" name="_ftn15"&gt;&lt;sup&gt;&lt;sup&gt;[15]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Id&lt;/i&gt; .&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn16"&gt;
&lt;p&gt;&lt;a href="#_ftnref16" name="_ftn16"&gt;&lt;sup&gt;&lt;sup&gt;[16]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Jay Klein, "Digging Deeper Into Deep Packet Inspection (DPI)", available at			&lt;a href="http://spi.unob.cz/papers/2007/2007-06.pdf"&gt;http://spi.unob.cz/papers/2007/2007-06.pdf&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn17"&gt;
&lt;p&gt;&lt;a href="#_ftnref17" name="_ftn17"&gt;&lt;sup&gt;&lt;sup&gt;[17]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Tim Wu, "Network Neutrality: Broadband Discrimination", available at			&lt;a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=388863"&gt;http://papers.ssrn.com/sol3/papers.cfm?abstract_id=388863&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn18"&gt;
&lt;p&gt;&lt;a href="#_ftnref18" name="_ftn18"&gt;&lt;sup&gt;&lt;sup&gt;[18]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Paul Ferguson and Geoff Huston, "Quality of Service on the Internet: Fact, Fiction,&lt;/p&gt;
&lt;p&gt;or Compromise?", available at &lt;a href="http://www.potaroo.net/papers/1998-6-qos/qos.pdf"&gt;http://www.potaroo.net/papers/1998-6-qos/qos.pdf&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn19"&gt;
&lt;p&gt;&lt;a href="#_ftnref19" name="_ftn19"&gt;&lt;sup&gt;&lt;sup&gt;[19]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Barbara van Schewick, "Network Neutrality and Quality of Service: What a non-discrimination Rule should look like", available at 			&lt;a href="http://cyberlaw.stanford.edu/downloads/20120611-NetworkNeutrality.pdf"&gt; http://cyberlaw.stanford.edu/downloads/20120611-NetworkNeutrality.pdf &lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn20"&gt;
&lt;p&gt;&lt;a href="#_ftnref20" name="_ftn20"&gt;&lt;sup&gt;&lt;sup&gt;[20]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Supra&lt;/i&gt; Note 14.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn21"&gt;
&lt;p&gt;&lt;a href="#_ftnref21" name="_ftn21"&gt;&lt;sup&gt;&lt;sup&gt;[21]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Paul Ohm, "The Rise and Fall of Invasive ISP Surveillance," available at 			&lt;a href="http://paulohm.com/classes/infopriv10/files/ExcerptOhmISPSurveillance.pdf"&gt; http://paulohm.com/classes/infopriv10/files/ExcerptOhmISPSurveillance.pdf &lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn22"&gt;
&lt;p&gt;&lt;a href="#_ftnref22" name="_ftn22"&gt;&lt;sup&gt;&lt;sup&gt;[22]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Ben Elgin and Bruce Einhorn, "The great firewall of China", available at 			&lt;a href="http://www.bloomberg.com/news/articles/2006-01-22/the-great-firewall-of-china"&gt; http://www.bloomberg.com/news/articles/2006-01-22/the-great-firewall-of-china &lt;/a&gt; .&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn23"&gt;
&lt;p&gt;&lt;a href="#_ftnref23" name="_ftn23"&gt;&lt;sup&gt;&lt;sup&gt;[23]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Mike Wheatley, "Malaysia's Web Heavily Censored Before Controversial Elections", available at 			&lt;a href="http://siliconangle.com/blog/2013/05/06/malaysias-web-heavily-censored-before-controversial-elections/"&gt; http://siliconangle.com/blog/2013/05/06/malaysias-web-heavily-censored-before-controversial-elections/ &lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn24"&gt;
&lt;p&gt;&lt;a href="#_ftnref24" name="_ftn24"&gt;&lt;sup&gt;&lt;sup&gt;[24]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Fazal Majid, "Deep packet inspection rears it ugly head" available at			&lt;a href="https://majid.info/blog/telco-snooping/"&gt;https://majid.info/blog/telco-snooping/&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn25"&gt;
&lt;p&gt;&lt;a href="#_ftnref25" name="_ftn25"&gt;&lt;sup&gt;&lt;sup&gt;[25]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Alissa Cooper, "Doing the DPI Dance: Assessing the Privacy Impact of Deep Packet Inspection," in W. Aspray and P. Doty (Eds.), Privacy in America: 			Interdisciplinary Perspectives, Plymouth, UK: Scarecrow Press, 2011 at 151.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn26"&gt;
&lt;p&gt;&lt;a href="#_ftnref26" name="_ftn26"&gt;&lt;sup&gt;&lt;sup&gt;[26]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Ibid&lt;/i&gt; at 148.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn27"&gt;
&lt;p&gt;&lt;a href="#_ftnref27" name="_ftn27"&gt;&lt;sup&gt;&lt;sup&gt;[27]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Kevin Werbach, "Breaking the Ice: Rethinking Telecommunications Law for the Digital Age", Journal of Telecommunications and High Technology, 			available at &lt;a href="http://www.jthtl.org/articles.php?volume=4"&gt;http://www.jthtl.org/articles.php?volume=4&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn28"&gt;
&lt;p&gt;&lt;a href="#_ftnref28" name="_ftn28"&gt;&lt;sup&gt;&lt;sup&gt;[28]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Supra &lt;/i&gt; Note 25 at 149.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn29"&gt;
&lt;p&gt;&lt;a href="#_ftnref29" name="_ftn29"&gt;&lt;sup&gt;&lt;sup&gt;[29]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Supra &lt;/i&gt; Note 25 at 147.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn30"&gt;
&lt;p&gt;&lt;a href="#_ftnref30" name="_ftn30"&gt;&lt;sup&gt;&lt;sup&gt;[30]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; International Telecommunications Union, Recommendation ITU-T.Y.2770, Requirements for Deep Packet Inspection in next generation networks, available 			at &lt;a href="https://www.itu.int/rec/T-REC-Y.2770-201211-I/en"&gt;https://www.itu.int/rec/T-REC-Y.2770-201211-I/en&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn31"&gt;
&lt;p&gt;&lt;a href="#_ftnref31" name="_ftn31"&gt;&lt;sup&gt;&lt;sup&gt;[31]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Supra &lt;/i&gt; Note 25 at 154.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn32"&gt;
&lt;p&gt;&lt;a href="#_ftnref32" name="_ftn32"&gt;&lt;sup&gt;&lt;sup&gt;[32]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Ibid&lt;/i&gt; at 156.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn33"&gt;
&lt;p&gt;&lt;a href="#_ftnref33" name="_ftn33"&gt;&lt;sup&gt;&lt;sup&gt;[33]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Supra&lt;/i&gt; Note 10.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn34"&gt;
&lt;p&gt;&lt;a href="#_ftnref34" name="_ftn34"&gt;&lt;sup&gt;&lt;sup&gt;[34]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Paul Ohm, "The Rise and Fall of Invasive ISP Surveillance", available at 			&lt;a href="http://paulohm.com/classes/infopriv10/files/ExcerptOhmISPSurveillance.pdf"&gt; http://paulohm.com/classes/infopriv10/files/ExcerptOhmISPSurveillance.pdf &lt;/a&gt; .&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn35"&gt;
&lt;p&gt;&lt;a href="#_ftnref35" name="_ftn35"&gt;&lt;sup&gt;&lt;sup&gt;[35]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://www.nist.gov/nstic/NSTIC-FIPPs.pdf"&gt;http://www.nist.gov/nstic/NSTIC-FIPPs.pdf&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn36"&gt;
&lt;p&gt;&lt;a href="#_ftnref36" name="_ftn36"&gt;&lt;sup&gt;&lt;sup&gt;[36]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm"&gt; https://www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm &lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn37"&gt;
&lt;p&gt;&lt;a href="#_ftnref37" name="_ftn37"&gt;&lt;sup&gt;&lt;sup&gt;[37]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; "India's Surveillance State" Software Freedom Law Centre, available at 			&lt;a href="http://sflc.in/indias-surveillance-state-our-report-on-communications-surveillance-in-india/"&gt; http://sflc.in/indias-surveillance-state-our-report-on-communications-surveillance-in-india/ &lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn38"&gt;
&lt;p&gt;&lt;a href="#_ftnref38" name="_ftn38"&gt;&lt;sup&gt;&lt;sup&gt;[38]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Amber Sinha, "Are we losing our right to privacy and freedom on speech on Indian Internet", DNA, available at 			&lt;a href="http://www.dnaindia.com/scitech/column-are-we-losing-the-right-to-privacy-and-freedom-of-speech-on-indian-internet-2187527"&gt; http://www.dnaindia.com/scitech/column-are-we-losing-the-right-to-privacy-and-freedom-of-speech-on-indian-internet-2187527 &lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn39"&gt;
&lt;p&gt;&lt;a href="#_ftnref39" name="_ftn39"&gt;&lt;sup&gt;&lt;sup&gt;[39]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://cis-india.org/telecom/use-of-dpi-technology-by-isps.pdf"&gt;http://cis-india.org/telecom/use-of-dpi-technology-by-isps.pdf&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ftn40"&gt;
&lt;p&gt;&lt;a href="#_ftnref40" name="_ftn40"&gt;&lt;sup&gt;&lt;sup&gt;[40]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Smita Mujumdar, "Use of DPI Technology by ISPs - Response by the Department of Telecommunications" available at 			&lt;a href="http://cis-india.org/telecom/dot-response-to-rti-on-use-of-dpi-technology-by-isps"&gt; http://cis-india.org/telecom/dot-response-to-rti-on-use-of-dpi-technology-by-isps &lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/deep-packet-inspection-how-it-works-and-its-impact-on-privacy'&gt;https://cis-india.org/internet-governance/blog/deep-packet-inspection-how-it-works-and-its-impact-on-privacy&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2016-12-16T23:14:49Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/events/discussion-on-ranking-digital-rights-in-india-delhi-jan-07-2017">
    <title>Discussion on Ranking Digital Rights in India (Delhi, January 07)</title>
    <link>https://cis-india.org/internet-governance/events/discussion-on-ranking-digital-rights-in-india-delhi-jan-07-2017</link>
    <description>
&lt;b&gt;Towards developing an understanding of how Indian ICT companies are recognising and upholding the digital rights of their users, and to raise public awareness about the same, the Centre for Internet and Society (CIS), with the support of Privacy International, has studied 8 Indian ICT companies, using the same methodology as the 2015 Corporate Accountability Index, to gain greater insight into company practices and initiate public dialogues. Please join us on Saturday, January 07, at the India Islamic Cultural Centre, New Delhi, for a presentation of our findings followed by an open structured discussion on the methodology and implications of the study.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4&gt;Download: &lt;a href="https://github.com/cis-india/website/raw/master/docs/CIS_RDRIndia-Discussion_07012017_Invitation.pdf"&gt;Invitation and agenda&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;hr /&gt;
&lt;p&gt;The &lt;a href="https://rankingdigitalrights.org/"&gt;Ranking Digital Rights Corporate Responsibility Index&lt;/a&gt; is a project hosted by the Open Technology Institute at New America Foundation that aims to rank Information and Communications Technology (ICTs) companies with respect to their Governance, Freedom of Expression, and Privacy practices. The inaugural Corporate Accountability Index, released in November 2015, evaluated 16 companies based on the project’s methodology that included 31 indicators in total.&lt;/p&gt;
&lt;p&gt;Towards developing an understanding of how Indian ICT companies are recognising and upholding the digital rights of their users, and to raise public awareness about the same, the Centre for Internet and Society (CIS), with the support of &lt;a href="https://privacyinternational.org/"&gt;Privacy International&lt;/a&gt;, has studied 8 Indian ICT companies, using the same methodology as the 2015 Corporate Accountability Index, to gain greater insight into company practices and initiate public dialogues.&lt;/p&gt;
&lt;p&gt;Please join us on Saturday, January 07, at the India Islamic Cultural Centre, New Delhi, for a presentation of our findings followed by an open structured discussion on the methodology and implications of the Ranking Digital Rights study. We will begin at 10:30 am with a round of tea and coffee.&lt;/p&gt;
&lt;p&gt;The event is open to all, but the venue has limited space. Participants are requested to RSVP by sending an email to &lt;a href="mailto:nisha@cis-india.org?subject=RSVP: Ranking Digital Rights Discussion"&gt;nisha@cis-india.org&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To further encourage programmers, researchers, journalists, students, and users in general to use and contribute to the findings of the Ranking Digital Rights study, and critique the underlying methodology, we are also organising a “rankathon” on Sunday, January 08, at the CIS office in Delhi. More details can be found &lt;a href="http://cis-india.org/internet-governance/events/rankathon-on-digital-rights-delhi-jan-08-2017"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We look forward to your participation and contribution to the discussion. Please support us by sharing this invitation with your colleagues and networks.&lt;/p&gt;
&lt;h2&gt;Agenda&lt;/h2&gt;
&lt;table class="plain"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;10:30-11:00&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Coffee and Tea&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;11:00-11:15&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;11:15-13:00&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Presentation of the Findings and Discussion&lt;/strong&gt; &lt;em&gt;Divij Joshi and Aditya Singh Chawla&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;13:00-14:00&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Lunch&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;14:00-15:00&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Open Discussion #1: Parameters of Evaluation&lt;/strong&gt;&lt;br /&gt;The RDR methodology was based on evaluating companies’ commitments to uphold human rights through their services – in particular their commitment to users’ freedom of expression and privacy. Are there other parameters that may be considered in the Indian context?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;15:00-16:00&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Open Discussion #2: Towards Protecting Digital Rights&lt;/strong&gt;&lt;br /&gt;What steps can be taken by the government, civil society, and industry in India to create an environment that recognizes and protects users’ digital rights? What are the relevant legal, political, and economic factors to take into consideration? What steps have other, multinational ICT companies taken? Would these be realistic for Indian companies to implement?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;16:00-16:30&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;16:30-17:00&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Coffee and Tea&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/events/discussion-on-ranking-digital-rights-in-india-delhi-jan-07-2017'&gt;https://cis-india.org/internet-governance/events/discussion-on-ranking-digital-rights-in-india-delhi-jan-07-2017&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Privacy</dc:subject>
    
    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Ranking Digital Rights</dc:subject>
    
    
        <dc:subject>Digital Rights</dc:subject>
    

   <dc:date>2016-12-29T07:07:34Z</dc:date>
   <dc:type>Event</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/new-media-personalisation-and-the-role-of-algorithms">
    <title>New Media, personalisation and the role of algorithms</title>
    <link>https://cis-india.org/internet-governance/new-media-personalisation-and-the-role-of-algorithms</link>
    <description>
&lt;b&gt;In his much-acclaimed book, The Filter Bubble, Eli Pariser explains how personalisation of services on the web works and laments that it creates individual bubbles for each user, which run counter to the idea of the Internet as an inherently open place. While Pariser’s book looks at the practices of various large companies providing online services, he briefly touches upon the role of new media such as search engines and social media portals in news curation. Building upon Pariser’s unexplored argument, this article looks at the impact of algorithmic decision-making and Big Data in the context of news reporting and curation.&lt;/b&gt;
        &lt;em&gt;&lt;br /&gt;&lt;/em&gt;
&lt;blockquote&gt;
&lt;div&gt;
&lt;div&gt;&lt;em&gt;Everything which bars freedom and fullness of communication sets up barriers that divide human beings into sets and cliques, into antagonistic sects and factions, and thereby undermines the democratic way of life. &lt;/em&gt;—John Dewey&lt;/div&gt;
&lt;/div&gt;
&lt;/blockquote&gt;
&lt;p&gt;Eli Pariser, in his book, The Filter Bubble,[1] refers to the scholarship of Walter Lippmann and John Dewey as integral to the evolution of our understanding of the democratic and ethical duties of the Fourth Estate. Lippmann was disillusioned by the role of newspapers in propaganda for the First World War. He responded with three books in quick succession — Liberty and the News,[2] Public Opinion[3] and The Phantom Public.[4] Lippmann brought attention to the fact that the process of news reporting was conducted through privately determined and unexamined standards. The failure of the Fourth Estate to perform its democratic functions was, in the opinion of Lippmann, one of the prime factors responsible for the public not being an informed and rational entity. John Dewey, while rejecting Lippmann’s argument that matters of public policy can only be determined by inside experts with training and education, did acknowledge his critique of the media.&lt;/p&gt;
&lt;p&gt;Pariser points to the creation of a wall between editorial decision-making and advertiser interests as the eventual result of the Lippmann and Dewey debate. While accepting that this division between the financial and reporting sides of media houses has not always been observed, Pariser emphasises that the very existence of the standard is important.[5] Unlike traditional media, the new media that relies on algorithmic decision-making for personalisation is not subject to the same standards that try to mitigate the influence of commercial interests on editorial decisions, while performing many of the same functions as the traditional media.[6]&lt;/p&gt;
&lt;h3&gt;How personalisation algorithms work&lt;/h3&gt;
&lt;p dir="ltr"&gt;Kevin Slavin, at his famous talk in the TEDGLobal Conference, characterised algorithms as “maths that computers use to decide stuff” and that it was infiltrating every aspect of our lives.[7] According to Slavin’s view, algorithms can be seen as control technologies and shape our world constantly through media and information systems, dynamically modifying content and function through these programmed routines. Search engines and social media platforms perpetually rank user-generated content through algorithms.[8]&lt;/p&gt;
&lt;p&gt;Personalisation technologies have various advantages. They translate into more relevant content, which for service providers means more clicks and revenue, and for consumers, less time spent finding content.[9] However, they also lead to compromised privacy, lack of control and reduced individual capability.[10] Search engines like Google use the famous PageRank algorithm, which, combined with geographical location and previous searches, yields the most relevant search results.[11] The PageRank algorithm uses various real-time variables dependent on both voluntary and involuntary user inputs, including the number of clicks, the number of occurrences of the key terms, and the number of references by other credible pages. This data in turn determines the order of pages in search results and influences the way we perceive, understand and analyse information.[12] Maps showing real-time traffic information retrieve data from laser and infrared sensors alongside the road and from users’ devices. Once this real-time data is combined with historical trends, these maps recommend routes to each user, hence influencing traffic patterns.[13]&lt;/p&gt;
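To make the intuition concrete, here is a minimal sketch of the core PageRank iteration described above (illustrative only: the toy link graph and function are hypothetical, and Google's production ranking combines many more signals). A page's rank is the damped sum of the ranks of the pages linking to it, each linker's rank divided by its number of outgoing links.

```python
# Minimal, illustrative PageRank iteration (not Google's production system).
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}            # start from a uniform rank
    for _ in range(iterations):
        # Every page keeps a baseline (1 - damping) / n of rank...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        # ...and receives a damped share of each linking page's rank.
        for page, targets in links.items():
            for t in targets:
                new_rank[t] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# "b" is linked to by both "a" and "c", so it accumulates the highest rank.
ranks = pagerank({"a": ["b"], "b": ["a"], "c": ["b"]})
```

Being "referenced by other credible pages", in the article's phrasing, is exactly what the iteration rewards: a link from a highly ranked page transfers more rank than a link from an obscure one.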
&lt;p&gt;Even though this phenomenon of personalisation may appear to be new, it has been prevalent in society for ages.[14] The history of mass media culture clearly shows that personalisation has always been a method to increase market reach and customer satisfaction.[15] Newspapers have sections dedicated to special topics; radio and TV have channels dedicated to different interest groups, age groups and consumers.[16] These personalised sections in a newspaper and personalised channels on radio and television don’t just provide greater satisfaction to readers, listeners and viewers; they also provide targeted advertisement space for advertisers and content developers. However, digital footprints and the mass collection of data have made this phenomenon much more granular and detailed. The geographical location of an individual can tell a lot about their community, their culture and other important traits local to that community.[17] This data further assists in personalisation. Current developments in technology not only help in better collection of data about personal preferences but also help in better personalisation.&lt;/p&gt;
&lt;p&gt;Pariser mentions three ways in which the personalisation technologies of today differ from those of the past. First, for the very first time, individuals are alone in the filter bubble. While in traditional forms of personalisation there were various individuals who shared the same frame of reference, now there is a separate set of filters governing the dissemination of content to each individual.[18] Second, the personalisation technologies are now entirely invisible, and there is little that consumers can do to control or modify them.[19] Third, the decision to be subject to these personalisation technologies is often not an informed choice. A good example of this is an individual’s geographical location.[20]&lt;/p&gt;
&lt;h3&gt;The neutrality of New Media?&lt;/h3&gt;
&lt;p dir="ltr"&gt;More and more, we have noticed personalisation technologies having an impact on how we consume news on the Internet. Google News, Facebook’s News Feed which tries to put together a dynamic feed for both personal and global stories, and Twitter’s trending hashtag feature, have brought forward these services are key drivers of an emerging news ecosystem. Initially, this new media was hailed as a natural consequence of the Internet which would enable greater public participation, allow journalists to find more stories and engage with the readers directly. &amp;nbsp;An illustration of the same could be seen in the way Internet based news media and social networking websites behaved in the aftermath of Israel’s attacks on a United Nations run school in Gaza strip. While much of the international Internet media covered the story, Israel’s home media did not cover the story. The only exception to this was the liberal Israeli news website Ha’aretz.[21] Network graph details of Twitter, for a few days immediately after the incident clearly show the social media manifestation of the event in the personalised cyberspace. It is clearly visible that when most of the word was re-tweeting news of this heinous act of Israel, Israeli’s hardly re-tweeted this news. In fact they were busty re-tweeting the news of rocket attacks on Israel.[22]&lt;/p&gt;
&lt;p&gt;The use of social media in newsmaking was hailed by many scholars as symptomatic of the decentralisation characteristic of the Internet. It has been seen as a movement towards greater grassroots participation, negating the ‘gatekeeping’ role traditionally played by editors. Thomas Poell and José van Dijck punch holes in the theory of social media and other online technologies as mere facilitators of user participation and translators of user preferences through Big Data analytics.[23] They quote T. Gillespie’s work, which describes the narrative of these online services as platforms offering “open, neutral, egalitarian and progressive support for activity.”[24]&lt;/p&gt;
&lt;p&gt;Pedro Domingos calls the overwhelming number of choices the defining problem of the information age, and machine learning and data analytics the largest part of its solution.[25] The primary function of algorithmic decision-making in the context of the consumption of content is to narrow down the choices. Domingos is more optimistic about the impact of these technologies: he says the “last step of the decision is usually still for humans to make, but learners intelligently reduce the choices to something a human can manage.”[26] Pariser, on the other hand, is more circumspect about the coercive result of machine learning algorithms. Whichever way we lean, we have to accept that a large part of what personalisation algorithms do is select and prioritise content by categorising it on the basis of relevance and popularity.&lt;/p&gt;
&lt;p&gt;Poell and van Dijck call this a new knowledge logic, which in effect replaces human judgement (as earlier exercised by editors) with a kind of proxy decision-making based on data. Their main thesis is that there is little evidence to suggest that the latter is more democratic than the former, and that it creates new problems of its own. They go on to compare the practices of various services, including Facebook’s News Feed and Twitter’s trending topics, and conclude that they prioritise breaking news stories over other kinds of content.[27] For instance, the algorithm for trending topics depends not on the volume but on the velocity of tweets with a given hashtag or term. It could be argued that, given this predilection, the algorithms will rarely prefer complex content. If we go by Lippmann and Dewey’s idea that the role of the Fourth Estate is to inform public debate and ensure the accountability of those in positions of power, this aspect of Big Data algorithms does not correspond with that role.&lt;/p&gt;
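The volume-versus-velocity distinction can be sketched with a small hypothetical scoring function (illustrative only; this is not Twitter's actual trending algorithm, and all names and numbers below are invented for the example): a short burst of tweets outscores a slow, steady stream even when its total volume is far lower.

```python
# Hypothetical velocity-based trending score: tweets per second over a
# recent window, so a fresh burst beats a larger but slow-moving topic.
def velocity_score(timestamps, now, window=60):
    """Tweets per second over the most recent `window` seconds."""
    recent = [t for t in timestamps if 0 <= now - t <= window]
    return len(recent) / window

# One tweet every 10 seconds for an hour: 360 tweets, but low velocity.
steady = list(range(0, 3600, 10))
# One tweet per second in the final minute: only 60 tweets, high velocity.
burst = list(range(3540, 3600))

now = 3600
scores = {"steady": velocity_score(steady, now),
          "burst": velocity_score(burst, now)}
# The burst topic wins on velocity despite a sixth of the total volume.
```

A slow-building, complex story accumulates volume without ever achieving velocity, which is the predilection the paragraph above describes.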
&lt;h3&gt;Quantified Audience&lt;/h3&gt;
&lt;p dir="ltr"&gt;Another aspect of use of Big Data and algorithms in New Media that requires attention is that the networked infrastructure enables a quantified audience. C W Anderson who has studied newsroom practices in the US looked at role played by audience quantification and rationalization in shifting newswork practices. He concluded that more and more, journalists are less autonomous in their news decisions and increasingly reliant on audience metrics as a supplement to news &amp;nbsp;judgment.[28] Poell and van Dijck review the the practices by some leading publications such a New York Times, L.A. Times and Huffington Post, and degree to which audience metrics &amp;nbsp;dictates editorial decisions. While New York Times seems to prioritise content on their social media portals based on expectation of spike in user traffic, L.A. Times goes one step further by developing content specifically aimed towards promoting greater social participation. Neither of these practices though compare to the reliance on SEO and SMO strategies of web-born news providers like Huffington Post. They have traffic editors who trawl the Internet for trending topics and popular search terms, the feedback from them dictates the content creation.[29]&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p dir="ltr"&gt;The above factors demonstrate that the idea of New Media leading to the Fourth Estate performing its democratic functions does not take into account the actual practices. This idea is based on the erroneous assumption that technology, in general and algorithms, in particular are neutral. While the emergence of New Media might have reduced the gatekeeping role played by the editors, its strong prioritisation of content that will be popular reduce the validity of arguments that it leads to more informed public discussion. As Pariser said, the traditional media scores over the New Media inasmuch as there is an existence of a standard of division between editorial decisionmaking and advertiser interest. While this standard is flouted by media houses all the time, it exists as a metric to aspire to and measure service providers against. The New Media performs many of the same functions and maybe it is time to evolve some principles and ethical standards that take into account the need for it to perform these democratic functions.&lt;/p&gt;
&lt;h3&gt;Endnotes&amp;nbsp;&lt;/h3&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/sup&gt; Eli Pariser, The Filter Bubble: What the Internet is
hiding from you (The Penguin Press, New York, 2011)&amp;nbsp;&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;span class="MsoFootnoteReference"&gt;&lt;span class="MsoFootnoteReference"&gt;[2]&lt;/span&gt;&lt;/span&gt;&amp;nbsp;Walter Lippmann, Liberty and News (Harcourt, Brace
and Howe, New York 1920) available at&lt;a href="https://archive.org/details/libertyandnews01lippgoog"&gt;https://archive.org/details/libertyandnews01lippgoog&lt;/a&gt;&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[3]&lt;/sup&gt;&lt;/sup&gt; Walter Lippmann, Public Opinion (Harcourt, Brace and
Howe, New York 1920) available at &lt;a href="http://xroads.virginia.edu/~Hyper2/CDFinal/Lippman/cover.html"&gt;http://xroads.virginia.edu/~Hyper2/CDFinal/Lippman/cover.html&lt;/a&gt;&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[4]&lt;/sup&gt;&lt;/sup&gt; Walter Lippmann, The Phantom Public (Transaction
Publishers, New York, 1925)&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[5]&lt;/sup&gt;&lt;/sup&gt; &lt;em&gt;Supra&lt;/em&gt; Note
1 at 35.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[6]&lt;/sup&gt;&lt;/sup&gt; &lt;em&gt;Supra&lt;/em&gt; Note
1 at 36.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[7]&lt;/sup&gt;&lt;/sup&gt; &lt;a href="https://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world/transcript?language=en"&gt;https://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world/transcript?language=en&lt;/a&gt;&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[8]&lt;/sup&gt;&lt;/sup&gt; Fenwick McKelvey, “Algorithmic Media Need Democratic
Methods: Why Publics Matter”, available at &lt;a href="http://www.fenwickmckelvey.com/wp-content/uploads/2014/11/2746-9231-1-PB.pdf"&gt;http://www.fenwickmckelvey.com/wp-content/uploads/2014/11/2746-9231-1-PB.pdf&lt;/a&gt;.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[9]&lt;/sup&gt;&lt;/sup&gt; &lt;a href="http://mashable.com/2011/06/03/filters-eli-pariser/#9tIHrpa_9Eq1"&gt;http://mashable.com/2011/06/03/filters-eli-pariser/#9tIHrpa_9Eq1&lt;/a&gt;&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[10]&lt;/sup&gt;&lt;/sup&gt; Helen Ashman, Tim Brailsford, Alexandra Cristea, Quan
Z Sheng, Craig Stewart, Elaine Torns and Vincent Wade, “The ethical and social
implications of personalization technologies for e-learning” available at &lt;a href="http://www.sciencedirect.com/science/article/pii/S0378720614000524"&gt;http://www.sciencedirect.com/science/article/pii/S0378720614000524&lt;/a&gt;.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[11]&lt;/sup&gt;&lt;/sup&gt; Sergey Brin and Lawrence Page, “The Anatomy of a
Large-Scale Hypertextual Web Search Engine” available at &lt;a href="http://infolab.stanford.edu/pub/papers/google.pdf"&gt;http://infolab.stanford.edu/pub/papers/google.pdf&lt;/a&gt;.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[12]&lt;/sup&gt;&lt;/sup&gt; Ian Rogers, “The Google Pagerank Algorithm and How It
Works” available at &lt;a href="http://www.cs.princeton.edu/~chazelle/courses/BIB/pagerank.htm"&gt;http://www.cs.princeton.edu/~chazelle/courses/BIB/pagerank.htm&lt;/a&gt;.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[13]&lt;/sup&gt;&lt;/sup&gt; Trygve Olson and Terry Nelson, “The Internet’s Impact
on Political Parties and Campaigns”, available at &lt;a href="http://www.kas.de/wf/doc/kas_19706-544-2-30.pdf?100526130942"&gt;http://www.kas.de/wf/doc/kas_19706-544-2-30.pdf?100526130942&lt;/a&gt;.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[14]&lt;/sup&gt;&lt;/sup&gt; Ian Witten, “Bias, privacy and and personalisation on
the web”, available at &lt;a href="http://www.cs.waikato.ac.nz/~ihw/papers/07-IHW-Bias,privacyonweb.pdf"&gt;http://www.cs.waikato.ac.nz/~ihw/papers/07-IHW-Bias,privacyonweb.pdf&lt;/a&gt;.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[15]&lt;/sup&gt;&lt;/sup&gt; &lt;em&gt;Supra&lt;/em&gt; Note
1 at 10.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[16]&lt;/sup&gt;&lt;/sup&gt; &lt;a href="https://www.americanpressinstitute.org/publications/reports/survey-research/social-demographic-differences-news-habits-attitudes/"&gt;https://www.americanpressinstitute.org/publications/reports/survey-research/social-demographic-differences-news-habits-attitudes/&lt;/a&gt;&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[17]&lt;/sup&gt;&lt;/sup&gt; Charles Heatwole, “Culture: A Geographical Perspective”
available at &lt;a href="http://www.p12.nysed.gov/ciai/socst/grade3/geograph.html"&gt;http://www.p12.nysed.gov/ciai/socst/grade3/geograph.html&lt;/a&gt;.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[18]&lt;/sup&gt;&lt;/sup&gt; &lt;em&gt;Supra&lt;/em&gt; Note
1 at 10.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[19]&lt;/sup&gt;&lt;/sup&gt; &lt;em&gt;Id&lt;/em&gt;.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[20]&lt;/sup&gt;&lt;/sup&gt; &lt;em&gt;Supra&lt;/em&gt; Note
1 at 11.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[21]&lt;/sup&gt;&lt;/sup&gt; Paul Mason, “Why Israel is losing the social media
war over Gaza?” available at &lt;a href="http://blogs.channel4.com/paul-mason-blog/impact-social-media-israelgaza-conflict/1182"&gt;http://blogs.channel4.com/paul-mason-blog/impact-social-media-israelgaza-conflict/1182&lt;/a&gt;.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[22]&lt;/sup&gt;&lt;/sup&gt; Gilad Lotan, Israel, Gaza, War &amp;amp; Data: Social
Networks and the Art of Personalizing Propaganda available at &lt;a href="http://www.huffingtonpost.com/entry/israel-gaza-war-social-networks-data_b_5658557.html"&gt;www.huffingtonpost.com/entry/israel-gaza-war-social-networks-data_b_5658557.html&lt;/a&gt;&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[23]&lt;/sup&gt;&lt;/sup&gt; Thomas Poell and José van Dijck, “Social Media and
Journalistic Independence” in Media Independence: Working with Freedom or
Working for Free?, edited by James Bennett &amp;amp; Niki Strange. (Routledge,
London, 2015)&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[24]&lt;/sup&gt;&lt;/sup&gt; T Gillespie, “The politics of ‘platforms,” in New
Media &amp;amp; Society (Volume 12, Issue 3).&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[25]&lt;/sup&gt;&lt;/sup&gt; Pedro Domingos, The Master Algorithm: How the quest
for the ultimate learning machine will re-make the world (Basic Books, New
York, 2015) at 38.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[26]&lt;/sup&gt;&lt;/sup&gt; &lt;em&gt;Ibid&lt;/em&gt; at 40.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[27]&lt;/sup&gt;&lt;/sup&gt; &lt;em&gt;Supra&lt;/em&gt; Note
23.&lt;/p&gt;
&lt;p class="normal"&gt;&lt;sup&gt;&lt;sup&gt;[28]&lt;/sup&gt;&lt;/sup&gt; C W Anderson, Between creative and quantified
audiences: Web metrics and changing patterns of newswork in local US newsrooms,
available at &lt;a href="https://www.academia.edu/10937194/Between_Creative_And_Quantified_Audiences_Web_Metrics_and_Changing_Patterns_of_Newswork_in_Local_U.S._Newsrooms"&gt;https://www.academia.edu/10937194/Between_Creative_And_Quantified_Audiences_Web_Metrics_and_Changing_Patterns_of_Newswork_in_Local_U.S._Newsrooms&lt;/a&gt;&lt;/p&gt;
&lt;p dir="ltr"&gt;
&lt;sup&gt;&lt;sup&gt;[29]&lt;/sup&gt;&lt;/sup&gt; &lt;em&gt;Supra &lt;/em&gt;Note 23.&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;span id="docs-internal-guid-24b4db2a-a606-d425-16ff-1d76b980367d"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/new-media-personalisation-and-the-role-of-algorithms'&gt;https://cis-india.org/internet-governance/new-media-personalisation-and-the-role-of-algorithms&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Human Rights</dc:subject>
    
    
        <dc:subject>Big Data</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Machine Learning</dc:subject>
    
    
        <dc:subject>Algorithms</dc:subject>
    
    
        <dc:subject>New Media</dc:subject>
    

   <dc:date>2017-01-16T07:20:52Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/epw-amber-sinha-may-18-2018-for-indias-data-protection-regime-to-be-efficient-policymakers-should-treat-privacy-as-a-social-good">
    <title>India's Data Protection Framework Will Need to Treat Privacy as a Social and Not Just an Individual Good</title>
    <link>https://cis-india.org/internet-governance/blog/epw-amber-sinha-may-18-2018-for-indias-data-protection-regime-to-be-efficient-policymakers-should-treat-privacy-as-a-social-good</link>
    <description>
&lt;b&gt;The idea that technological innovations may compete with the privacy of individuals assumes that there is social and/or economic good in allowing unrestricted access to data. However, it must be remembered that data is potentially a toxic asset if it is not collected, processed, secured and shared appropriately.&lt;/b&gt;
        &lt;div class="field-label-hidden      field-type-text-with-summary field-name-body field" style="text-align: justify; "&gt;
&lt;div class="field-items"&gt;
&lt;div class="even field-item"&gt;
&lt;p&gt;Published in Economic &amp;amp; Political Weekly, Volume 53, Issue No. 18, 05 May, 2018. Article can be &lt;a class="external-link" href="http://www.epw.in/engage/article/for-indias-data-protection-regime-to-be-efficient-policymakers-should-treat-privacy-as-a-social-good"&gt;accessed online here&lt;/a&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;In July 2017, the Ministry of Electronics and Information Technology (MeitY) in India set up a committee headed by a former judge, B N Srikrishna, to address the growing clamour for privacy protections at a time when both private collection of data and public projects like Aadhaar are reported to pose major privacy risks (Maheshwari 2017). The Srikrishna Committee is in the process of providing its input, which will go on to inform India’s data-protection law.&lt;/p&gt;
&lt;p&gt;While the committee released a white paper with provisional views and sought feedback a few months ago, it may be discussing a data protection framework without due consideration of how data practices have evolved.&lt;/p&gt;
&lt;p&gt;In early 2018, a series of stories based on investigative journalism by the &lt;em&gt;Guardian&lt;/em&gt; and the &lt;em&gt;Observer&lt;/em&gt; revealed that the data of 87 million Facebook users was used for the Trump campaign by a political consulting firm, Cambridge Analytica, without their permission. Aleksandr Kogan, a psychology researcher at the University of Cambridge, created an application called “thisisyourdigitallife” and collected data from 270,000 participants through a personality test using Facebook’s application programming interface (API), which allows developers to integrate with various parts of the Facebook platform (Fruchter et al 2018). This data was purportedly collected for academic research purposes only. Kogan’s application also collected profile data from each of the participants’ friends, roughly 87 million people.&lt;/p&gt;
&lt;p&gt;The kinds of practices concerning the sharing and processing of data exhibited in this case are not unique. They are, in fact, common to the data economy in India as well. It can be argued that the Facebook–Cambridge Analytica incident is representative of data practices in the data-driven digital economy. These new practices pose important questions about how data protection laws globally may need to evolve, particularly for India, which is in the process of drafting its own data protection law.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Privacy as Control&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Most             modern data protection laws focus on individual control. In             this context, the definition by the late Alan Westin             (2015) characterises privacy as:&lt;/p&gt;
&lt;blockquote style="padding-left: 20px; "&gt;
&lt;p&gt;The claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The idea of “privacy as control” is what finds articulation in data protection policies across jurisdictions, beginning with the Fair Information Practice Principles (FIPPs) from the United States (US) (Dixon 2006). These FIPPs are the building blocks of modern information privacy law (Schwartz 1999) and not only play a significant role in the development of privacy laws in the US, but also inform data protection laws in most privacy regimes internationally (Rotenberg 2001), including the nine “National Privacy Principles” articulated by the Justice A P Shah Committee in India. Much of this approach is also reflected in the white paper released by the committee, led by Justice Srikrishna, towards the creation of data protection laws in India (Srikrishna 2017).&lt;/p&gt;
&lt;p&gt;This             approach essentially involves the following steps (Cate             2006):&lt;/p&gt;
&lt;p&gt;(i)             Data controllers are required to tell individuals what data             they wish to collect and use and give them a choice to share             the data. &lt;br /&gt; (ii) Upon sharing, the individuals have rights such as being             granted access, and data controllers have obligations such             as securing the data with appropriate technologies and             procedures, and only using it for the purposes identified.&lt;/p&gt;
&lt;p&gt;The objective in this approach is to empower individuals and allow them to weigh their own interests in exercising their consent. The allure of this paradigm is that, in one elegant stroke, it seeks to “ensure that consent is informed and free and thereby also (seeks) to implement an acceptable tradeoff between privacy and competing concerns” (Sloan and Warner 2014). This approach is also easy to enforce for both regulators and businesses. Data collectors and processors only need to ensure that they comply with their privacy policies, and can thus reduce their liability while, theoretically, consumers have the information required to exercise choice. In recent years, however, the emergence of big data, the “Internet of Things,” and algorithmic decision-making has significantly compromised the notice and consent model (Solove 2013).&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Limitations of Consent &lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Some cognitive problems, such as long and difficult-to-understand privacy notices, have always existed with regard to informed consent, but lately these problems have become aggravated. Privacy notices often come in the form of long legal documents, much to the detriment of readers’ ability to understand them. These policies are “long, complicated, full of jargon and change frequently” (Cranor 2012).&lt;/p&gt;
&lt;p&gt;Kent             Walker (2001) lists five problems that privacy notices             typically suffer from:&lt;/p&gt;
&lt;p&gt;(i) Overkill: Long and repetitive text in small print.&lt;br /&gt; (ii) Irrelevance: Describing situations of little concern to most consumers.&lt;br /&gt; (iii) Opacity: Broad terms that reflect limited truth, and are unhelpful in tracking and controlling the information collected and stored.&lt;br /&gt; (iv) Non-comparability: The simplification required to achieve comparability compromises accuracy.&lt;br /&gt; (v) Inflexibility: Failure to keep pace with new business models.&lt;/p&gt;
&lt;p&gt;Today, data is collected continuously with every use of online services, making it humanly impossible to exercise meaningful consent. &lt;br /&gt; The quantity of data being generated is expanding at an exponential rate. With connected devices, smartphones and appliances transmitting data about our usage, and even smart cities themselves, data now streams constantly from almost every sector and function of daily life, “creating countless new digital puddles, lakes, tributaries and oceans of information” (Bollier 2010).&lt;/p&gt;
&lt;p&gt;The infinitely complex nature of the data ecosystem renders consent of little value even in cases where individuals are able to read and comprehend privacy notices. As the uses of data are so diverse, and often not limited by a purpose identified at the beginning, individuals cannot conceptualise how their data will be aggregated and possibly used or reused.&lt;/p&gt;
&lt;p&gt;Seemingly             innocuous bits of data revealed at different stages could be             combined to reveal sensitive information about the             individual. While the regulatory framework is designed such             that individuals are expected to engage in cost–benefit             analysis of trading their data to avail services, this             ecosystem makes such individual analysis impossible.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Conflicts Between Big Data               and Individual Control&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The thrust of big data technologies is that the value of data resides not in its primary purposes, but in its numerous secondary purposes, where data is reused many times over (Mayer-Schönberger and Cukier 2013).&lt;/p&gt;
&lt;p&gt;On the other hand, the idea of privacy as control draws from the “data minimisation” principle, which requires organisations to limit the collection of personal data to the minimum extent necessary to achieve their legitimate purpose and to delete data no longer required. Control is exercised and privacy is enhanced by ensuring data minimisation. These two concepts are in direct conflict. Modern data-driven businesses want to retain as much data as possible for secondary uses. Since these secondary uses are, by their nature, unanticipated, their practices run counter to the very principle of purpose limitation (Tene and Polonetsky 2012).&lt;/p&gt;
&lt;p&gt;It             is evident from such data-sharing practices, as demonstrated             by the Cambridge Analytica–Facebook story, that platform             architectures are designed with a clear view to collect as             much data as possible. This is amply demonstrated by the             provision of a “friends permission” feature by Facebook on             its platform to allow individuals to share information not             just about themselves, but also about their friends. For the             principle of informed consent to be meaningfully             implemented, it is necessary for users to have access to             information about intended data practices, purposes and             usage, so they consciously share data about themselves.&lt;/p&gt;
&lt;p&gt;In reality, however, privacy policies are more likely to serve as liability disclaimers for companies than any kind of guarantee of privacy for consumers. A case in point is Mark Zuckerberg’s facile claim that there was no “data breach” in the Cambridge Analytica–Facebook incident. Instead of asking each of the 87 million users whether they wanted their data to be collected and shared further, Facebook designed a platform that required consent in any form only from 270,000 users. Not only were users denied the opportunity to give consent; their consent was assumed through a feature which was on by default. This is representative of how privacy trade-offs are conceived by current data-driven business models. Participation in a digital ecosystem is by itself deemed as users’ consent to relinquish control over how their data is collected, who may have access to it, and what purposes it may be used for.&lt;/p&gt;
&lt;p&gt;Yet, Zuckerberg would have us believe that the primary privacy issue of concern is not how his platform enabled the collection of users’ data without their explicit consent, but the subsequent unauthorised sharing of the data by Kogan. Zuckerberg’s insistence that collection of data of people without their consent is not a data breach is reminiscent of the UIDAI’s recent claims in India that publication of Aadhaar numbers and related information by several government websites is not a data breach, so long as its central biometric database is secure (Sharma 2018). In such cases too, the intended architecture ensured the seeding of other databases with Aadhaar numbers, thus creating multiple potential points of failure through disclosure. Similarly, the design flaws in direct benefit transfers enabled Airtel to create payments bank accounts without the customers’ knowledge (&lt;em&gt;Hindu Business Line&lt;/em&gt; 2017). Such claims clearly suggest the very limited responsibility data controllers (both public and private) are willing to take for personal data that they collect, while wilfully facilitating and encouraging data practices which may lead to greater risk to data.&lt;/p&gt;
&lt;p&gt;On this note, it is also relevant to point out that the Srikrishna committee white paper begins with identifying informational privacy and data innovation as its two key objectives. It states that “a firm legal framework for data protection is the foundation on which data-driven innovation and entrepreneurship can flourish in India.”&lt;/p&gt;
&lt;p&gt;Conversations             around privacy and data have become inevitably linked to the             idea of technological innovation as a competing interest.             Before engaging in such conversations, it is important to             acknowledge that the value of innovation as a competing             interest itself is questionable. It is not a competing             right, nor a legitimate public interest endeavour, nor a             proven social good.&lt;/p&gt;
&lt;p&gt;The idea that, in policymaking, technological innovations may compete with privacy of individuals assumes that there is social and/or economic good in allowing unrestricted access to data. The social argument is premised on the promises of mathematical models and computational capacity being capable of identifying key insights from data. In turn, these insights may be useful in public and private decision-making. However, it must be remembered that data is potentially a toxic asset if it is not collected, processed, secured and shared in the appropriate way. Sufficient research suggests that indiscriminate data collection is greatly increasing the ratio of noise to signal, and can lead to erroneous insights. Further, the greater the amount of data collected, the greater the attack surface exposed to cybersecurity risks. Incidents such as Facebook–Cambridge Analytica demonstrate the toxicity of data in various ways and underscore the need for data regulation at every stage of the data lifecycle (Schneier 2016). These are important tempering factors that need to be kept in mind while evaluating data innovation as a key mover of policy or regulation.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Privacy as Social Good&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;As             long as privacy is framed as arising primarily from             individual control, data controllers will continue to engage             in practices that compromise the ability to exercise choice.             There is a need to view privacy as a social good, and             policymaking should ensure its preservation and enhancement.             Contractual protections and legal sanctions can themselves             do little if platform architectures are designed to do the             exact opposite.&lt;/p&gt;
&lt;p&gt;More importantly, policymaking needs to recognise privacy not merely as an individual right, available for individuals to forego when engaging with data-driven business models, but also as a social good. The recognition of something as a social good deems it desirable by definition, and a legitimate goal of law and policy, rather than leaving its achievement entirely to market forces.&lt;/p&gt;
&lt;p&gt;The             Puttaswamy judgment (K Puttaswamy v Union of India             2017) lends sufficient weight to privacy’s social value by             identifying it as fundamental to any individual development             through its dependence on solitude, anonymity, and temporary             releases from social duties.&lt;/p&gt;
&lt;p&gt;Sociological scholarship demonstrates that different types of social relationships, be it Gesellschaft (interest groups and acquaintances) or Gemeinschaft (friendship, love, and marriage), and the nature of these relationships, depend on the ability to conceal certain things (Simmel 1906). Demonstrating this in the context of friendships, it has been stated that such relationships “present a very peculiar synthesis in regard to the question of discretion, of reciprocal revelation and concealment.” Friendships, much like most other social relationships, are very much dependent on our ability to selectively present ourselves to others. Contrast this with Zuckerberg’s stated aim of making the world more “open,” where information about people flows freely and effectively without any individual control. Contrast this also with government projects such as Aadhaar, which intends to act as one universal identity that can provide a 360-degree view of citizens.&lt;/p&gt;
&lt;p&gt;Other scholars such as Julie Cohen (2012) and Anita Allen (2011) have demonstrated that data that a person produces or has control over concerns both herself and others. Individuals can be exposed not only because of their own actions and choices, but can also be made vulnerable merely because others have been careless with their data. This point is amply demonstrated in the Facebook–Cambridge Analytica incident. What this means is that protection of privacy requires not just individual action, but, in a sense, group co-ordination. It is my argument that this group interest in privacy as a social good must be the basis of policymaking and regulation of data in the future, in addition to the idea of privacy as an individual right. In the absence of attention to the social good aspect of privacy, individual consumers are left to their own devices to negotiate their privacy trade-offs with large companies and governments, and are significantly compromised in doing so.&lt;/p&gt;
&lt;p&gt;What this translates into is that regulatory and data protection frameworks should not be value-neutral in their conception of privacy as a facet of individual control. The complete reliance of data regulation on the data subject to make an informed choice is, in my opinion, an idea that has run its course. If privacy is viewed as a social good, then the data protection framework, including the laws and the architecture, must be designed with a view to protect it, rather than leave it entirely to market forces.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;The Way Forward&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Data             protection laws need to be re-evaluated, and policymakers             must recognise Lawrence Lessig’s dictum that “code is law.”             Like laws, architecture and norms can play a fundamental             role in regulation. Regulatory intervention for technology             need not mean regulation of technology only, but also how             technology itself may be leveraged for regulation (Lessig             2006; Reidenberg 1998). It is key that the latter is not             left only in the hands of private players. &lt;br /&gt; Zuckerberg, in his testimony (&lt;em&gt;Washington Post&lt;/em&gt; 2018) before             the United States Senate's Commerce and Judiciary             committees, asserted that "AI tools" are central to any             strategy for addressing hate speech, fake news, and             manipulations that use data ecosystems for targeting.&lt;/p&gt;
&lt;p&gt;What             is most concerning in his testimony is the complete lack of             mention of standards, public scrutiny and peer-review             processes, which “AI tools” and regulatory technologies need             to be subject to. Further, it cannot be expected that             data-driven businesses will view privacy as a social good or             be publicly accountable.&lt;/p&gt;
&lt;p&gt;As             policymakers in India gear up for writing the country’s data             protection law, they must acknowledge that their             responsibility extends to creating norms and principles that             will inform future data-driven platforms and regulatory             technologies.&lt;/p&gt;
&lt;p&gt;Since             issues of privacy and data protection will have to be             increasingly addressed at the level of how architectures             enable data collection, and more importantly how data is             used after collection, policymakers must recognise that             being neutral about these practices is no longer enough.             They must take normative positions on data collection,             processing and sharing practices. These positions cannot be             implemented through laws only, but need to be translated             into technological solutions and norms.  Unless a             multipronged approach comprising laws, architecture and             norms is adopted, India’s new data protection regime may end             up with limited efficacy.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/epw-amber-sinha-may-18-2018-for-indias-data-protection-regime-to-be-efficient-policymakers-should-treat-privacy-as-a-social-good'&gt;https://cis-india.org/internet-governance/blog/epw-amber-sinha-may-18-2018-for-indias-data-protection-regime-to-be-efficient-policymakers-should-treat-privacy-as-a-social-good&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-05-18T06:22:57Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/comments-on-information-technology-security-of-prepaid-payment-instruments-rules-2017">
    <title>Comments on Information Technology (Security of Prepaid Payment Instruments) Rules, 2017</title>
    <link>https://cis-india.org/internet-governance/blog/comments-on-information-technology-security-of-prepaid-payment-instruments-rules-2017</link>
    <description>
        &lt;b&gt;The Centre for Internet and Society submitted comments on the Information Technology (Security of Prepaid Payment Instruments) Rules, 2017. The comments were prepared by Udbhav Tiwari, Pranesh Prakash, Abhay Rana, Amber Sinha and Sunil Abraham. &lt;/b&gt;
        &lt;h3 style="text-align: justify; "&gt;1. Preliminary&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;1.1. This submission presents comments by the Centre for Internet and Society&lt;a href="#_ftn1" name="_ftnref1"&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/a&gt; in response to the Information Technology (Security of Prepaid Payment Instruments) Rules 2017 (“the Rules”).&lt;a href="#_ftn2" name="_ftnref2"&gt;&lt;sup&gt;[2]&lt;/sup&gt;&lt;/a&gt; On March 8, 2017, the Ministry of Electronics and Information Technology (MEIT) issued a consultation paper calling for the development of a framework for the security of digital wallets operating in the country. The proposed rules have been drafted under the provisions of the Information Technology Act, 2000, and comments have been invited from the general public and stakeholders before the enactment of these rules.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;2. The Centre for Internet and Society&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;2.1. The Centre for Internet and Society, (“CIS”), is a non-profit organisation that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with diverse abilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, and open access), internet governance, telecommunication reform, digital privacy, and cyber-security.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;2.2. This submission is consistent with CIS’ commitment to safeguarding general public interest, and the interests and rights of various stakeholders involved, especially the privacy and data security of citizens. CIS is thankful to the MEIT for this opportunity to provide feedback to the draft rules.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;3. Comments&lt;/h3&gt;
&lt;h4 style="text-align: justify; "&gt;3.1  General Comments&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Penalty&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There is no penalty for not complying with these rules. Even the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 do not have penalties. Under section 43A of the Information Technology Act (under which the 2011 Rules have been promulgated), a wrongful gain or a wrongful loss needs to be demonstrated. This should not be a requirement for the financial sector.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Expansion to Contractual Parties&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A majority of these rules, in order to be effective and realistically protect consumer interest, should also be extended to third parties, agents, contractual partners and any other relevant party to whom an e-PPI issuer may delegate functions.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;3.2  Rule 2: Definitions&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Certain key terms relevant to the field of e-PPI-based digital payments, such as authorisation, metadata, etc., are not defined in the rules; they should be defined and accounted for to ensure that modern developments such as big data and machine learning, digital surveillance, etc. do not violate human rights and consumer interest.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;3.3  Rule 7: Definition of personal information&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Rule 7 provides an exhaustive list of data that will be deemed to be personal information for the purposes of the Rules. While &lt;b&gt;information collected&lt;/b&gt; at the time of issuance of the pre-paid payment instrument and during its use is included within the scope of Rule 7, it makes no reference to metadata generated and collected by the e-PPI issuer.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;3.4 Rule 4: Inadequate privacy protections&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Rule 4(2) specifies the details that the privacy policies of each e-PPI issuer must contain. However, these specifications are highly inadequate and fall well below the recommendations under the National Privacy Principles in the Report of the Group of Experts on Privacy chaired by Justice A P Shah.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Suggestions: The Rules should include clearly specified rights to access, correction and opt-in/opt-out, continuing obligations to seek consent in case of change in policy or purpose, and deletion of data after the purpose is achieved. Additionally, it must be required that a log of each past version of the privacy policy be maintained along with the relevant period of applicability.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;3.5 Rule 10: Reasonable security practices&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Problem: Financial information (“such as bank account or credit card or debit card or other payment instrument details”) is already invoked in an inclusive manner in the definition of ‘personal information’ in Rule 7. Given this, there is no need to make the Reasonable Security Practices Rules applicable to financial data through this provision: it already is, and it is best to avoid unnecessary redundancy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Solution: This entire rule should be removed.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;3.6  Rule 12: Traceability&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Problem: This rule requires that payment-related interactions with customers or other service providers be “appropriately trace[able]”. But it is unclear what that would practically mean: Would IP logging suffice? Would the IMEI need to be captured for mobile transactions? What is “appropriately” traceable? None of these questions are answered.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Suggestion: The NPCI’s practices and RBI regulations, for instance, seek to limit the amount of information that entities like e-PPI providers have.  These rules need to be brought in line with those practices and regulations.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;3.7 Rule 5: Risk Assessment&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Rule 5 requires e-PPI issuers to carry out risk assessments associated with the security of the payments systems at least once a year and after any major security incident. However, there are no transparency requirements such as publications of details of such review, a summary of the analysis, any security vulnerabilities discovered etc.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Suggestion:&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;Broaden the scope of this provision to include not just risk assessments but also security audits.&lt;/li&gt;
&lt;li&gt;Mandate publication of risk assessment and security audit reports.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt; &lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;3.8 Rule 11: End-to-End Encryption&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The rule concerning end-to-end encryption (E2E) needs significantly greater detailing to be effective in ensuring the protection of information both in storage and in transit.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Suggestions: Elements such as a Secure Element (or a Secured Server) and a Trusted User Interface, both concepts that enable secure payments, can be detailed in the rule, and a timeline can be established to require hardware, e-PPI practices and security standards to realistically account for such best practices, ensuring a modern, secure and industry-accepted implementation of the rule.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;3.9 Rule 13: Retention of Information&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Problem: Rule 13 leaves the question of retention entirely unanswered by deferring future rulemaking to the Central Government.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Suggestions: Rule 13 should be expanded to include the various categories of information that can be stored, guidelines for the short-term (fast access) and long-term storage of the information retained under the rule, and other relevant details. The rule should also include the security standards that should be followed in the storage of such information, require that access logs be maintained for whenever this information is accessed by individuals, detail secure destruction practices at the end of the retention period, and finally mandate that end users be notified by the e-PPI issuer whenever such retained information is accessed, in all situations bar exceptional circumstances such as national security, compromising an ongoing criminal investigation, etc.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;3.9 Rule 14: Reporting of Cyber Incidents&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Rule 14 is an excellent opportunity to uphold transparency, accountability and consumer rights by mandating time- and information-bound notification of cyber incidents to customers, including intrusions, database breaches and any other compromise of the integrity of the financial system. While the requirement of reporting such incidents to CERT-In is already present in the Rule 12 of the CERT Rules, the rule retains the optional nature of notifying customers. The rule should include an exhaustive list of categories or kinds of cyber incidents that should be reported to affected end users without compromising the investigation of such breaches by private organisations and public authorities. Further, the rule should also include penalties for non-compliance of this requirement (both to CERT-In and the consumer) to serve as an incentive for e-PPI issuers to uphold consumer public interest. The rule should be expanded to include a detailed mechanism for such reporting, including when e-PPI issuers and the CERT-In can withhold information from consumers as well as requiring the withheld information be disclosed when the investigation has been completed. Finally, the rule should also require that such disclosures be public in nature and consumers not be required to not disseminate such information to enable informed choice by the end user community.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Suggestion:&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(1) In Rule 14(3) “may” should be substituted by “shall”.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(2) Penalties of up to 5 lakh rupees may be imposed for each day that the e-PPI issuer fails to report any severe vulnerability that could likely result in harm to customers.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;3.10 Rule 15: Customer Awareness and Education&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Problem: Rule 15 on Customer Awareness and Education by e-PPI issuers does not take into account the vast lingual diversity and varied socio-economic demographic that makes up the end users of e-PPI providers in India, by mandating the actions under the rule must account for these factors prior to be propagated.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Solutions: The rule must ensure that e-PPI issuers track record in carrying out awareness is regularly held accountable by both the government and public disclosures on their websites. Further, the rule can be made more concrete and effective by including mobile operating systems in their scope (along with equipments), mandating awareness for best practices for inclusive technologies like USSD banking, specifying notifications to include SMS reports of financial transactions, etc.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;3.11 Rule 16: Grievance Redressal&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Problem: Rule 16 lays down the requirement of grievance redressal, without specifying appellate mechanisms (both within the organisation and at the regulatory level), accountability (via penalties) for non-compliance of the rule nor requiring a clear hierarchy of responsibility within the e-PPI organisation. These factors seriously compromise the efficacy of a grievance redressal framework.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt; &lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Solutions: Similar rules for grievance redressal that have been enacted by the Insurance Regulatory and Development Authority for the insurance sector and the Telecom Regulatory Authority of India for the telecom sector can and should serve as a reference point for this rule. Their effectiveness and real world operation should also be monitored by the relevant authorities while ensuring sufficient flexibility exists in the rule to uphold consumer rights and the public interest. Proper appellate mechanisms at the regulatory level are essential along with penalties for non-compliance.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;3.12 Rule 17: Security Standards&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Problem: Rule 17 empowers the Central Government to mandate security standards to be followed by e-PPI issuers operating in India. While appreciable in its overall outlook on ensuring a minimum standard of security, the Rule needs be improved upon to make it more effective. This can be in done by specifying certain minimum security standards to ensure all e-PPI issuers have a minimal level of security, instead of leaving them open to being intimated at a later date.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Solutions: Standards that can either be made mandatory or be used as a reference point to create a new standard under Rule 17(2) are ISO/IEC 14443, IS 14202, ISO/IEC 7816, PCI DSS, etc. Further, the Rule should include penalties for non-compliance of these standards, to make them effectively enforceable by both the government and end users alike. Additional details like the maximum time period in which such security standards should be implemented post their notification, requiring regular third party audits to ensure continuing compliance and effectiveness and requiring updated standards be used upon their release will go a long way in ensuring e-PPI issuers fulfil their mandate under these Rules.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref1" name="_ftn1"&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://cis-india.org/"&gt;http://cis-india.org/&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref2" name="_ftn2"&gt;&lt;sup&gt;[2]&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://meity.gov.in/sites/upload_files/dit/files/draft-rules-security%20of%20PPI-for%20public%20comments.pdf"&gt;http://meity.gov.in/sites/upload_files/dit/files/draft-rules-security%20of%20PPI-for%20public%20comments.pdf&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/comments-on-information-technology-security-of-prepaid-payment-instruments-rules-2017'&gt;https://cis-india.org/internet-governance/blog/comments-on-information-technology-security-of-prepaid-payment-instruments-rules-2017&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Information Technology</dc:subject>
    

   <dc:date>2017-03-23T01:54:28Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/the200b-200bfundamental200b-200bright200b-200bto200b-200bprivacy-200b-200bpart200b-200biii-scope">
    <title>The Fundamental Right to Privacy: Part III – Scope</title>
    <link>https://cis-india.org/internet-governance/the200b-200bfundamental200b-200bright200b-200bto200b-200bprivacy-200b-200bpart200b-200biii-scope</link>
    <description>
        &lt;b&gt;This is the third paper in a series on the recent judgment on the right to privacy by the nine judge constitution bench of the Supreme Court in a reference matter in Puttaswamy and others v. Union of India. The first two papers on the Sources and Structure of the constitutional right to privacy are available here, and here, respectively.  While the previous papers dealt with the sources in the Constitution and the interpretive tools used by the bench to locate the right to privacy as a constitutional right, as well as the structure of the right with its various dimensions, this paper will look at the judgment for guidance on principles to determine what the scope of the right of privacy may be.&lt;/b&gt;
        
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/the200b-200bfundamental200b-200bright200b-200bto200b-200bprivacy-200b-200bpart200b-200biii-scope'&gt;https://cis-india.org/internet-governance/the200b-200bfundamental200b-200bright200b-200bto200b-200bprivacy-200b-200bpart200b-200biii-scope&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-10-02T04:14:00Z</dc:date>
   <dc:type>File</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/counter-comments-on-trais-consultation-paper-on-privacy-security-and-ownership-of-data-in-telecom-sector">
    <title>Counter Comments on TRAI's Consultation Paper on Privacy, Security and Ownership of Data in Telecom Sector</title>
    <link>https://cis-india.org/internet-governance/blog/counter-comments-on-trais-consultation-paper-on-privacy-security-and-ownership-of-data-in-telecom-sector</link>
    <description>
        &lt;b&gt;The Centre for Internet &amp; Society (CIS) has commented on the Consultation Paper on Privacy, Security and Ownership of Data in Telecom Sector published by the Telecom Regulatory Authority of India on August 9, 2017.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The submission is divided in three main parts. The first part 'Preliminary' introduces the document. The second part 'About CIS' is an overview of the organization. The third part contains the 'Counter Comments' on the Consultation Paper taking into account the submission made by other stakeholders.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Download the &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/counter-comments.pdf"&gt;full submission here&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/counter-comments-on-trais-consultation-paper-on-privacy-security-and-ownership-of-data-in-telecom-sector'&gt;https://cis-india.org/internet-governance/blog/counter-comments-on-trais-consultation-paper-on-privacy-security-and-ownership-of-data-in-telecom-sector&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-11-23T14:29:06Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/asian-age-amber-sinha-december-3-2017-">
    <title>Breeding misinformation in virtual space</title>
    <link>https://cis-india.org/internet-governance/blog/asian-age-amber-sinha-december-3-2017-</link>
    <description>
        &lt;b&gt;A well-informed citizenry and institutions that provide good information are fundamental to a functional democracy.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The phenomenon of fake news has rece-ived significant sc-holarly and  media attention over the last few years. In March, Sir Tim Berners Lee,  inventor of the World Wide Web, has called for a crackdown on fake news,  stating in an open letter that “misinformation, or fake news, which is  surprising, shocking, or designed to appeal to our biases, can spread  like wildfire.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Gartner, which annually predicts what the next year in technology  will look like, highlighted ‘increased fake news’  as one of its  predictions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The report states that by 2022, “majority of individuals in mature  economies will consume more false information than true information. Due  to its wide popularity and reach, social media has come to play a  central role in the fake news debate.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Researchers have suggested that rumours penetrate deeper within a  social network than outside, indicating the susceptibility of this  medium. Social networks such as Facebook and communities on messaging  services such as Whats-App groups provide the perfect environment for  spreading rumours. Information received via friends tends to be trusted,  and online networks allow in-dividuals to transmit information to many  friends at once.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In order to understand the recent phenomenon of fake news, it is  important to recognise that the problem of misinformation and propaganda  has existed for a long time. The historical examples of fake news go  back centuries where, prior to his coronation as Roman Emperor, Octavian  ran a disinformation campaign against Marcus Antonius to turn the Roman  populace against him.&lt;/p&gt;
&lt;p class="imgCenter" style="text-align: justify; "&gt;&lt;a class="objectNew"&gt;&lt;img alt="aa" src="http://images.asianage.com/images/fdeb4b878fd86fc0af509a2eb0b6927a4c6fdede-tc-img-preview.jpg" title="aa" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The advent of the printing press in the 15th century led to  widespread publication; however, there were no standards of verification  and journalistic ethics. Andrew Pettigrew wri-tes in his The Invention  of News, that news reporting in the 16th and 17th centuries was full of  portents about “comets, celestial apparitions, freaks of nature and  natural disasters.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In India, the immediate cause for the 1857 War of Indepen-dence was  rumours that the bones of cows and pigs were mixed with flour and used  to grease the cartridges used by the sepoys.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Leading up to the Second World War, the radio emerged as a strong  medium for dissemination of disinformation, used by the Nazis and other  Axis powers. More recently, the milk miracle in the mid-1990s consisting  of stories of the idol of Ganesha drinking milk was a popular fake news  phenomenon. In 2008, rumours about the popular snack, Kurkure, being  made out of plastic became so widespread that Pepsi, its holding  company, had to publicly rebut them.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A quick survey by us at the Centre of Internet and Society, for a  forthcoming report, of the different kinds of misinformation being  circulated in India, suggested four different kinds of fake news.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The first is a case of manufactured primary content. This includes  instances where the entire premise on which an argument is based is  patently false. In August 2017, a leading TV channel reported that  electricity had been cut to the Jama Masjid in New Delhi for non-payment  of bills. This was based on a false report carried by a news portal.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second kind of fake news involves manipulation or editing of  primary content so as to misrepresent it as something else. This form of  fake news is often seen with respect to multimedia content such as  images, pictures, audios and videos. These two forms of fake news tend  to originate outside traditional media such as newspapers and television  channels, and can be often sourced back to social media and WhatsApp  forwards.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, we see such unverified stories being picked up by  traditional media. Further, there are instances where genuine content  such as text and pictures are shared with fallacious contexts and  descriptions. Earlier this year, several dailies pointed out that an  image shared by the ministry of home affairs, purportedly of the  floodlit India-Pakistan border, was actually an image of the  Spain-Morocco border. In this case, the image was not doctored but the  accompanying information was false.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Third, more complicated cases of misinformation involve the primary  content itself not being false or manipulated, but the facts when they  are reported may be quoted out of context. Most examples of  misinformation spread by mainstream media, which has more evolved  systems of fact checking and verification, and editorial controls, would  tend to fall under this.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Finally, there are instances of lack of diligence in fully  understanding the issues before reporting. Such misrepresentations are  often encountered while reporting in fields that require specialised  knowledge, such as science and technology, law, finance etc. Such forms  of misinformation, while not suggestive of malafide intent can still  prove to be quite dangerous in shaping erroneous opinions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While the widespread dissemination of fake news contributes greatly  to its effectiveness, it also has a lot to do with the manner in which  it is designed to pander to our cognitive biases. Directionally  motivated reasoning prompts people confronted with political information  to process it with an intention to reach a certain pre-decided  conclusion, and not with the intention to assess it in a dispassionate  manner. This further results in greater susceptibility to confirmation  bias, disconfirmation bias and prior attitude effect.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Fake news is also linked to the idea of “naïve realism,” the belief  people have that their perception of reality is the only accurate view,  and those in disagreement are necessarily uninformed, irrational, or  biased. This also explains why so much fake news simply does not engage  with alternative points of view.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A well-informed citizenry and institutions that provide good  information are fundamental to a functional democracy. The use of the  digital medium for fast, unhindered and unchecked spread of information  presents a fertile ground for those seeking to spread misinformation.  How we respond to this issue will be vital for democratic societies in  our immediate future. Fake news presents a complex regulatory challenge  that requires the participation of different stakeholders such as the  content disseminators, platforms, norm guardians which include  institutional fact checkers, trade organisations, and “name-and-shaming”  watchdogs, regulators and consumers.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/asian-age-amber-sinha-december-3-2017-'&gt;https://cis-india.org/internet-governance/blog/asian-age-amber-sinha-december-3-2017-&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-12-08T02:24:29Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/aadhaar-bill-fails-to-incorporate-suggestions-by-the-standing-committee">
    <title>Aadhaar Bill fails to incorporate suggestions by the Standing Committee</title>
    <link>https://cis-india.org/internet-governance/blog/aadhaar-bill-fails-to-incorporate-suggestions-by-the-standing-committee</link>
    <description>
        &lt;b&gt;In 2011, a standing committee report led by Yashwant Sinha had been scathing in its indictments of the Aadhaar Bill introduced by the UPA government. Five years later, the NDA government has introduced a new bill which is a rehash of the same. I look at the concerns raised by the committee report, none of which have been addressed by the new bill.
&lt;/b&gt;
        
&lt;p id="docs-internal-guid-0c1d0148-5959-8221-80f0-984c1f109411" dir="ltr"&gt;The article was published by &lt;a class="external-link" href="http://thewire.in/2016/03/10/aadhaar-bill-fails-to-incorporate-standing-committees-suggestions-24433/"&gt;The Wire&lt;/a&gt;&lt;a class="external-link" href="https://globalvoices.org/2016/02/09/a-good-day-for-the-internet-everywhere-india-bans-differential-data-pricing/"&gt; &lt;/a&gt;on March 10, 2016&lt;/p&gt;
&lt;p dir="ltr"&gt;In December, 2010, the UPA Government introduced the National Identification Authority of India Bill, 2010 in the Parliament. It was subsequently referred to a Standing Committee on Finance by the Speaker of Lok Sabha under Rule 331E of the the Rules of Procedure and Conduct of Business in Lok Sabha. This Committee, headed by BJP leader Yashwant Sinha took evidence from the Minister of Planning and the UIDAI from the government, as well as seeking the view of parties such as the National Human Rights Commission, Indian Banks Association and researchers like Dr Reetika Khera and Dr. Usha Ramanathan. In 2011, having heard from various parties and considering the concerns and apprehensions about the UID scheme, the Committee deemed the bill unacceptable and suggested a re-consideration of the the UID scheme as well as the draft legislation.&lt;/p&gt;
&lt;p dir="ltr"&gt;The Aadhaar programme has so far been implemented under the Unique Identification Authority of India, a Central Government agency created through an executive order. This programme has been shrouded in controversy over issues of privacy and security resulting in a Public Interest Litigation filed by Judge Puttaswamy in the Supreme Court. While the BJP had criticised the project as well as the draft legislation &amp;nbsp;when it was in opposition, once it came to power and particularly, after it launched various welfare schemes like Digital India and Jan Dhan Yojna, it decided to continue with it and use Aadhaar as the identification technology for these projects. In the last year, there have been orders passed by the Supreme Court which prohibited making Aadhaar mandatory for availing services. One of the questions that the government has had to answer both inside and outside the court on the UID project is the lack of a legislative mandate for a project of this size. About five years later, the new BJP led government has come back with a rehash of the same old draft, and no comments made by the standing committee have been taken into account.&lt;/p&gt;
&lt;p dir="ltr"&gt;The Standing Committee on the old bill had taken great exception to the continued collection of data and issuance of Aadhaar numbers, while the Bill was pending in the Parliament. The report said that the implementation of the provisions of the Bill and continuing to incur expenditure from the exchequer was a circumvention of the prerogative powers of the Parliament. However, the project has continued without abeyance since its inception in 2009. I am listing below some of the issues that the Committee identified with the UID project and draft legislation, none of which have been addressed in current Bill.&lt;/p&gt;
&lt;p dir="ltr"&gt;One of the primary arguments made by proponents of Aadhaar has been that it would be useful in providing services to marginalized sections of the society who currently do not have identification cards and consequently, are not able to receive state sponsored services, benefits and subsidies. The report points that the project would not be able to achieve this as no statistical data on the marginalized sections of the society are being used to by UIDAI to provide coverage to them. The introducer systems which was supposed to provide Aadhaar numbers to those without any form of identification, has been used to enroll only 0.03% of the total number of people registered. Further, the &lt;a href="http://uidai.gov.in/UID_PDF/Committees/Biometrics_Standards_Committee_report.pdf"&gt;Biometrics Standards Committee of UIDAI&lt;/a&gt; has itself acknowledged the issues caused due to a high number of manual laborers in India which would lead to sub-optimal fingerprint scans. A &lt;a href="http://www.4gid.com/De-dup-complexity%20unique%20ID%20context.pdf"&gt;report by 4G Identity Solutions&lt;/a&gt; estimates that while in any population, approximately 5% of the people have unreadable fingerprints, in India it could lead to a failure to enroll up to 15% of the population. In this manner, the project could actually end up excluding more people.&lt;/p&gt;
&lt;p dir="ltr"&gt;The Report also pointed to a lack of cost-benefit analysis done before going ahead with scheme of this scale. It makes a reference to the &lt;a href="http://eprints.lse.ac.uk/684/1/identityreport.pdf"&gt;report&lt;/a&gt; by the London School of Economics on the UK Identity Project which was shelved due to a) huge costs involved in the project, b) the complexity of the exercise and unavailability of reliable, safe and tested technology, c) risks to security and safety of registrants, d) security measures at a scale that will result in substantially higher implementation and operational costs and e) extreme dangers to rights of registrants and public interest. The Committee Report insisted that such global experiences remained relevant to the UID project and need to be considered. However, the new Bill has not been drafted with a view to address any of these issues.&lt;/p&gt;
&lt;p dir="ltr"&gt;The Committee comes down heavily on the irregularities in data collection by the UIDAI. They raise doubts about the ability of the Registrars to effectively verify the registrants and a lack of any security audit mechanisms that could identify issues in enrollment. Pointing to the news reports about irregularities in the process being followed by the Registrars appointed by the UIDAI, the Committee deems the MoUs signed between the UIDAI and the Registrars as toothless. The involvement of private parties has been under question already with many questions being raised over the lack of appropriate safeguards in the contracts with the private contractors.&lt;/p&gt;
&lt;span id="docs-internal-guid-0c1d0148-595b-32fa-49d2-8f6a347a4c00"&gt;Perhaps the most significant observation of the Committee was that any scheme that facilitates creation of such a massive database of personal information of the people of the country and its linkage with other databases should be preceded by a comprehensive data protection law. By stating this, the Committee has acknowledged that in the absence of a privacy law which governs the collection, use and storage of the personal data, the UID project will lead to abuse, surveillance and profiling of individuals. It makes a reference to the Privacy Bill which is still at only the draft stage. The current data protection framework in the Section 43A rules under the Information Technology Act, 2000 are woefully inadequate and far too limited in their scope. While there are some protection built into Chapter VI of the new bill, these are nowhere as comprehensive as the ones articulated in the Privacy Bill. Additionally, these protections are subject to broad exceptions which could significantly dilute their impact.&lt;/span&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/aadhaar-bill-fails-to-incorporate-suggestions-by-the-standing-committee'&gt;https://cis-india.org/internet-governance/blog/aadhaar-bill-fails-to-incorporate-suggestions-by-the-standing-committee&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>UID</dc:subject>
    
    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2016-03-10T15:58:57Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/economic-times-july-23-2017-amber-sinha-aadhar-privacy-is-not-a-unidimensional-concept">
    <title>Aadhar: Privacy is not a unidimensional concept</title>
    <link>https://cis-india.org/internet-governance/blog/economic-times-july-23-2017-amber-sinha-aadhar-privacy-is-not-a-unidimensional-concept</link>
    <description>
        &lt;b&gt;Right to privacy is important not only for our negotiations with the information age but also to counter the transgressions of a welfare state. A robust right to privacy is essential for all Indian citizens to defend their individual autonomy in the face of invasive state actions purportedly for the public good.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article was published in the &lt;a class="external-link" href="http://economictimes.indiatimes.com/news/politics-and-nation/aadhar-privacy-is-not-a-unidimensional-concept/printarticle/59716562.cms"&gt;Economic Times&lt;/a&gt; on July 23, 2017.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The ruling of this nine-judge bench will have far-reaching impact on the extent and scope of rights available to us all. In a disappointing case of judicial evasion by the apex court, it has taken over 600 days since a reference order was passed in August 11, 2015, for this bench to be constituted. Over two days of arguments, the counsels for the petitioners have presented before the court why the right to privacy, despite not finding a mention in the Constitution of India, is a fundamental right essential to a person’s dignity and liberty, and must be read into not one but multiple articles of the Constitution. The government will make its arguments in the coming week.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One must wonder why we are debating the contours of the right to privacy, which 40 years of jurisprudence had lulled us into believing we already had. The answer to that can be found in a series of hearings in the Aadhaar case that began in 2012. Justice KS Puttaswamy, a former Karnataka High Court judge, filed a petition before the Supreme Court, questioning the validity of the Aadhaar project due its lack of legislative basis (since then the Aadhaar Act was passed in 2016) and its transgressions on our fundamental rights. Over time, a number of other petitions also made their way to the apex court, challenging different aspects of the Aadhaar project. Since then, five different interim orders by the Supreme Court have stated that no person should suffer because they do not have an Aadhaar number. Aadhaar, according to the court, could not be made mandatory to avail benefits and services from government schemes. Further, the court has limited the use of Aadhaar to specific schemes: LPG, PDS, MGNREGA, National Social Assistance Programme, the Pradhan Mantri Jan Dhan Yojna and EPFO.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The real spanner in the works in the progress of this case was the stand taken by Mukul Rohatgi, then attorney general of India who, in a hearing before the court in July 2015, stated that there is no constitutionally guaranteed right to privacy. His reliance was on two Supreme Court judgments in MP Sharma v Satish Chandra (1954) and Kharak Singh v State of Uttar Pradesh (1962): both cases, decided by eight- and six-judge benches respectively, denied the existence of a constitutional right to privacy. As the subsequent judgments which upheld the right to privacy were by smaller benches, Rohatgi claimed that MP Sharma and Kharak Singh still prevailed over them, until they were overruled by a larger bench.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The reference to a larger bench has since delayed the entire matter, even as a number of government schemes have made Aadhaar mandatory. This reading of privacy as a unidimensional concept by the courts is, with due respect, erroneous. Privacy, as a concept, includes within its scope, spatial, familial, informational and decisional aspects. We all have a legitimate expectation of privacy in our private spaces, such as our homes, and in our personal relationships. Similarly, we must be able to exercise some control over how personal data, like our financial information, are disseminated. Most importantly, privacy gives us the space to make autonomous choices and decisions without external interference. All these dimensions of privacy must stand as distinct rights. In MP Sharma, the court rejected a certain aspect of the right of privacy by refusing to acknowledge a right against search and seizure. This, in no way prevented the court, even in the form of a smaller bench, from ruling on any other aspects of privacy, including those that are relevant to the Aadhaar case.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The limited referral to this bench means that the court will have to rule on the status of privacy and its possible limitations in isolation, without even going into the details of the Aadhaar case (based on the nature of protection that this bench accords to privacy, the petitioners and defendants in the Aadhaar case will have to argue afresh on whether the project does impede on this most fundamental right). There are no facts of the case to ground the legal principles in, and defining the contours of a right can be a difficult exercise. The court must be wary of how any limits they put on the right may be used in future. Equally, it is important to articulate that any limitations on the right to privacy due to competing interests such as national security and public interest must be imposed only when necessary and always be proportionate.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It will not be enough for the court to merely state that we have a constitutional right to privacy. They would be well advised to cut through the muddle of existing privacy jurisprudence, and unequivocally establish the various facets of the right. Without that, we may not be able to withstand the modern dangers of surveillance, denial of bodily integrity and self-determination through forcible collection of information. The nine judges, in their collective wisdom, must not only ensure that we have a right to privacy, but also clearly articulate a robust reading of this right capable of withstanding the growing interferences with our autonomy.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/economic-times-july-23-2017-amber-sinha-aadhar-privacy-is-not-a-unidimensional-concept'&gt;https://cis-india.org/internet-governance/blog/economic-times-july-23-2017-amber-sinha-aadhar-privacy-is-not-a-unidimensional-concept&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-08-23T01:50:19Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/cis-statement-on-right-to-privacy-judgment">
    <title>CIS Statement on Right to Privacy Judgment</title>
    <link>https://cis-india.org/internet-governance/blog/cis-statement-on-right-to-privacy-judgment</link>
    <description>
        &lt;b&gt;In an emphatic endorsement of the right to privacy, a nine-judge constitutional bench unanimously upheld a fundamental right to privacy. The events leading to this bench began during the hearings in the ongoing Aadhaar case, when in August 2015, Mukul Rohatgi, the then Attorney General, stated that there is no constitutionally guaranteed right to privacy.&lt;/b&gt;
        
&lt;p style="text-align: justify;"&gt;reliance was on two Supreme Court judgments in MP Sharma v Satish Chandra (1954) and Kharak Singh v State of Uttar Pradesh (1962): both cases, decided by eight- and six-judge benches respectively, denied the existence of a constitutional right to privacy. As the subsequent judgments which upheld the right to privacy were by smaller benches, he claimed that MP Sharma and Kharak Singh still prevailed over them, until they were overruled by a larger bench. This landmark judgment was in response to a referral order to clear the confusion over the status of privacy as a right.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;We, at the Centre for Internet and Society (CIS) welcome this judgement and applaud the depth and scope of the Supreme Court’s reasoning. CIS has been producing research on the different aspects of the right to privacy and its implications for the last seven years and had the privilege of serving on the Justice AP Shah Committee and contributing to the Report of the Group of Experts on Privacy.&lt;a name="fr1" href="#fn1"&gt;[1]&lt;/a&gt; We are honoured that some of our research has also been cited by the judgment.&lt;a name="fr2" href="#fn2"&gt;[2] &lt;/a&gt;Such judicial recognition is evidence of the impact sound research can have on policymaking.&lt;/p&gt;
&lt;p style="text-align: justify;" class="normal"&gt;In the course of a 547 page judgment, the bench affirmed the fundamental nature of the right to privacy reading it into the values of dignity and liberty. The judgment is instructive in its reference to scholarly works and jurisprudence not only in India but other legal systems such as USA, South Africa, EU and UK, while recognising a broad right to privacy with various dimensions across spatial, informational and decisional spheres. We note with special appreciation that women’s bodily integrity and citizens’ sexual orientation are among those aspects of privacy that were clearly recognised in the judgment. For researchers studying privacy and its importance, this judgment is of great value as it provides clear reasoning to reject oft-quoted arguments which are used to deny privacy’s significance. The judgement is also cognizant of the implications of the digital age and emphasise the need for a robust data protection framework.&lt;/p&gt;
&lt;p style="text-align: justify;" class="normal"&gt;The right to privacy has been read into into Article 21 (Right to life and liberty), and Part III (Chapter on Fundamental Rights) of the Constitution. This means that any limitation on the right in the form of reasonable restrictions must not only satisfy the tests evolved under Article 21, but where loss of privacy leads to infringement on other rights, such as chilling effects of surveillance on free speech, the tests for constitutionality under those provisions for also be satisfied by the limiting action. This provides a broad protection to citizens’ privacy which may not be easily restricted. We expect that this judgment will have far reaching impacts, not just with respect to the immediate Aadhaar case, but also to in a score of other matters such as protection of sexual choice by decriminalising Section 377 of the Indian Penal Code, oversight of statutory search and seizure provisions such as Section 132 of the Income Tax Act, personal data collection and processing practices by both state and private actors and mass surveillance programmes in the interest of national security.&lt;/p&gt;
&lt;p style="text-align: justify;" class="normal"&gt;As this judgment comes in response to a referral order, the judges were not dealing with any questions of fact to ground the legal principles in. Subsequent judgments which deal with privacy will apply these principles and further evolve the contours of this right on a case-by-case basis. For now, we welcome this judgment and look forward to its consistent application in the future.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;[&lt;a name="fn1" href="#fr1"&gt;1&lt;/a&gt;]. http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf&lt;/p&gt;
&lt;p style="text-align: justify;" class="normal"&gt;[&lt;a name="fn2" href="#fr2"&gt;2&lt;/a&gt;]. CIS was quoted in the judgement on footnote 46, page 33 and 34: &lt;a href="http://supremecourtofindia.nic.in/pdf/LU/ALL%20WP(C)%20No.494%20of%202012%20Right%20to%20Privacy.pdf"&gt;http://supremecourtofindia.nic.in/pdf/LU/ALL%20WP(C)%20No.494%20of%202012%20Right%20to%20Privacy.pdf &lt;/a&gt;The quote is " Illustratively, the Centre for Internet and Society has two interesting articles tracing the origin of privacy within Classical Hindu Law and Islamic Law. See Ashna Ashesh and Bhairav Acharya ,“Locating Constructs of Privacy within Classical Hindu Law”, The Centre for Internet and Society, available at &lt;a href="https://cis-india.org/internet-"&gt;https://cis-india.org/internet-&lt;/a&gt;governance/blog/loading-constructs-of-privacy-within-classical-hindu-law. See also Vidushi Marda and Bhairav Acharya, “Identifying Aspects of Privacy in Islamic Law”, The Centre for Internet and Society, available at &lt;a href="https://cis-india.org/internet-governance/blog/identifying-aspects-of-privacy-in-islamic-law"&gt;https://cis-india.org/internet-governance/blog/identifying-aspects-of-privacy-in-islamic-law&lt;/a&gt; " Further, research commissioned by CIS cited in the judgment includes a reference in page 201 footnote 319, "Bhairav Acharya, “The Four Parts of Privacy in India”, Economic &amp;amp; Political Weekly (2015), Vol. 50 Issue 22, at page 32."&amp;nbsp;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/cis-statement-on-right-to-privacy-judgment'&gt;https://cis-india.org/internet-governance/blog/cis-statement-on-right-to-privacy-judgment&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Featured</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-08-31T18:13:14Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>




</rdf:RDF>
