<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">




    



<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  
  <description>
    
            These are the search results for the query, showing results 201 to 215.
        
  </description>
  
  
  
  
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/new-indian-express-april-26-2018-aadhaar-data-over-89-lakh-mnrega-workers-in-andhra-pradesh-leaked-online"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/pai-wg-labor-and-economy-meeting"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/asia-times-april-20-2018-aayush-rathi-sunil-abraham-what-s-up-with-whatsapp"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/government-giving-free-publicity-worth-40-k-to-twitter-and-facebook"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/raw/indian-express-nishant-shah-april-8-2018-digital-native-delete-facebook"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/news-18-subhajit-sengupta-how-just-355-indians-put-data-of-5-6-lakh-facebook-users-at-risk"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/economic-times-march-30-2018-your-mobile-apps-have-the-permission-to-spy-on-you"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/the-hindu-march-31-2018-saurya-sengupta-if-data-is-new-oil-how-much-an-indian-citizen-lose"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/financial-times-march-28-2018-narendra-modi-personal-app-sparks-india-data-privacy-row"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/business-standard-march-28-2018-sunil-abraham-cambridge-analytica-scandal-how-india-can-save-democracy-from-facebook"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/business-standard-mayank-jain-march-27-2018-uidai-servers-or-third-parties-aadhaar-leaks-are-dangerous-experts"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/economic-times-march-26-2018-nilesh-christopher-security-experts-say-need-to-secure-aadhaar-ecosystem-warn-about-third-party-leaks"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/first-post-march-26-2018-indian-it-firms-not-ready-for-european-unions-proposed-privacy-laws-only-a-few-compliant-with-gdpr"/>
        
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/news/new-indian-express-april-26-2018-aadhaar-data-over-89-lakh-mnrega-workers-in-andhra-pradesh-leaked-online">
    <title>Aadhaar data of over 89 lakh MNREGA workers in Andhra Pradesh leaked online</title>
    <link>https://cis-india.org/internet-governance/news/new-indian-express-april-26-2018-aadhaar-data-over-89-lakh-mnrega-workers-in-andhra-pradesh-leaked-online</link>
    <description>
        &lt;b&gt;Independent security researcher Kodali Srinivas tweeted screenshots of Aadhaar data of 89,38,138 MNREGA workers available on the Andhra Pradesh Benefit Disbursement Portal.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article was &lt;a class="external-link" href="http://www.newindianexpress.com/states/andhra-pradesh/2018/apr/26/aadhaar-data-of-over-89-lakh-mnrega-workers-in-andhra-pradesh-leaked-online-1806717.html"&gt;published in New Indian Express&lt;/a&gt; on April 27, 2018.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Independent security researcher Kodali Srinivas, who exposed the leakage of Aadhaar and other personal data of 1.34 lakh beneficiaries on the State Housing Corporation website, on Thursday tweeted screenshots of Aadhaar data of 89,38,138 MNREGA workers available on the Andhra Pradesh Benefit Disbursement Portal, which is maintained by APOnline, a joint venture between Tata Consultancy Services (TCS) and the State government.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Hours after he blew the whistle, the website administrators began masking the data. In May 2017, Srinivas had co-authored a report for the Centre for Internet and Society, exposing how the Aadhaar data of 13.5 crore card holders was leaked online. The data was then leaked by four government portals, National Social Assistance Programme, National Rural Employment Guarantee Scheme, Chandranna Bima Scheme of the Government of Andhra Pradesh and Daily Online Payment Reports of NREGA of the Government of Andhra Pradesh.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It appears that almost a year later, nothing much has changed. Srinivas told TNIE he had sent a mail to the chief operating officer, APOnline, the Unique Identification Authority of India, the National Critical Information Infrastructure Protection Centre, and CERT-In, the Centre's cyber response wing. When contacted, Balasubramanyam, Joint Secretary (NREGS) told TNIE, "I have seen it. It is Benefit Disbursement Portal... not maintained by us. We have been very careful ever since that massive leak of data last year."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;S Chandramouleeswara Reddy, Executive (Operations), APOnline, declined to comment, saying that he was not the competent authority to speak on the issue. APOnline developed the ICT solution for the MGNREGA scheme, a framework involving the Department of Posts, for the disbursement of entitlements after fingerprint authentication. TCS implements the ICT solution for MGNREGA in the State.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/new-indian-express-april-26-2018-aadhaar-data-over-89-lakh-mnrega-workers-in-andhra-pradesh-leaked-online'&gt;https://cis-india.org/internet-governance/news/new-indian-express-april-26-2018-aadhaar-data-over-89-lakh-mnrega-workers-in-andhra-pradesh-leaked-online&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-05-05T08:43:53Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/pai-wg-labor-and-economy-meeting">
    <title>PAI WG Labor and Economy Meeting</title>
    <link>https://cis-india.org/internet-governance/news/pai-wg-labor-and-economy-meeting</link>
    <description>
        &lt;b&gt;Elonnai Hickok co-chaired the first PAI Labor and Economy WG in NYC on April 25, 2018.&lt;/b&gt;
        &lt;p&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/pai-wg-labor-and-economy"&gt;Agenda&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/pai-wg-labor-and-economy-meeting'&gt;https://cis-india.org/internet-governance/news/pai-wg-labor-and-economy-meeting&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-05-05T09:35:07Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/asia-times-april-20-2018-aayush-rathi-sunil-abraham-what-s-up-with-whatsapp">
    <title>What’s up with WhatsApp?</title>
    <link>https://cis-india.org/internet-governance/blog/asia-times-april-20-2018-aayush-rathi-sunil-abraham-what-s-up-with-whatsapp</link>
    <description>
        &lt;b&gt;In 2016, WhatsApp Inc announced it was rolling out end-to-end encryption, but is the company doing what it claims to be doing?&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Aayush Rathi and Sunil Abraham was published in &lt;a class="external-link" href="http://www.atimes.com/article/whats-up-with-whatsapp/"&gt;Asia Times&lt;/a&gt; on April 20, 2018.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Back in April 2016, when WhatsApp Inc announced it was rolling out end-to-end encryption (E2EE) for its billion-plus strong user base as a default setting, the messaging behemoth signaled to its users it was at the forefront of providing technological solutions to protect privacy.&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;Emphasized in the security white paper explaining the implementation of the technology is the encryption of both forms of communication – one-to-one and group – and of all types of messages shared within such communications – text as well as media.&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;Simply put, all communication taking place over WhatsApp would be decipherable only to the sender and recipient – it would be virtual gibberish even to WhatsApp.&lt;/p&gt;
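As a toy sketch of this property (purely illustrative, and not the Signal Protocol the article discusses): with end-to-end encryption, a relaying server holds only ciphertext, which is meaningless without the key held by the two endpoints. A one-time-pad XOR stands in here for the real cryptography.

```python
import secrets

def xor_bytes(key: bytes, data: bytes) -> bytes:
    # One-time-pad XOR: encryption and decryption are the same operation.
    return bytes(k ^ d for k, d in zip(key, data))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known only to the two endpoints
ciphertext = xor_bytes(key, message)     # all the relaying server ever sees
plaintext = xor_bytes(key, ciphertext)   # XOR is its own inverse
assert plaintext == message
```

Without the key, the intermediary cannot recover the message from the ciphertext alone, which is the sense in which the traffic is "virtual gibberish" to the server.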
&lt;p class="p4" style="text-align: justify; "&gt;This announcement came in the backdrop of &lt;a href="https://www.theguardian.com/us-news/2016/feb/17/apple-ordered-to-hack-iphone-of-san-bernardino-shooter-for-fbi"&gt;Apple locking horns with the FBI&lt;/a&gt; after being asked to provide a backdoor to unlock the San Bernardino mass shooter’s iPhone. This further reinforced WhatsApp Inc’s stand on the ensuing debate between the interplay of privacy and security in the digital age.&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;Kudos to WhatsApp, for there is &lt;a href="http://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/CallForSubmission.aspx"&gt;growing discussion&lt;/a&gt; around how encryption and anonymity are central to enabling secure online communication, which in turn is integral to essential human rights such as freedom of opinion and expression.&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;WhatsApp may have taken encryption to the masses, but here we outline why WhatsApp’s provisioning of privacy and security measures needs a more granular analysis – is the company doing what it claims to be doing? Security issues with WhatsApp’s messaging protocol certainly are not new.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Man-in-the-middle attacks&lt;/h3&gt;
&lt;p class="p4" style="text-align: justify; "&gt;A &lt;a href="https://eprint.iacr.org/2017/713.pdf"&gt;study&lt;/a&gt; published by a group of German researchers from Ruhr University highlighted issues with WhatsApp’s implementation of its E2EE protocol to group communications. Another &lt;a href="https://courses.csail.mit.edu/6.857/2016/files/36.pdf"&gt;paper&lt;/a&gt; points out how WhatsApp’s session establishment strategy itself could be problematic and potentially be targeted for what are called man-in-the-middle (MITM) attacks.&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;An MITM attack takes the form of a malicious actor, as the term suggests, placing itself between the communicating parties to eavesdrop or impersonate. The Electronic Frontier Foundation also &lt;a href="https://www.eff.org/deeplinks/2016/10/where-whatsapp-went-wrong-effs-four-biggest-security-concerns"&gt;highlighted&lt;/a&gt; other security vulnerabilities, or trade-offs, depending upon ideological inclinations, with respect to WhatsApp allowing for storage of unencrypted backups, issues with WhatsApp’s web client and also with its approach to cryptographic key change notifications.&lt;/p&gt;
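The MITM idea can be made concrete with a toy unauthenticated Diffie-Hellman exchange. The parameters and party names (Alice, Bob, Mallory) below are illustrative assumptions, not WhatsApp's actual protocol: the point is only that when neither party verifies whose public value they received, an interceptor can substitute her own.

```python
import secrets

# Toy parameters: 2**127 - 1 is a known (Mersenne) prime. Real deployments
# use vetted groups or elliptic curves, and authenticate the exchange.
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 1  # Alice's secret
b = secrets.randbelow(p - 2) + 1  # Bob's secret
m = secrets.randbelow(p - 2) + 1  # Mallory's secret

A = pow(g, a, p)  # Alice sends this... but Mallory intercepts it
B = pow(g, b, p)  # Bob sends this... intercepted as well
M = pow(g, m, p)  # Mallory forwards her own value to both parties instead

key_alice = pow(M, a, p)  # Alice believes she shares this key with Bob
key_bob = pow(M, b, p)    # Bob believes he shares this key with Alice

# Mallory can derive both keys, so she can decrypt and re-encrypt
# traffic in each direction without either party noticing.
assert key_alice == pow(A, m, p)
assert key_bob == pow(B, m, p)
```

This is why key change notifications and out-of-band key verification, mentioned in the EFF critique above, matter: they are the defence against exactly this substitution.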
&lt;p class="p4" style="text-align: justify; "&gt;Much has been written questioning WhatsApp’s shifting approach to ensuring privacy too. Quoting straight from &lt;a href="https://www.whatsapp.com/legal/#privacy-policy-affiliated-companies"&gt;WhatsApp’s Privacy Policy:&lt;/a&gt; “We joined the Facebook family of companies in 2014. As part of the Facebook family of companies, WhatsApp receives information from, and shares information with, this family of companies.” Speaking of Facebook …&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;Culling out larger issues with WhatsApp’s privacy policies is not the intention here. What we specifically seek to explore is right at the nexus of WhatsApp’s security and privacy provisioning clashing with its marketing strategy: the storage of data on WhatsApp’s servers (Facebook’s, rather), or ‘blobs,’ as they are referred to in the technical paper. In WhatsApp’s words: “Once your messages (including your chats, photos, videos, voice messages, files and share location information) are delivered, they are deleted from our servers. Your messages are stored on your own device.”&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;In fact, this non-storage of data on their ‘blobs’ is emphasized at several other points on the official website. Let us call this the deletion-upon-delivery model.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;A simple experiment&lt;/h3&gt;
&lt;p class="p4" style="text-align: justify; "&gt;While drawing up a rigorous proof of concept is made near-impossible by WhatsApp being a closed-source messaging protocol, a simple experiment is enough to raise some very pertinent questions about WhatsApp’s outlined deletion-upon-delivery model. It should, however, be mentioned that the Signal Protocol developed by Open Whisper Systems and pivotal in WhatsApp’s rolling out of E2EE is &lt;a href="https://github.com/signalapp"&gt;open source&lt;/a&gt;. Here is how the experiment proceeds:&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;&lt;i&gt;Rick sends Morty an attachment.&lt;/i&gt;&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;&lt;i&gt;Morty then switches off the data on her mobile device.&lt;/i&gt;&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;&lt;i&gt;Rick downloads the attachment, an image.&lt;/i&gt;&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;&lt;i&gt;Subsequently, Rick deletes the image from his mobile device’s internal storage.&lt;/i&gt;&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;&lt;i&gt;Rick then logs into WhatsApp’s web client on his browser. (Prior to this experiment, both Rick and Morty had logged out from all instances of the web client.)&lt;/i&gt;&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;&lt;i&gt;Upon a fresh log-in to the web client and opening the chat with Morty, the option to download the image is available to Rick.&lt;/i&gt;&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;The experiment concludes with bewilderment at WhatsApp’s claim of deletion-upon-delivery as outlined earlier. The only place from which Rick could have downloaded the image is Facebook’s ‘blobs.’ The attachment could not have been retrieved from Morty’s mobile device, which had no way of sending data, nor from Rick’s mobile device, as it no longer existed in the device’s storage.&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;As per the Privacy Policy, the data is stored on the ‘blobs’ for a period of 30 days after transmission of a message only when it can’t be delivered to the recipient. Upon delivery, the deletion-upon-delivery model is supposed to kick in.&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;Another straightforward experiment that leads to a similar conclusion is observing the difference in time taken for a large attachment to be forwarded as opposed to when the same large attachment is uploaded afresh. Forwarding is palpably quicker than uploading afresh: if attachments were not stored on the ‘blob,’ both operations should take the same amount of time.&lt;/p&gt;
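One hypothetical mechanism consistent with this timing difference is content-addressed (deduplicated) server-side storage: a forward then only needs to transmit a short reference, not the payload itself. The `BlobStore` class and its methods below are invented purely for illustration and are not WhatsApp's actual design.

```python
import hashlib

class BlobStore:
    """Hypothetical content-addressed attachment store (illustrative only)."""

    def __init__(self):
        self._blobs = {}

    def upload(self, data: bytes) -> str:
        # Fresh upload: the full payload crosses the network, cost O(len(data)).
        digest = hashlib.sha256(data).hexdigest()
        self._blobs[digest] = data
        return digest

    def forward(self, digest: str) -> bytes:
        # Forward: only the short digest is sent; the payload is already
        # sitting server-side, so the transfer is near-instant.
        return self._blobs[digest]

store = BlobStore()
attachment = b"large attachment bytes" * 1000
ref = store.upload(attachment)             # slow path: sends every byte
assert store.forward(ref) == attachment    # fast path: sends only the digest
```

A scheme like this would explain the speed gap, but it also requires the attachment bytes to remain on the server after delivery, which is the tension with the deletion-upon-delivery claim.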
&lt;p class="p4" style="text-align: justify; "&gt;The plot thickens. WhatsApp’s Privacy Policy goes on to state: “To improve performance and deliver media messages more efficiently, such as when many people are sharing a popular photo or video, we may retain that content on our servers for a longer period of time.” The technical paper offers no help in understanding how WhatsApp systems assess frequently shared encrypted media messages without decrypting them at its end.&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;A possible explanation could be the usage of metadata by WhatsApp, which it discloses in its Privacy Policy while simultaneously being sufficiently vague about the specifics of it. That WhatsApp may be capable of reading encrypted communication through the inclusion of a backdoor bodes well for law enforcement, but not so much for unsuspecting users.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;The weakest link in the chain&lt;/h3&gt;
&lt;p class="p4" style="text-align: justify; "&gt;Concerns about backdoors in WhatsApp’s product have led the French government to start developing their &lt;a href="https://www.reuters.com/article/us-france-privacy/france-builds-whatsapp-rival-due-to-surveillance-risk-idUSKBN1HN258"&gt;own encrypted messaging service&lt;/a&gt;. This will be built using Matrix – an open protocol designed for real-time communication. Indeed, the Privacy Policy lays out that the company “may collect, use, preserve, and share your information if we have a good-faith belief that it is reasonably necessary to respond pursuant to applicable law or regulations, to legal process, or to government requests.”&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;The Signal Protocol is the undisputed gold standard of E2EE implementations. It is the integration with the surrounding functionality that WhatsApp offers which leads to vulnerabilities. After all, a chain is only as strong as its weakest link. Assuming that the attachments stored on the ‘blobs’ are in encrypted form, indecipherable to all but the intended recipients, this does not pose a privacy risk for the users from a technological point of view.&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;However, it is easy to lose sight of the fact that the Privacy Policy is a legally binding document, and it specifically states that messages are not stored on the ‘blobs’ as a matter of routine. As a side note, WhatsApp’s Privacy Policy and Terms of Service are refreshing in their readability and lack of legalese.&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;As we were putting the final touches to this piece, &lt;a href="https://wabetainfo.com/whatsapp-allows-to-redownload-deleted-media/#more-2781"&gt;news from &lt;i&gt;WABetaInfo&lt;/i&gt;&lt;/a&gt;, a well-reputed source of information on WhatsApp features, broke that newer updates of WhatsApp for Android permit users to re-download media deleted up to three months back. WhatsApp cannot possibly achieve this without storing the media in the ‘blobs,’ or in other words, in violation of its Privacy Policy.&lt;/p&gt;
&lt;p class="p4" style="text-align: justify; "&gt;As the aphorism goes: “When the service is free, you are the product.”&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/asia-times-april-20-2018-aayush-rathi-sunil-abraham-what-s-up-with-whatsapp'&gt;https://cis-india.org/internet-governance/blog/asia-times-april-20-2018-aayush-rathi-sunil-abraham-what-s-up-with-whatsapp&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Aayush Rathi and Sunil Abraham</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Social Media</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Featured</dc:subject>
    
    
        <dc:subject>WhatsApp</dc:subject>
    
    
        <dc:subject>Homepage</dc:subject>
    

   <dc:date>2018-04-23T16:45:51Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi">
    <title>Artificial Intelligence in Governance: A Report of the Roundtable held in New Delhi</title>
    <link>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi</link>
    <description>
        &lt;b&gt;This report provides an overview of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance, held at the India Islamic Cultural Centre in New Delhi on March 16, 2018. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context. The event was attended by participants from academia, civil society, the legal sector, the finance sector, and the government.&lt;/b&gt;
        &lt;p&gt;&lt;span&gt;Event Report: &lt;/span&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/files/ai-in-governance"&gt;Download&lt;/a&gt;&lt;span&gt; (PDF)&lt;/span&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;This report provides a summary of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance (hereinafter referred to as ‘the Roundtable’). The Roundtable took place at the India Islamic Cultural Centre in New Delhi on March 16, 2018 and included participation from academia, civil society, law, finance, and government. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Roundtable began with a presentation by Amber Sinha (Centre for Internet and Society - CIS) providing an overview of CIS’s research objectives and findings thus far. During this presentation, he defined both AI and the scope of CIS’s research, outlining the areas of law enforcement, defence, education, judicial decision making, and the discharging of administrative functions as the main areas of concern for the study. The presentation then outlined the key AI deployments and implementations that have been identified by the research in each of these areas. Lastly, the presentation raised some of the ethical and legal concerns related to this phenomenon.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The presentation was followed by the Roundtable discussion, which covered the usages, challenges, ethical considerations and implications of AI in the sector. This report has identified a number of key themes evident throughout these discussions. These themes include: (1) the meaning and scope of AI, (2) AI’s sectoral applications, (3) human involvement with automated decision making, (4) social and power relations surrounding AI, (5) regulatory approaches to AI, and (6) challenges to adopting AI. These themes in relation to the Roundtable are explored further below.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Meaning and Scope of AI&lt;/span&gt;&lt;/h3&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;One of the first tasks recommended by the group of participants was to define the meaning and scope of AI and the way those terms are used and adopted today. These concerns included the need to establish a distinction between the use of algorithms, machine learning, automation and artificial intelligence. Several participants believed that establishing consensus around these terms was essential before proceeding towards a stage of developing regulatory frameworks around them.&lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;The general fact agreed to was that AI as we understand it does not necessarily extend to complete independence in terms of automated decision making but it refers instead to the varying levels of machine learning (ML), and the automation of certain processes that has already been achieved. Several concerns that emerged during the course of the discussion centred around the question of autonomy and transparency in the process of ML and algorithmic processing. Stakeholders recommended that over and above the debates of humans in the loop [1] on the loop [2] and out of the loop, [3] there were several other gaps with respect to AI and its usage in the industry today which also need to be considered before building a roadmap for future usage. Key issues like information asymmetries, communication lags, a lack of transparency, the increased mystification of the coding process and the centralization of power all needed to be examined and analysed under the rubric of developing regulatory frameworks.&lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: The group brought out the need for standardization of terminology as well as the establishment of globally replicable standards surrounding the usage, control and proliferation of AI. The discussion also brought up the problems with universal applicability of norms. One of the participants brought up an issue regarding the lack of normative frameworks around the usage and proliferation of AI. Another participant responded to the concern by alluding to the Asilomar AI principles.[4] The Asilomar AI principles are a set of 23 principles aimed at directing and shaping AI research in the future. The discussion brought out further issues regarding the enforceability as well universal applicability of the principles and their global relevance as well. Participants recommended the development of a shorter, more universally applicable regulatory framework that could address various contextual limitations as well.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;AI Sectoral Applications&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span&gt;Participants mentioned a number of both current and potential applications of AI technologies, referencing the defence sector, the financial sector, and the agriculture sector. There are several developments taking place on the Indian military front with the Committee on AI and National Security being established by the Ministry of Defence. Through the course of the discussion it was also stated that the Indian Armed Forces were very interested in the possibilities of using AI for their own strategic and tactical purposes. From a technological standpoint, however, there has been limited progress in India in researching and developing AI. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;While India does deploy some Unmanned Aerial Vehicles (UAVs), they are mostly bought from Israel, and often are not autonomous. It was also pointed out that contrary to reportage in the media, the defence establishment in India is extremely cautious about the adoption of autonomous weapons systems, and that the autonomous technology being rolled out by the CAIR is not yet considered trustworthy enough for deployment.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Discussions further revealed that the few technologies that have a relative degree of autonomy are primarily loitering munitions and are used to target radar installations for reconnaissance purposes. One participant mentioned that while most militaries are interested in deploying AI, it is primarily from an Intelligence, Surveillance and Reconnaissance (ISR) perspective. The only exception to this generalization is China, where the military ethos and command structure would work better with increased reliance on independent AI systems. One major AI system rolled out by the US is Project Maven, which is primarily an ISR system. The aim of using these systems is to improve decision making and enhance data analysis, particularly since battlefields generate a lot of data that isn’t used anywhere.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Another sector discussed was the securities market, where algorithms were used from an analytical and data collection perspective. A participant referred to the fact that machine learning was being used for processes like credit and trade scoring -- all with humans on the loop. The participant further suggested that while trade scoring was increasingly automated, the overall predictive nature of such technologies remained within a self-limiting capacity wherein statistical models, collected data and pattern analysis were used to predict future trends. The participant questioned whether these algorithms could be considered AI in the truest sense of the term, since they primarily performed statistical functions and data analysis.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One participant also recommended the application of AI to sectors like agriculture with the intention of gradually acclimatizing users to the technology itself. Respondents also stated that while AI technologies were being used in the agricultural space, it was primarily from the standpoint of data collection and analysis as opposed to predictive methods. It was mentioned that a challenge to the broad adoption of AI in this sector is that the core problems of adopting AI as a methodology – namely information asymmetries, excessive data collection, limited control/centralization and the obfuscatory nature of code – would not be addressed. Lastly, participants also suggested that within the Indian framework not much was being done aside from addressing farmers’ queries and analysing the data from those concerns.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: The discussion drew attention to the various sectors where AI was currently being used -- such as the military space, agricultural development and the securities market -- as well as potential spaces of application -- such as healthcare and manual scavenging. The key challenges that emerged were information asymmetries with respect to the usage of these technologies as well as limited capacity in terms of technological advancement.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Human Involvement with Automated Decision Making&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Large parts of the discussions throughout the Roundtable were preoccupied with automated decision making and, specifically, the involvement of humans (human on and in the loop) or lack thereof (human out of the loop) in this process. These discussions often considered AI’s prescriptive and descriptive uses.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Participants expressed that human involvement was not needed when AI was being used for descriptive uses, such as determining relationships between various variables in large data sets. Many agreed to the superior ability of ML and similar AI technologies in describing large and unorganized datasets. It was the prescriptive uses of AI where participants saw the need for human involvement, with many questioning the technology making more important decisions by itself.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The need for human involvement in automated decision making was further justified by references to various instances of algorithmic bias in the American context. One participant, for example, brought up the use of algorithmic decision making by a school board in the United States for human resource practices (hirings, firing, etc.) based on the standardized test scores of students. In this instance, such practices resulted in the termination of teachers primarily from low income neighbourhoods.[5] The main challenge participants identified in regards to human on the loop automated decision making is the issue of capacity, as significant training would have to be achieved for sectors to have employees actively involved in the automated decision making workflow.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;An example in the context of the healthcare field was brought up by one participant arguing for human in the loop in regards to prescriptive scenarios. The participant suggested that AI technology, when given x-ray or MRI data for example, should only be limited to pointing out the correlations of diseases with patients’ scans/x-rays. Analysis of such correlations should be reserved for the medical expertise of doctors who would then determine if any instances of causality can be identified from this data and if it’s appropriate for diagnosing patients.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;It was emphasized that, despite a preference for human on/in the loop in regards to automated decision making, there is a need to be cognisant of techno-solutionism due to the human tendency of over reliance on technology when making decisions. A need for command and control structures and protocols was emphasized for various governance sectors in order to avoid potentially disastrous results through a checks and balances system. It was noted that the defense sector has already developed such protocols, having established a chain of command due to its long history of algorithmic decision making (e.g. the Aegis Combat System being used by the US Navy in the 1980s).&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One key reason why militaries prefer human in and on the loop systems as opposed to out of the loop systems is because of the protocol associated with human action on the battlefield. International Humanitarian Law has clear indicators of what constitutes a war crime and who is to be held responsible in the scenario but developing such a framework with AI systems would be challenging as it would be difficult to determine which party ought to be held accountable in the case of a transgression or a mistake.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: It was reiterated by many participants that neither AI technology or India’s regulatory framework is at a point where AI can be trusted to make significant decisions alone -- especially when such decisions are evaluating humans directly. It was recommended that human out of the loop decision making should be reserved for descriptive practices whereas human on and in the loop decision making should be used for prescriptive practices. Lastly, it was also suggested that appropriate protocols be put in place to direct those involved in the automated decision making workflow. Particularly when the process involves judgements and complex decision making in sectors such as jurisprudence and the military.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;The Social and Power Relations Surrounding AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some participants emphasized the need to contextualize discussions of AI and governance within larger themes of poverty, global capital and power/social relations. Their concerns were that the use of AI technologies would only create and reinforce existing power structures and should instead be utilized towards ameliorating such issues. Manual scavenging, for example, was identified as an area where AI could be used to good effect if coupled with larger socio-political policy changes. There are several hierarchies that could potentially be reinforced through this process and all these failings needed to be examined thoroughly before such a system was adopted and incorporated within the real world.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Furthermore the discussion also revealed that the objectivity attributed to AI and ML tends to gloss over the fact that there are nonetheless implicit biases that exist in the minds of the creators that might work themselves into the code. Fears regarding technology recreating a more exclusionary system were not entirely unfounded as participants pointed out the fact that the knowledge base of the user would determine whether technology was used as a tool of centralization or democratization.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One participant also questioned the concept of governance itself, contrasting the Indian government’s usage of the term in the 1950s (as it appears in the Directive Principle) with that of the World Bank in the 1990s.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some participants emphasized the need to contextualize discussions of AI and governance within larger themes of poverty, global capital and power/social relations. Their concerns were that the use of AI technologies would only create and reinforce existing power structures and should instead be utilized towards ameliorating such issues. Manual scavenging, for example, was identified as an area where AI could be used to good effect if coupled with larger socio-political policy changes. There are several hierarchies that could potentially be reinforced through this process and all these failings needed to be examined thoroughly before such a system was adopted and incorporated within the real world.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Furthermore the discussion also revealed that the objectivity attributed to AI and ML tends to gloss over the fact that there are nonetheless implicit biases that exist in the minds of the creators that might work themselves into the code. Fears regarding technology recreating a more exclusionary system were not entirely unfounded as participants pointed out the fact that the knowledge base of the user would determine whether technology was used as a tool of centralization or democratization. &lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One participant also questioned the concept of governance itself, contrasting the Indian government’s usage of the term in the 1950s (as it appears in the Directive Principle) with that of the World Bank in the 1990s. &lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: Discussions of the implementation and deployment of AI within the governance landscape should attempt to take into consideration larger power relations and concepts of equity.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Regulatory Approaches to AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Many recognized the need for AI-specific regulations across Indian sectors, including governance. These regulations, participants stated, should draw from notions of accountability, algorithmic transparency and efficiency. Furthermore, it was also stated that such regulations should consider the variations across the different legs of the governance sector, especially in regards to defence. One participant, pointing to the larger trends towards automation, recommended the establishment of certain fundamental guidelines aimed at directing the applicability of AI in general. The participant drew attention to the need for a robust evaluation system for various sectors (the criminal justice system, the securities market, etc.) as a way of providing checks on algorithmic biases. Another emphasized for the need of regulations for better quality data as to ensure machine readability and processiblity for various AI systems.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Another key point that emerged was the importance of examining how specific algorithms performed processes like identification or detection. A participant recommended the need to examine the ways in which machines identify humans and what categories/biases could infiltrate machine-judgement. They reiterated that if a new element was introduced in the system, the pre-existing variables would be impacted as well. The participant further recommended that it would be useful to look at these systems in terms of the couplings that get created in order to determine what kinds of relations are fostered within that system.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The roundtable saw some debate regarding the most appropriate approach to developing such regulations. Some participants argued for a harms-based approach, particularly in regards to determining if regulations are needed all together for specific sectors (as opposed to guidelines, best practices, etc.). The need to be cognisant of both individual and structural harms was emphasized, mindful of the possibility of algorithmic biases affecting traditionally marginalized groups.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Others only saw value in a harms based approach insomuch that it could help outline the appropriate penalties in an event of regulations being violated, arguing instead for a rights-based approach as it enabled greater room for technological changes. An approach that kept in mind emerging AI technologies was reiterated by a number of participants as being crucial to any regulatory framework. The need for a regulatory space that allowed for technological experimentation without the fear of constitutional violation was also communicated.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: The need for a AI-specific regulatory framework cognisant of differentiations across sectors in India was emphasized. There is some debate about the most appropriate approach for such a framework, a harms-based approach being identified by many as providing the best perspective on regulatory need and penalties. Some identified the rights-based approach as providing the most flexibility for an rapidly evolving technological landscape.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Challenges to Adopting AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Out of all the concerns regarding the adoption of algorithms, ML and AI, the two key points of resistance that emerged, centred around issues of accountability and transparency. Participants suggested that within an AI system, predictability would be a key concern, and in the absence of predictable outcomes, establishing redressal mechanisms would pose key challenges as well.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p id="_mcePaste"&gt;A discussion was also initiated regarding the problems involved in attributing responsibility within the AI chain as well as the need to demystify the process of using AI in daily life. While reiterating the current landscape, participants spoke about how the usage of AI is currently limited to the automation of certain tasks and processes in certain sectors where algorithmic processing is primarily used as a tool of data collection and analysis as opposed to an independent decision making tool.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste"&gt;One of the suggestions and thought points that emerged during the discussion was whether a gradual adoption of AI on a sectoral basis might be more beneficial as it would provide breathing room in the middle to test the system and establish trust between the developers, providers, and consumers. This prompted a debate about the controllers and the consumers of AI and how the gap between the two would need to be negotiated. The debate also brought up larger concerns regarding the mystification of AI as a process itself and the complications of translating the code into communicable points of intervention.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste"&gt;Another major issue that emerged was the question of attribution of responsibility in the case of mistakes. In the legal process as it currently exists, human imperfections notwithstanding, it would be possible to attribute the blame for decisions taken to certain actants undertaking the action. Similarly in the defence sector, it would be possible to trace the chain of command and identify key points of failure, but in the case of AI based judgements, it would be difficult to place responsibility or blame. This observation led to a debate regarding accountability in the AI chain. It was inconclusive whether the error should be attributed to the developer, the distributor or the consumer.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste" style="text-align: justify; "&gt;A suggestion that was offered in order to counter the information asymmetry as well as reduce the mystification of computational method was to make the algorithm and its processes transparent. This sparked a debate, however, as participants stated that while such a state of transparency ought to be sought after and aspired towards, it would be accompanied by certain threats to the system. A key challenge that was pointed out was the fact that if the algorithm was made transparent, and its details were shared, there would be several ways to manipulate it, translate it and misuse it.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste" style="text-align: justify; "&gt;Another question that emerged was the distribution of AI technologies and the centralization of the proliferation process particularly in terms of service provision. One participant suggested that given the limited nature of research being undertaken and the paucity of resources, a limited number of companies would end up holding the best tech, the best resources and the best people. They further suggested that these technologies might end up being rolled out as a service on a contractual basis. In which case it would be important to track how the service was being controlled and delivered. Models of transference would become central points of negotiation with alternations between procurement based, lease based, and ownership based models of service delivery. Participants suggested that this was going to be a key factor in determining how to approach these issues from a legal and policy standpoint.&lt;/p&gt;
&lt;div&gt;&lt;/div&gt;
&lt;p style="text-align: justify; "&gt;A discussion was also initiated regarding the problems involved in attributing responsibility within the AI chain as well as the need to demystify the process of using AI in daily life. While reiterating the current landscape, participants spoke about how the usage of AI is currently limited to the automation of certain tasks and processes in certain sectors where algorithmic processing is primarily used as a tool of data collection and analysis as opposed to an independent decision making tool.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One of the suggestions and thought points that emerged during the discussion was whether a gradual adoption of AI on a sectoral basis might be more beneficial as it would provide breathing room in the middle to test the system and establish trust between the developers, providers, and consumers. This prompted a debate about the controllers and the consumers of AI and how the gap between the two would need to be negotiated. The debate also brought up larger concerns regarding the mystification of AI as a process itself and the complications of translating the code into communicable points of intervention.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another major issue that emerged was the question of attribution of responsibility in the case of mistakes. In the legal process as it currently exists, human imperfections notwithstanding, it would be possible to attribute the blame for decisions taken to certain actants undertaking the action. Similarly in the defence sector, it would be possible to trace the chain of command and identify key points of failure, but in the case of AI based judgements, it would be difficult to place responsibility or blame. This observation led to a debate regarding accountability in the AI chain. It was inconclusive whether the error should be attributed to the developer, the distributor or the consumer.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A suggestion that was offered in order to counter the information asymmetry as well as reduce the mystification of computational method was to make the algorithm and its processes transparent. This sparked a debate, however, as participants stated that while such a state of transparency ought to be sought after and aspired towards, it would be accompanied by certain threats to the system. A key challenge that was pointed out was the fact that if the algorithm was made transparent, and its details were shared, there would be several ways to manipulate it, translate it and misuse it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another question that emerged was the distribution of AI technologies and the centralization of the proliferation process particularly in terms of service provision. One participant suggested that given the limited nature of research being undertaken and the paucity of resources, a limited number of companies would end up holding the best tech, the best resources and the best people. They further suggested that these technologies might end up being rolled out as a service on a contractual basis. In which case it would be important to track how the service was being controlled and delivered. Models of transference would become central points of negotiation with alternations between procurement based, lease based, and ownership based models of service delivery. Participants suggested that this was going to be a key factor in determining how to approach these issues from a legal and policy standpoint.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Takeaway Point: The two key points of resistance that emerged during the course of discussion were accountability and transparency. Participants pointed out the various challenges involved in attributing blame within the AI chain and they also spoke about the complexities of opening up AI code, thereby leaving it vulnerable to manipulation. Certain other challenges that were briefly touched upon were the information asymmetry, excessive data collection, centralization of power in the hands of the controllers and complicated service distribution models.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Conclusion&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The Roundtable provided some insight into larger debates regarding the deployment and applications of AI in the governance sector of India. The need for a regulatory framework as well as globally replicable standards surrounding AI was emphasized, particularly one mindful of the particular needs of differing fields of the governance sector (especially defence). Furthermore, a need for human on/in the loop practices with regards to automated decision making was highlighted for prescriptive instances, particularly when such decisions are responsible for directly evaluating humans. Contextualising AI within its sociopolitical parameters was another key recommendation as it would help filter out the biases that might work themselves into the code and affect the performance of the algorithm. Further, it is necessary to see the involvement and influence of the private sector in the deployment of AI for governance, it often translating into the delivery of technological services from private actors to public bodies towards discharge of public functions. This has clear implications for requirements of transparency  and procedural fairness even in private sector delivery of these services. Defining the meaning and scope of AI while working to demystify algorithms themselves would serve to strengthen regulatory frameworks as well as make AI more accessible for the user / consumer.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;[1]. Automated decision making model where final decisions are made by a human operator&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[2]. Automated decision making model where decisions can be made without human involvement but a human can override the system.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[3]. A completely autonomous decision making model requiring no human involvement&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[4]. https://futureoflife.org/ai-principles/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[5]. The participant was drawing this example from Cathy O’Neil’s Weapons of Math Destruction, (Penguin,2016), at 4-13.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi'&gt;https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Saman Goudarzi and Natallia Khaniejo</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-05-03T15:49:40Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/government-giving-free-publicity-worth-40-k-to-twitter-and-facebook">
    <title>Government gives free publicity worth 40k to Twitter and Facebook </title>
    <link>https://cis-india.org/internet-governance/blog/government-giving-free-publicity-worth-40-k-to-twitter-and-facebook</link>
    <description>
&lt;b&gt;We conducted a two-week survey of newspapers for links between government advertisements and social media giants. As citizens, we should be worried about the close nexus between the Indian government and digital behemoths such as Facebook, Google and Twitter. It became apparent to us, after a two-week print media analysis, that our Government has been providing free publicity worth Rs 40,000 to these entities. There are multiple issues with this, as this article attempts to point out.&lt;/b&gt;
        
&lt;p style="text-align: justify;"&gt;&lt;img src="https://cis-india.org/home-images/TotalAdvertisementExpenditure.jpg" alt="null" class="image-inline" title="Total Advertisement Expenditure" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;We analyzed 5 English language newspapers daily for 2 weeks from March 12&lt;sup&gt;th&lt;/sup&gt; to 26&lt;sup&gt;th&lt;/sup&gt;, one week of the newspapers in Lucknow and the second week in Bangalore. Facebook, Twitter, Instagram and Alphabet backed services such as Youtube and Google Plus were part of our survey. Of a total of 33 advertisements (14 in Lucknow+19 in Bangalore), Twitter stands out as the most prominent advertising platform used by government agencies with 30 ads but Facebook at 29 was more expensive. In order to ascertain the rates of publicity, current advertisement rates for Times of India as our purpose was to solely give a rough estimation of how much the government is spending.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Advertising of this nature is not merely an inherent problem of favoring some social media companies over others but also symptomatic of a bigger problem, the lack of our native e-governance mechanisms which cause the Government to rely and promote others. Where we do have guidelines they are not being followed. By outsourcing their e-governance platforms to Twitter such as TwitterSeva, a feature created by the Twitter India team to help citizens connect better with government services, there is less of an impetus to construct better &lt;a class="external-link" href="https://factordaily.com/twitter-helping-india-reboot-public-services-publicly/"&gt;websites of their own&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;If this is so because we currently do not have the capacity to build them ourselves then it is imperative that this changes. We should either be executing government functions on digital infrastructure owned by them or on open and interoperable systems. If anything, the surveyed social media platforms can be used to enhance pre-existing facilities. However, currently the converse is true with these platforms overshadowing the presence of e-governance websites. Officials have started responding to complaints on Twitter, diluting the significance of such complaint mechanisms on their respective department’s portal. Often enough such features are not available on the relevant government website. This sets a dangerous precedent for a citizen management system as the records of such interactions are then in the hands of these companies who may not exist in the future. As a result, they can control the access to such records or worse tamper with them. Posterity and reliability of such data can be ensured only if they are stored within the Government’s reach or if they are open and public with a first copy stored on Government records which ensures transparency as well. Data portability is an important facet to this issue as well as being a right consumers should possess. It provides for support of many devices, transition to alternative technologies and lastly, makes sure that all the data like other public records will be available upon request through the Right to Information procedure. The last is vital to uphold the spirit of transparency envisioned through the RTI process since interactions of government with citizens are then under its ambit and available for disclosure for whomsoever concerned.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Secondly, such practices by the Government are enhancing the monopoly of the companies in the market effectively discouraging competition and eventually, innovation. While a certain elite strata of the population might opt for Twitter or Facebook as their mode of conveying grievance, this may not hold true for the rest of the online India population.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Picking players in a free market is in violation of technology and vendor neutrality, a practice essential in e-governance to provide a level playing field for all and competing technologies. Projecting only a few platforms as de facto mediums of communication with the government inhibits the freedom of choice of citizens to air their grievances through a vendor or technology they are comfortable with. At the same time it makes the Government a mouthpiece for such companies who are gaining free publicity and consolidating their popularity. Government apps such as the SwachBharat one which is an e-governance platform do not offer much more in terms of functionality but either reflect the website or are a less mature version of the same. This leads to the problem of fracturing with many avenues of complaining such as the website, app, Twitter etc. Consequently, the priority of the people dealing with the complaints in terms of platform of response is unsure. Will I be responded to sooner if I tweet a complaint as opposed to putting it up on the app? Having an interoperable system can solve this where the Government can have a dashboard of their various complaints and responses are then made out evenly. Twitter itself could implement this by having complaints from Facebook for example and then the Twitter Seva would be an equal platform as opposed to the current issue where only they are favored.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Recent events have illustrated how detrimental the storage of data by these giants can be in terms of privacy. Data security concerns are also a consequence of such leaks. Not only is this a long overdue call for a better data protection law but at the same time also for the Government to realize that these platforms cannot be trusted. The hiring of Cambridge Analytica to influence voters in the US elections, based on their Facebook profiles and ancillary data, effectively put the governance of the country on sale by exploiting these privacy and security issues. By basing e-governance on their backbone, India is not far from inviting trouble as well. It is unnecessary and dangerous to have a go-between for matters that pertain between an individual and state.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;As this article was being written, it was confirmed by the Election Commission that they are partnering with Facebook for the Karnataka Assemby Elections to promote activities such as encourage enrollment of Voter ID and voter participation. Initiatives like these tying the government even closer to these companies are of concern and cementing the latter’s stronghold.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;em&gt;Note: Our survey data and results are attached to this post. All research was collected by Shradha Nigam, a Vth year student at NLSIU, Bangalore.&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 style="text-align: justify;"&gt;Survey Data and Results&lt;/h3&gt;
&lt;p style="text-align: justify;"&gt;This report is based on a survey of government advertisements in English language newspapers in relation to their use of social media platforms and dedicated websites (“&lt;strong&gt;Survey&lt;/strong&gt;”). For the purpose of this report, the ambit of the social media platforms has been limited to the use of Facebook, Twitter, YouTube, Google Plus and Instagram. The report was prepared by Shradha Nigam, a student from National Law School of India University, Bangalore. &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/cis-report-on-social-media"&gt;Read the full report here&lt;/a&gt;.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/government-giving-free-publicity-worth-40-k-to-twitter-and-facebook'&gt;https://cis-india.org/internet-governance/blog/government-giving-free-publicity-worth-40-k-to-twitter-and-facebook&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Akriti Bopanna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Google</dc:subject>
    
    
        <dc:subject>Instagram</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    
    
        <dc:subject>Twitter</dc:subject>
    
    
        <dc:subject>YouTube</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Featured</dc:subject>
    
    
        <dc:subject>Google Plus</dc:subject>
    
    
        <dc:subject>Facebook</dc:subject>
    
    
        <dc:subject>Homepage</dc:subject>
    

   <dc:date>2018-04-27T09:52:26Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/raw/indian-express-nishant-shah-april-8-2018-digital-native-delete-facebook">
    <title>Digital Native: Delete Facebook?</title>
    <link>https://cis-india.org/raw/indian-express-nishant-shah-april-8-2018-digital-native-delete-facebook</link>
    <description>
        &lt;b&gt;You can check out any time you like, but you can never leave.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article was &lt;a class="external-link" href="http://indianexpress.com/article/technology/social/digital-native-delete-facebook-5127198/"&gt;published in Indian Express&lt;/a&gt; on April 8, 2018.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One fine day, we all woke up and were told that &lt;/span&gt;&lt;a href="http://indianexpress.com/about/facebook/"&gt;Facebook&lt;/a&gt;&lt;span&gt; sold our data to Cambridge Analytica and then they made dastardly profiles of us to target us with advertisement and political propaganda, so, we made a beeline for #DeleteFacebook. The most surprising part about the expose is how much of a non-event it is. We have been warned, at least since the Edward Snowden revelations, if not earlier, that our data is the new oil, coal and gold. It is being used as a resource, it is being mined from our everyday digital transactions, and it is precious because it can result in a massive social engineering without our consent or knowledge. Ever since Facebook started expanding its domain from being a friends-poke-friends-with-livestock website, we have been warned that the ambition of Facebook was never to connect you with your friends but to be your friend.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;&lt;span&gt;Time and again, we have been told that the sapient Facebook algorithm remembers everything you say and do, anticipates all your future needs, and listens to the most banal litany of your life. More than your mom, your partner or your shrink, it’s the Facebook algorithm which is interested in all your quotidian uselessness. It is not the stranger who accesses your post that should worry you. The biggest perpetrator of privacy violations on Facebook is Facebook itself. There is good reason why a company that offers its prime products for free is valuated as one of the richest corporations in the world. The product of Facebook – it has always been known – is us.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;&lt;span&gt;&lt;span&gt;Why, then, are we suddenly taken aback at the fact that Facebook sold us? And while we are sharing our thoughts (ironically on Facebook) about deleting our profiles, the question that remains is this: How much of your digital life are you willing to erase? Because, and I am sorry if this pricks your filter bubble, Facebook’s problem is not really a Facebook problem. It is almost the entire World Wide Web, where we lost the battle for data ownership and platform openness more than two decades ago. Name one privately owned free service that you use on the internet and I will show you the section in its “terms and services” where you have surrendered your data. In fact, you can’t even find government services, tied up with their private partners, where your data is safe and stored in privacy vaults where it won’t be abused.&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;&lt;span&gt;&lt;span&gt;&lt;span&gt;It is time to realise that the popular ’90s meme “All your base are belong to us” is the lived reality of our digital lives. As we forego ownership for convenience, as our governments sold our sovereignty for profits, and as digital corporations became behemoths that now have the capacity to challenge and write our constitutional and fundamental rights, we are waking up to a battle that has already been fought and resolved. A large part of our physical hardware to access the internet is privately owned. This means that almost all our PCs, tablets, phones, servers are owned and open to exploitation by private companies. Every time your phone does an automatic update or your PC goes into house-cleaning mode, you have to realise that you are being stored, somewhere in the cloud in ways that you cannot imagine.&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;&lt;span&gt;&lt;span&gt;&lt;span&gt;&lt;span&gt;It is tiring to hear this alarm and panic around Facebook’s data trading. Not only is it legal, it is something that has been happening for a while, most of us have been aware of it, and we have resolutely ignored it because, you know, cute cats. If somebody tells you that they are against privately owned physical property and are going to start a revolution to take away all private property and make it equally shared with the public, you would laugh at them because they are arriving at the battle scene after the war is over. This digital wokeness trend to #DeleteFacebook is the digital equivalent of that moment. If you want to fight, fight the governments and nations who can still protect us. Participate in conversations around Internet governance. Take responsibility to educate yourself about the politics of how the digital world operates. But stop trying to feel virtuous because you pulled out of a social media network, pretending that that is the end of the problem.&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/raw/indian-express-nishant-shah-april-8-2018-digital-native-delete-facebook'&gt;https://cis-india.org/raw/indian-express-nishant-shah-april-8-2018-digital-native-delete-facebook&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>nishant</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Social Media</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Facebook</dc:subject>
    
    
        <dc:subject>Researchers at Work</dc:subject>
    

   <dc:date>2018-05-06T03:08:25Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/news-18-subhajit-sengupta-how-just-355-indians-put-data-of-5-6-lakh-facebook-users-at-risk">
    <title>It Took Just 355 Indians to Mine the Data of 5.6 Lakh Facebook Users. Here's How</title>
    <link>https://cis-india.org/internet-governance/news/news-18-subhajit-sengupta-how-just-355-indians-put-data-of-5-6-lakh-facebook-users-at-risk</link>
    <description>
&lt;b&gt;Data privacy in India is still a nascent subject. Experts say cheap data has led to unprecedented Facebook penetration. Often, those who open an account are not aware of the privacy concerns.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The blog post by Subhajit Sengupta was published in &lt;a class="external-link" href="https://www.news18.com/news/india/how-just-355-indians-put-data-of-5-6-lakh-facebook-users-at-risk-1710845.html"&gt;CNN-News 18&lt;/a&gt; on April 7, 2018. Sunil Abraham was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Over 5.6 lakh Indian Facebook profiles have allegedly been compromised and their data leaked to the controversial data analytics firm Cambridge Analytica. As per the company, only 335 people in India installed the App yet they managed to penetrate over half a million profiles. &lt;br /&gt;&lt;br /&gt;So, how does this work?&lt;br /&gt;&lt;br /&gt;Once a user downloaded the quiz app called “thisisyourdigitallife”, Global Science Research Limited got access to the entire treasure trove of data. There are two mechanisms which are used for this.&lt;br /&gt;&lt;br /&gt;First, the Application Program Interface (API) of Facebook called ‘Social Graph’ allows any app to harvest the entire contact list and everything else that could be seen on a users’ friend’s profile. This would take place even for private profiles, says Sunil Abraham, Executive Director of Bangalore based research organization ‘Centre for Internet and Society’.&lt;br /&gt;&lt;br /&gt;The second way is when users have a public profile. The algorithm seeks out public profiles from the friend list and would go on multiplying from one public profile to another without any of the users even coming to know what is happening. This is like the ‘True Caller’ application, for it to get your number, you don’t need to download the software. If anyone has the app and your number, then it gets automatically logged there.&lt;br /&gt;&lt;br /&gt;Facebook says "Cambridge Analytica’s acquisition of Facebook data through the app developed by Dr Aleksandr Kogan and his company Global Science Research Limited (GSR) happened without our authorisation and was an explicit violation of our Platform policies." &lt;br /&gt;&lt;br /&gt;GSR continued to access this data from all the Facebook profiles throughout the entire lifespan of the app on the Facebook platform, which was roughly two years between 2013 and 2015. 
This means, even if a user is careful enough to not download the application but his/her profile’s privacy settings are weak, the algorithm would infiltrate the data bank.&lt;br /&gt;&lt;br /&gt;Amit Dubey, a Cyber Security Expert goes into the details of what the app did, “The app called 'thisisyourdigitallife', which was created for research work by Aleksandr Kogan, was eventually used for psychometric profiling of users and then manipulating their political biases. The app was offered to users on the pretext to take a personality test and it agreed to have their data collected for academic use only. But the app has exploited a security vulnerability of Facebook application.”&lt;br /&gt;&lt;br /&gt;Facebook “platform policy” allowed only collection of friends’ data to improve user experience in the app and barred it from being sold or used for advertising. &lt;br /&gt;&lt;br /&gt;But this kind of data scrapping is not just limited to Cambridge Analytica. The Social Media Algorithm is often abused in the world of data scavenging and analytics. Even law enforcement agencies have often used similar means to locate possible miscreants. &lt;br /&gt;&lt;br /&gt;According to Shesh Sarangdhar, Chief Executive Officer in Seclabs &amp;amp; Systems Pvt Ltd, similar data scrapping helped them unearth the terror module behind one of the attacks at an airbase last year. Shesh said that through Social Media Algorithm they would often narrow down on unknown terror modules. What his team did was to connect to the profile the whereabouts of multiple known nods converging. That is how the mastermind was located.&lt;br /&gt;&lt;br /&gt;Data privacy in India is still a nascent subject. Experts say cheap data has led to unprecedented Facebook penetration. &lt;br /&gt;&lt;br /&gt;Often, it is seen that those who open an account are not aware of the privacy concerns. But as Sunil Abraham puts it, Caveat emptor or ‘Let the Buyers Beware’ does not even apply here. 
It is not possible for anyone to go through the entire privacy policy. &lt;br /&gt;&lt;br /&gt;“So it is not even right to ask if the consumer can protect his/her own interest. Thus, the state should proactively regulate the industry,” said Abraham.&lt;br /&gt;&lt;br /&gt;Facebook has brought in a number of changes to its privacy settings. It now allows you to remove third-party apps in bulk. This welcome change has come after sustained pressure on the tech giant from users and a number of regulatory bodies across the world.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/news-18-subhajit-sengupta-how-just-355-indians-put-data-of-5-6-lakh-facebook-users-at-risk'&gt;https://cis-india.org/internet-governance/news/news-18-subhajit-sengupta-how-just-355-indians-put-data-of-5-6-lakh-facebook-users-at-risk&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Facebook</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-04-07T15:33:46Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access">
    <title>After data leak row, Facebook imposes restrictions on user data access</title>
    <link>https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access</link>
    <description>
&lt;b&gt;MeitY issues notice to Facebook even as experts debate the overall impact on the second-largest developer community.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Romita Majumdar and Kiran Rathee was published in &lt;a class="external-link" href="http://www.business-standard.com/article/current-affairs/after-data-leak-row-facebook-imposes-restrictions-on-user-data-access-118040500950_1.html"&gt;Business Standard&lt;/a&gt; on April 6, 2018.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Social media giant &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;has finally reacted to the global storm around its data privacy policies by bringing in a new set of restrictions on developers and data aggregators using the platform for data harvesting.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Two weeks ago we promised to take a hard look at the information apps can use when you connect them to &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;as well as other data practices. We will remove a developer’s ability to request data people shared with them if it appears they have not used the app in the last 3 months,” said &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;Chief Technology Officer &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=mark+schroepfer" target="_blank"&gt;Mark Schroepfer &lt;/a&gt;in a blog.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;iframe frameborder="0" height="1" marginheight="0" marginwidth="0" scrolling="no" title="3rd party ad content" width="1"&gt;&lt;/iframe&gt;&lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;&lt;span&gt;has also disabled the feature to search a user by their email address or phone number which has been abused by malicious actors and reduced the overall control that the app will have on user data.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;has also submitted its response to the Indian government saying over 500,000 people in India have been potentially affected by the data breach involving &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=cambridge+analytica" target="_blank"&gt;Cambridge Analytica.&lt;/a&gt; The government sources said as the social networking firm has now accepted that Indians’ data was compromised; it makes the issue much more important and serious. “We will wait for Cambridge Analytica’s reply and then, we will take our stand,” sources in &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=electronics" target="_blank"&gt;Electronics &lt;/a&gt;and IT Ministry said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Ministry had issued notices to both &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;and Cambridge Analytica, seeking their responses regarding the data breach of Indians and if it was used to influence elections.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The new set of restrictions clamp down on how much data app developers access on the platform and also prevent third part data providers from offering targeted marketing services on &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook.&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"India is the second largest &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;developer base and the restriction on users' data access is going to impact all of them. There will be more scrutiny in &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;apps, leading to slower approvals. Virality will reduce as explicit consent will be required for accessing friends' data and contacts list, “ said Vivek Prakash, CTO and Co-Founder, HackerEarth.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He added that there could be tighter terms of service making developers also liable for unauthorized processing of data that they collect from the apps.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Executive Director of Center for Internet and Society Sunil Abraham says that while &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;says “apps need to agree to strict requirements” and “tightening our review process” it is still not clear what these requirements are. “Instead of the promised link to whether user data was accessed by Cambridge Analytica, it would make sense for them to say &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;holds W number of records across X databases over the time period Y, which totals Z Gb while explaining what these variables stand for,” he said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Consumer data marketing company Hansa Cequity believes that digital marketing arms of most companies will finally have to consider building their own user database given the strict clampdown on third party data.“Businesses can no more use data from third party aggregators for targeted advertising. Consumer goods and entertainment related brands are likely to face some impact because they depend on access to such data,” said S Swaminathan, Co-Founder and CEO, Hansa Cequity.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some experts also believe that this move might force platforms like Twitter, &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=google" target="_blank"&gt;Google &lt;/a&gt;and &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=youtube" target="_blank"&gt;YouTube &lt;/a&gt;to rethink their policies on how much access they give advertisers and data aggregators to user data. Abraham also added that app developers and their investors have to evaluate business models that depend more on value to user rather than the amount of personal data harvested. The data that has already been harvested by the likes of &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=cambridge+analytica" target="_blank"&gt;Cambridge Analytica &lt;/a&gt;and other unknown parties, however, is beyond user control forever.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access'&gt;https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Facebook</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-04-07T15:30:31Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/economic-times-march-30-2018-your-mobile-apps-have-the-permission-to-spy-on-you">
    <title>Your mobile apps have the permission to spy on you</title>
    <link>https://cis-india.org/internet-governance/news/economic-times-march-30-2018-your-mobile-apps-have-the-permission-to-spy-on-you</link>
    <description>
&lt;b&gt;The top applications on the Android Play store in India seek permissions such as access to your camera and microphone, and the ability to modify contacts and download files without notification, depending on the use of the app.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article was published in the &lt;a class="external-link" href="https://economictimes.indiatimes.com/small-biz/startups/newsbuzz/your-mobile-apps-have-the-permission-to-spy-on-you/articleshow/63541312.cms"&gt;Economic Times&lt;/a&gt; on March 30, 2018. Pranesh Prakash was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;“What we need is, not just knowing what permissions are being sought, but &lt;span&gt;why they need such permissions,” said Pranesh Prakash, policy director of the Centre for Internet and Society.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;img alt="Untitled-2" src="https://economictimes.indiatimes.com/img/63541363/Master.jpg" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Companies such as TrueCaller say that app developers should only be permitted to collect data that they can demonstrate as proportionate and “necessary for the stated purpose of their service”.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;An Uber spokesperson said they provide users with an option to turn off certain permissions like location and phone contacts within the privacy settings on app along with explanations on what data they collect and the reason behind it. Others declined comment.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/economic-times-march-30-2018-your-mobile-apps-have-the-permission-to-spy-on-you'&gt;https://cis-india.org/internet-governance/news/economic-times-march-30-2018-your-mobile-apps-have-the-permission-to-spy-on-you&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-04-03T15:48:47Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/the-hindu-march-31-2018-saurya-sengupta-if-data-is-new-oil-how-much-an-indian-citizen-lose">
    <title>If data is the new oil, how much does an Indian citizen lose?</title>
    <link>https://cis-india.org/internet-governance/news/the-hindu-march-31-2018-saurya-sengupta-if-data-is-new-oil-how-much-an-indian-citizen-lose</link>
    <description>
        &lt;b&gt;Surveillance capitalism is the business model of the Internet, so what exactly are we talking about here?&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Saurya Sengupta was published in the &lt;a class="external-link" href="http://www.thehindu.com/sci-tech/technology/location-location-location/article23393171.ece"&gt;Hindu&lt;/a&gt; on March 31, 2018. Sunil Abraham was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;“We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.” That was the former executive chairman of Google, Eric Schmidt, trying to convince users that the tech giants did care about their privacy, ironically enough. But that was in 2010.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Fast forward eight years, and a lot has changed. The world has been rattled by revelations that the personally identifiable data of about 50 million Facebook users was breached by an analytics firm. Since then, the skeletons haven’t stopped tumbling out, with the news that the NaMo app asks for as many as 22 permissions from users, and that the official Congress app, since deleted, was vulnerable to data breach.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Bruce Schneier, an American security technologist and fellow at Harvard University’s Berkman Klein Center for Internet &amp;amp; Society, in his book &lt;em&gt;Data and Goliath&lt;/em&gt;, says: “Google knows what kind of porn each of us searches for, which old lovers we still think about, our shames, our concerns, and our secrets.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;So, what does any of this mean for us, the lay users?&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It may be helpful to start by asking what this ‘data’ is. “Whenever you use any service on your phone or browser, you end up giving a lot more information than you consciously recall. This includes not just the content of your interactions, but also metadata and so on,” says Nayantara Ranganathan, manager of the &lt;a href="http://www.thehindu.com/tag/541-428/internet/?utm=bodytag"&gt;&lt;span&gt;Internet &lt;/span&gt;&lt;/a&gt;Democracy Project’s Freedom of Expression programme. Metadata is, simply put, data about your data. So, for example, your location information, what time you were home, how many times you made calls to a certain number, and so on.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“This is known as behavioural data,” says Sunil Abraham, executive director of The Centre for Internet &amp;amp; Society, “which includes how fast or slow you scrolled, how long you stayed on a page, how many times you went to a particular part of a website, and so on.”&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;&lt;/h2&gt;
&lt;h2 style="text-align: justify; "&gt;Bhajan or you?&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;This is not just data gathered by the large Facebook and Gmail apps, but also by a lot of the smaller ones. An app that plays bhajans, for example, may mine your data and share it. And what do the third parties do with this? Well, the idea is to simply embed you further in a consumerist panopticon.&lt;/p&gt;
&lt;div class="infobox-container ng_infobox" style="float: left; text-align: justify; "&gt;
&lt;div class="infobox-heading"&gt;To FB or not to be&lt;/div&gt;
&lt;div class="infobox-description"&gt;
&lt;ul&gt;
&lt;li&gt;As #DeleteFacebook gets louder, users agonise about leaving Facebook on Facebook, irony be damned&lt;/li&gt;
&lt;li&gt;Truth is, quitting FB won't help. Because it's also about Google Photos and Maps and Candy Crush and Which Disney Villain Are You&lt;/li&gt;
&lt;li&gt;In the absence of laws, you've no control of what apps can do with your data. Even after you've 'deleted' it&lt;/li&gt;
&lt;li&gt;Facebook doesn't take responsibility for data collected by apps, and refers users to app developers instead&lt;/li&gt;
&lt;li&gt;Quitting FB and other apps might be a privilege and not an option for most&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p style="text-align: justify; "&gt;“Surveillance capitalism is the business model of the Internet, and all social media apps make their money collecting data on users and monetising that,” says Schneier.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Lots of apps have no revenue generation. Their only benefit is data,” says Manan Shah, founder and CEO of Avalance Global Solutions, a cyber security firm. In fact, he says, apps like WhatsApp are the obvious suspects while the smaller ones, like the bhajan one, slip under the radar.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;All of it is part of ‘lead generation’ — the process of identifying potential customers for a service or business. “A call-centre is useless without data,” Shah says. “If I want to sell you an antivirus, for instance, a company will identify filters — who owns a computer, who has already purchased an antivirus, and so on. I can then target that user. This filtered data is often your full name, bank details, data about your debit and credit cards. Abraham says there is another fairly obvious purpose for all this data collection – to get you to spend as much time on the said platform as possible.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This explains why, for example, when you Google something, the suggested searches are often tailored in an eerie manner. If you search for a word, the second search suggestion will offer to get that word translated into the local language. So if you’re in Chennai, Tamil, or into Marathi if you’re in Mumbai.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This a product of profiling your location data as well as behavioural data. “Imagine the kind of insights your location information over the course of a month can expose: your residence, where you spend your mornings, your route to work, your loved one’s residence, and more,” says Ranganathan.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Users are often not aware that they’ve given their consent to sharing this data,” says Nikhil Pahwa, digital rights activist. “The terms and conditions of every app are so complicated and voluminous that often you have no way of knowing what something is being used for and what you’ve given your permission to. That’s a failure of the kind of consent we have today,” he says. If an app developer, quips Pahwa, puts in a condition saying the user will name their first child after the app, the user is more than likely to click on ‘I agree’.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While the failure to make consent transparent is illegal, data collection in itself is a grey area. And what constitutes ‘misuse’ of data is murky because of the lack of regulations and clear outlines. “What if a salon has your phone number and sends an SMS saying your haircut is due,” asks S. Anand, CEO of data science firm Gramener. “Would you consider that misuse?”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It gets more ominous.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;&lt;/h2&gt;
&lt;h2 style="text-align: justify; "&gt;We’ll use it some day&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;At present, India has no law to stop apps from sharing your data with data brokers or data analytics firms. “The tendency has been to collect as much data as you can, even if it isn't relevant to your business today, because it might be some day or, better still, it might be valuable to others,” says Amba Kak, a Mozilla technology policy fellow. “This is why we need a law to say — collect what you need, not what you want.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;As an Indian citizen, your data today is breached, misused or sold, there is little you can do about it. “At most, users can be more vigilant about the apps they download, what permissions they give, and evaluate whether there are better alternatives,” says Ranganathan.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“One can approach a court and seek redress under the IT Act,” says Abraham, “but only if you have suffered a loss of property or money. If your data has been breached or leaked, and you haven’t suffered a monetary or property loss, there’s nothing you can do.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Justice Srikrishna committee, set up in July, is right now working on a draft data protection bill. The committee published a white paper last November, and a final report is expected by end of May. “The white paper itself looks fantastic,” Abraham tells me.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;An ideal data protection law, says Kak, “will reflect the Supreme Court’s recent decision that all interference with the right to privacy must be necessary and proportionate.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;If data sharing is inevitable in the digital age, then it could be made illegal, for instance, to share data that can identify individuals. Anand says, “This could be done by replacing all names with a new random name or by aggregating total purchases by store and product rather than by individual purchase.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;So in an era where we have been casually asked to accept that ‘data is the new oil’, who is the biggest loser? “Framing 'data' as the new oil is dangerous,” says Ranganathan. Kak agrees: “This is a tired analogy that doesn't seem to get us anywhere except to recognise that data is a source of profit for the private sector.” She would rather go with Turkish sociologist Zeynep Tufekci’s definition where we think of data privacy like clean air or safe drinking water. “It is a public good that we need to safeguard as a collective through laws that make controllers of data accountable,” says Kak.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/the-hindu-march-31-2018-saurya-sengupta-if-data-is-new-oil-how-much-an-indian-citizen-lose'&gt;https://cis-india.org/internet-governance/news/the-hindu-march-31-2018-saurya-sengupta-if-data-is-new-oil-how-much-an-indian-citizen-lose&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-04-03T15:42:31Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/financial-times-march-28-2018-narendra-modi-personal-app-sparks-india-data-privacy-row">
    <title>Narendra Modi’s personal app sparks India data privacy row</title>
    <link>https://cis-india.org/internet-governance/news/financial-times-march-28-2018-narendra-modi-personal-app-sparks-india-data-privacy-row</link>
    <description>
        &lt;b&gt;PM’s NaMo app sends user data to third party in US, says researcher.&lt;/b&gt;
        &lt;p&gt;&lt;span style="text-align: justify; "&gt;Sunil Abraham was quoted in the article published by &lt;/span&gt;&lt;a class="external-link" href="https://www.ft.com/content/896cf574-31c0-11e8-b5bf-23cb17fd1498" style="text-align: justify; "&gt;Financial Times&lt;/a&gt;&lt;span style="text-align: justify; "&gt; on March 28, 2018.&lt;/span&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;“People are outraged that there is a peephole,” says Sunil Abraham, executive director &lt;span&gt;of the Bangalore-based Centre for Internet and Society, a non-profit research &lt;/span&gt;&lt;span&gt;organisation. “They are not outraged that anyone has looked into the peephole — &lt;/span&gt;&lt;span&gt;because there is no evidence of that yet.”&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;For Mr Abraham, however, the controversy demonstrates that “Indian political parties have a voracious appetite for political data. If unchecked by law or public outrage, they &lt;span&gt;will continue to hoover up as much data as they can from our devices.”&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;div id="_mcePaste" style="text-align: justify; "&gt;&lt;span&gt;“Privacy is definitely a political issue,” says Mr. Abraham. “Political parties are reacting not because they will get into trouble under the law. They are reacting because they areafraid their supporters may not like it.”&lt;/span&gt;&lt;/div&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/financial-times-march-28-2018-narendra-modi-personal-app-sparks-india-data-privacy-row'&gt;https://cis-india.org/internet-governance/news/financial-times-march-28-2018-narendra-modi-personal-app-sparks-india-data-privacy-row&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-03-28T16:17:32Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/business-standard-march-28-2018-sunil-abraham-cambridge-analytica-scandal-how-india-can-save-democracy-from-facebook">
    <title>Cambridge Analytica scandal: How India can save democracy from Facebook</title>
    <link>https://cis-india.org/internet-governance/blog/business-standard-march-28-2018-sunil-abraham-cambridge-analytica-scandal-how-india-can-save-democracy-from-facebook</link>
    <description>
        &lt;b&gt;Hegemonic incumbents like Google and Facebook need to be tackled with regulation; govt should use procurement power to fund open source alternatives.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article was published in the &lt;a class="external-link" href="http://www.business-standard.com/article/economy-policy/cambridge-analytica-scandal-how-india-can-save-democracy-from-facebook-118032800146_1.html"&gt;Business Standard&lt;/a&gt; on March 28, 2018&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;&lt;em&gt;The Cambridge Analytica scandal came to light when &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=whistleblower" target="_blank"&gt;whistleblower &lt;/a&gt;Wylie accused Cambridge Analytica of gathering details of 50 million Facebook users. Cambridge Analytica used this data to psychologically profile these users and manipulated their opinion in favour of Donald Trump. BJP and Congress have accused each other of using the services of Cambridge Analytica in India as well. How can India safeguard the democratic process against such intervention? The author tries to answer this question in this Business Standard Special.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;&lt;em&gt;&lt;/em&gt;&lt;/strong&gt;Those that celebrate the big data/artificial intelligence moment claim that traditional approaches to data protection are no longer relevant and therefore must be abandoned. The Cambridge Analytica episode, if anything, demonstrates how wrong they are. The principles of data protection need to be reinvented and weaponized, not discarded. In this article I shall discuss the reinvention of three such data protection principles. Apart from this I shall also briefly explore competition law solutions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;&lt;em&gt;Collect data only if mandated by regulation&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;&lt;em&gt;&lt;/em&gt;&lt;/strong&gt;One, data minimization is the principle that requires the data controller to collect data only if mandated to do so by regulation or because it is a prerequisite for providing a functionality. For example, Facebook’s messenger app on Android harvests call records and meta-data, without any consumer facing feature on the app that justifies such collection. Therefore, this is a clear violation of the data minimization principle. One of the ways to reinvent this principle is by borrowing from the best practices around warnings and labels on packaging introduced by the global anti-tobacco campaign. A permanent bar could be required in all apps, stating ‘Facebook holds W number of records across X databases over the time period Y, which totals Z Gb’. Each of these alphabets could be a hyperlink, allowing the user to easily drill down to the individual data record.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;em&gt;&lt;strong&gt;Consent must be explicit, informed and voluntary&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;em&gt;&lt;strong&gt;&lt;/strong&gt;&lt;/em&gt;Two, the principle of consent requires that the data controller secure explicit, informed and voluntary consent from the data subject unless there are exceptional circumstances. Unfortunately, consent has been reduced to a mockery today through obfuscation by lawyers in verbose “privacy notices” and “terms of services”. To reinvent consent we need to bring ‘Do Not Dial’ registries into the era of big data. A website maintained by the future Indian data protection regulator could allow individuals to check against their unique identifiers (email, phone number, Aadhaar). The website would provide a list of all data controllers that are holding personal information against a particular unique identifier. The data subject should then be able to revoke consent with one-click. Once consent is revoked, the data controller would have to delete all personal information that they hold, unless retention of such information is required under law (for example, in banking law). One-click revocation of consent will make data controllers like Facebook treat data subjects with greater respect.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;em&gt;&lt;strong&gt;There must be a right to &lt;/strong&gt;&lt;/em&gt;&lt;em&gt;&lt;strong&gt;explanation&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;em&gt;&lt;strong&gt;&lt;/strong&gt;&lt;/em&gt;Three, the right to explanation, most commonly associated with the General Data Protection Directive from the EU, is a principle that requires the data controller to make transparent the automated decision-making process when personal information is implicated. So far it has been seen as a reactive measure for user empowerment. In other words, the explanation is provided only when there is a demand for it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Facebook feeds that were used for manipulation through micro-targeting of content is an example of such automated decision making. Regulation in India should require a user empowerment panel accessible through a prominent icon that appears repeatedly in the feed. On clicking the icon the user will be able to modify the objectives that the algorithm is maximizing for. She can then choose to see content that targets a bisexual rather than a heterosexual, a Muslim rather than a Hindu, a conservative rather a liberal, etc. At the moment, Facebook only allows the user to stop being targeted for advertisements based on certain categories. However, to be less susceptible to psychological manipulation, the user should be allowed to define these categories, for both content and advertisements.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;em&gt;&lt;strong&gt;How to fix the business model?&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;em&gt;&lt;strong&gt;&lt;/strong&gt;&lt;/em&gt;From a competition perspective, Google and Facebook have destroyed the business model for real news, and replaced it with a business model for fake news, by monopolizing digital advertising revenues. Their algorithms are designed to maximize the amount of time that users spend on their platforms, and therefore, don’t have any incentive to distinguish between truth and falsehood. This contemporary crisis requires three types of interventions: one, appropriate taxation and transparency to the public, so that the revenue streams for fake news factories can be ended; two, the construction of a common infrastructure that can be shared by all traditional and new media companies in order to recapture digital advertising revenues; and three, immediate action by the competition regulator to protect competition between advertising networks operating in India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;em&gt;&lt;strong&gt;The Google challenge&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;em&gt;&lt;strong&gt;&lt;/strong&gt;&lt;/em&gt;With Google, the situation is even worse, since Google has dominance in both the ad network market and in the operating system market. During the birth of competition law, policy-makers and decision-makers acted to protect competition per se. This is because they saw competition as an essential component of democracy, open society, innovation, and a functioning market. When the economists from the Chicago school began to influence competition policy in the USA, they advocated for a singular focus on the maximization of consumer interest. The adoption of this ideology has resulted in competition regulators standing powerlessly by while internet giants wreck our economy and polity. We need to return to the foundational principles of competition law, which might even mean breaking Google into two companies. The operating system should be divorced from other services and products to prevent them from taking advantage of vertical integration. We as a nation need to start discussing the possible end stages of such a breakup.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In conclusion, all the fixes that have been listed above require either the enactment of a data protection law, or the amendment of our existing competition law. This, as we all know, can take many years. However, there is an opportunity for the government to act immediately if it wishes to. By utilizing procurement power, the central and state governments of India could support free and open source software alternatives to Google’s products especially in the education sector. The government could also stop using Facebook, Google and Twitter for e-governance, and thereby stop providing free advertising for these companies for print and broadcast media. This will make it easier for emerging firms to dislodge hegemonic incumbents.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/business-standard-march-28-2018-sunil-abraham-cambridge-analytica-scandal-how-india-can-save-democracy-from-facebook'&gt;https://cis-india.org/internet-governance/blog/business-standard-march-28-2018-sunil-abraham-cambridge-analytica-scandal-how-india-can-save-democracy-from-facebook&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>sunil</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Social Media</dc:subject>
    
    
        <dc:subject>Facebook</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-03-28T15:44:00Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/business-standard-mayank-jain-march-27-2018-uidai-servers-or-third-parties-aadhaar-leaks-are-dangerous-experts">
    <title>UIDAI servers or third parties, Aadhaar leaks are dangerous: Experts</title>
    <link>https://cis-india.org/internet-governance/news/business-standard-mayank-jain-march-27-2018-uidai-servers-or-third-parties-aadhaar-leaks-are-dangerous-experts</link>
    <description>
        &lt;b&gt;Even though the UIDAI has denied these reports, its arguments rest on shaky grounds, according to experts.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Mayank Jain was published in &lt;a class="external-link" href="http://www.business-standard.com/article/current-affairs/uidai-servers-or-third-parties-aadhaar-leaks-are-dangerous-experts-118032601008_1.html"&gt;Business Standard&lt;/a&gt; on March 27, 2018. Pranesh Prakash was quoted.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;The government has told the Supreme Court that the Aadhaar data “remains safely behind 13-feet high walls” and it will take “the age of the universe” to break one key in the Unique Identification Authority of India’s (UIDAI’s) encryption.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Even if this claim is taken at face value, experts suggest leaks from third-party databases seeded with Aadhaar numbers are equally dangerous and the UIDAI is responsible for the damage. &lt;span&gt;The most recent case came from a report published online and it said random numbers could provide access to the Aadhaar data, which also includes people’s financial information, from a state-owned company’s database. &lt;/span&gt;&lt;span&gt;Even though the UIDAI has denied these reports, its arguments rest on shaky grounds, according to experts.“There is no truth in this story as there has been absolutely no breach of the UIDAI’s Aadhaar database.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Aadhaar remains safe and secure,” the UIDAI said on Twitter shortly after the story broke on ZDNet.The authority added even if the report was taken to be true, “it would raise security concerns on the database of that Utility Company and has nothing to do with the security of the UIDAI’s Aadhaar database”.This has been the authority’s defence in several such cases but those in the know of things say it doesn’t hold water simply because the Aadhaar data is not concentrated in the UIDAI’s complexes anymore and has spread across various databases.“Publishing this by the state entities is a violation under the Aadhaar Act.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Even if you publish your Aadhaar number, it is a violation of the law,” said Pranesh Prakash, policy director at the Centre for Internet and Society.“Saying that the UIDAI has not been compromised is thoroughly insufficient because for customers, it doesn’t matter if the leak comes from servers operated by the UIDAI or from others holding copies of the UIDAI database.”Prakash said it should be the authority’s responsibility to help others comply with the law and prevent data leaks.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He gave the example of biometric leaks from Gujarat government servers and how criminals used them to forge fingerprints.The possibility of data leaks was demonstrated when Robert Baptiste, purportedly a French app developer, announced on Twitter how he got access to thousands of scanned Aadhaar card copies through simple Google searches.In an interview to Business Standard, Baptiste said the major threat was data handling by third parties, which could lead to identity theft.Even the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016, has provisions that debar making public citizens’ Aadhaar-related information public unless required for certain purposes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Whoever intentionally discloses, transmits, copies or otherwise disseminates any identity information collected in the course of enrolment or authentication to any person not authorised under this Act” can be in jail for three years and pay a fine of ~10,000 under the Act.A lawyer appearing on the petitioners’ side in the ongoing Supreme Court case on the constitutional validity of Aadhaar said only the UIDAI had the powers to file cases against people who published Aadhaar information. Hence everyone else is helpless despite the leaks.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The UIDAI’s argument that Aadhaar information can’t be misused is duplicitous because the regulations under the Aadhaar Act assure individuals that if biometric authentication fails, they should have other means of identifying themselves, says Kiran Jonnalagadda, founder of HasGeek.“So the regulations guarantee that anyone in possession of stolen identity information will be able to misuse it without biometric authentication,” he said.Prakash agreed with this. He said demographic authentication, which is an acceptable authentication method under the Aadhaar Act, was prone to misuse as long as Aadhaar numbers remained public.“Aadhaar is used as just a piece of paper, unlike security features embedded in passports or even permanent account number cards. Thus, demographic authentication merely involves providing Aadhaar numbers and details like addresses, which can be used even for things like getting entry into an airport by just printing a ticket and having a fake Aadhaar,” he said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;em&gt;Queries sent to the UIDAI were not answered till the time of going to press&lt;/em&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/business-standard-mayank-jain-march-27-2018-uidai-servers-or-third-parties-aadhaar-leaks-are-dangerous-experts'&gt;https://cis-india.org/internet-governance/news/business-standard-mayank-jain-march-27-2018-uidai-servers-or-third-parties-aadhaar-leaks-are-dangerous-experts&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-03-27T02:16:55Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/economic-times-march-26-2018-nilesh-christopher-security-experts-say-need-to-secure-aadhaar-ecosystem-warn-about-third-party-leaks">
    <title>Security experts say need to secure Aadhaar ecosystem, warn about third party leaks </title>
    <link>https://cis-india.org/internet-governance/news/economic-times-march-26-2018-nilesh-christopher-security-experts-say-need-to-secure-aadhaar-ecosystem-warn-about-third-party-leaks</link>
    <description>
&lt;b&gt;The public reckoning of data leaks in India’s national ID database, Aadhaar, is still on hold, while reports of data leakage through third parties keep coming.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Nilesh Christopher was published in &lt;a class="external-link" href="https://economictimes.indiatimes.com/news/politics-and-nation/there-is-a-need-to-secure-full-aadhaar-ecosystem-experts/articleshow/63459367.cms"&gt;Economic Times&lt;/a&gt; on March 26, 2018. Sunil Abraham was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;While the Unique Identification Authority of India (UIDAI) has maintained that its database is secure and there are no breaches of &lt;a class="external-link" href="https://economictimes.indiatimes.com/topic/Aadhaar"&gt;Aadhaar&lt;/a&gt; data from its system, security researchers warn that leaks are happening in third-party sites and it is important for the agency to ensure that its ecosystem adopts measures to keep data safe.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While the Unique Identification Authority of India (&lt;a class="external-link" href="https://economictimes.indiatimes.com/topic/UIDAI"&gt;UIDAI&lt;/a&gt;) has maintained that its database is secure and there are no breaches of Aadhaar data from its system, security researchers warn that leaks are happening in third-party sites and it is important for the agency to ensure that its ecosystem adopts measures to keep data safe.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Securing an entire ecosystem is more important than secure individual databases,” said security researcher Srinivas Kodali. Over the weekend, technology publication &lt;a class="external-link" href="https://economictimes.indiatimes.com/topic/ZDnet"&gt;ZDnet &lt;/a&gt;citing an Indian security researcher said that it identified Aadhaar data leaks on a system run by a state-owned utility company &lt;a class="external-link" href="https://economictimes.indiatimes.com/topic/Indane"&gt;Indane&lt;/a&gt; that allowed anyone to access sensitive information like a name, Aadhar number, bank details. The leak was plugged soon after the report appeared.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;UIDAI came out with a strong statement denying the breach. “There is no truth in the story as there has been absolutely no breach of UIDAI’s Aadhaar database. Aadhaar remains safe and secure,” the government agency said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There have been no reports of any breach in the core database so far. However, it is the third-parties that have acted as weak links.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“The simple parallel that can be drawn is, though Facebook’s core database of users information was secure, the data leak happened through third-party developers and organisation like Cambridge Analytica that have allegedly misused it,” Kodali said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In case of Aadhar too, the allegations of breaches have not been on ‘Aadhaar database’ but rather at insecure government websites and third-parties with API access to the database. “In this aspect, the issue in Facebook and Aadhaar is similar. In both the cases there was no breach of database, but it was third parties that acted as the weakest link. In both cases, it was a legitimate means of access through API that was open for abuse,” said Sunil Abraham, executive director, Center for Internet and Society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;UIDAI could take a leaf from Indian Space Research Organisation while handling &lt;a class="external-link" href="https://economictimes.indiatimes.com/topic/data-breach"&gt;data breach&lt;/a&gt; reports. The state-run space agency put out a note appreciating security researches for their efforts. An email ID to report flaws is more important than summoning people regarding data breaches.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“The fear of criminal prosecution hanging over the heads of ethical hackers would not help us develop a robust and strong security architecture,” said Karan Saini, a Delhi-based security researcher who first highlighted the Aadhaar leak at Indane.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“UIDAI is working on a policy to enable security experts to report issues in a legal and safe manner,” tweeted Ajay Bhushan Pandey, chief executive of India's Unique Identification Authority (UIDAI), the government department that administers the Aadhaar database. Seven months after the tweet, Pandey’s promise of a bug-reporting mechanism has still has not fructified.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/economic-times-march-26-2018-nilesh-christopher-security-experts-say-need-to-secure-aadhaar-ecosystem-warn-about-third-party-leaks'&gt;https://cis-india.org/internet-governance/news/economic-times-march-26-2018-nilesh-christopher-security-experts-say-need-to-secure-aadhaar-ecosystem-warn-about-third-party-leaks&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-03-26T22:37:30Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/first-post-march-26-2018-indian-it-firms-not-ready-for-european-unions-proposed-privacy-laws-only-a-few-compliant-with-gdpr">
    <title>Indian IT firms not ready for European Union's proposed privacy laws, only a few compliant with GDPR</title>
    <link>https://cis-india.org/internet-governance/news/first-post-march-26-2018-indian-it-firms-not-ready-for-european-unions-proposed-privacy-laws-only-a-few-compliant-with-gdpr</link>
    <description>
        &lt;b&gt;Only a third of Indian IT firms are compliant with the European Union's General Data Protection Regulation (GDPR), which will come into force on 25 May, according to a media report.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article was &lt;a class="external-link" href="https://www.firstpost.com/business/indian-it-firms-not-ready-for-european-unions-proposed-privacy-laws-only-a-few-compliant-with-gdpr-4405679.html"&gt;published in First Post&lt;/a&gt; on March 26, 2018.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The GDPR, the EU's new online privacy rules, is designed to protect users' online privacy. The European Parliament has adopted the regulation but European governments have yet to approve the text.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Only 30-35 percent of all IT/ITeS companies have started their journey to work towards GDPR compliance,” Jaspreet Singh, Cyber Security Partner at EY, was quoted as saying by &lt;em&gt;&lt;a href="https://economictimes.indiatimes.com/tech/ites/only-a-third-of-indian-it-companies-ready-for-eu-privacy-laws/articleshow/63456683.cms" rel="nofollow" target="_blank"&gt;The Economic Times&lt;/a&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The GDPR is applicable to companies globally, and has significant potential financial penalties. Damages of any breach of privacy of user data from Europe could cost companies as much as four percent of their revenue, according to &lt;em&gt;The Economic Times&lt;/em&gt;. For the Indian IT sector, Europe ranks number two in terms of the amount of business it drives, with US still taking the lead.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Indian firms, according to &lt;a href="http://www.business-standard.com/article/companies/indian-firms-slow-on-cybersecurity-might-gain-from-eu-s-upcoming-gdpr-118030200683_1.html" rel="nofollow" target="_blank"&gt;&lt;em&gt;Business Standard&lt;/em&gt;&lt;/a&gt;, are struggling to understand the GDPR policies. A survey by EY had shown that 60 percent of Indian respondents were unfamiliar with the new regulation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"When asked to describe their company’s current status with respect to complying with the GDPR, only 33 percent of respondents said that they have a plan, while 39 percent said that they are not familiar with the GDPR at all and 17 percent said that they have heard of the GDPR but have not yet taken any action," EY’s &lt;a href="http://www.ey.com/Publication/vwLUAssets/ey-how-can-you-disrupt-risk-in-an-era-of-digital-transformation/$FILE/ey-how-can-you-disrupt-risk-in-an-era-of-digital-transformation.pdf" rel="nofollow" target="_blank"&gt;Global Forensic Data Analytics Survey&lt;/a&gt; 2018 had said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;What the GDPR is all about?&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The GDPR attempts to unify data protection laws across the EU. It applies to all companies, regardless of location, that process the personal data of people living in the European Union.  It aims to strengthen the protection of EU citizens' personal details. It will apply to all companies, including those outside of the EU.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The GDPR is considered the biggest shake-up of personal data privacy rules since the birth of the internet. It is intended to give European citizens more control over their online information.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Under the new regulation, users will be asked once and for all whether to accept cookies, rather than every time they visit a new website. Users will have the option of going invisible online, while the rules enshrine the so-called "right to be forgotten" legislation. The industries most deeply affected will be those that collect large amounts of customer data and include technology companies, retailers, healthcare providers, insurers and banks.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Companies must be able to provide European customers with a copy of their personal data and under some circumstances delete it at their behest. They will also be required to report data breaches within 72 hours.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;How Indian firms will be affected?&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to &lt;a href="https://cis-india.org/internet-governance/files/gdpr-and-india" rel="nofollow" target="_blank"&gt;a study published by The Centre for Internet and Society&lt;/a&gt;, as a result of GDPR, data protection procedures like breach notification; excessive documentation and appointment of data protection officer may have to be incorporated in the Indian laws as well.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"As non – compliance involves high fines, inability of India or the organizations situated in India to qualify as data secure destinations is likely to divert business opportunities to safer locations," the study said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;em&gt;(With inputs from agencies)&lt;/em&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/first-post-march-26-2018-indian-it-firms-not-ready-for-european-unions-proposed-privacy-laws-only-a-few-compliant-with-gdpr'&gt;https://cis-india.org/internet-governance/news/first-post-march-26-2018-indian-it-firms-not-ready-for-european-unions-proposed-privacy-laws-only-a-few-compliant-with-gdpr&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-04-18T00:56:20Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
