<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">




    



<channel rdf:about="https://cis-india.org/internet-governance/blog/online-anonymity/search_rss">
  <title>We are anonymous, we are legion</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 601 to 615.</description>
  
  
  
  
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        
            <rdf:li rdf:resource="https://cis-india.org/jobs/vacancy-for-short-term-consultant-cyber-security"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/deccan-herald-nina-c-george-april-17-2018-sad-truth-brutality-porn-has-many-takers-in-india"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/icann-61-readout"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/economic-times-nilesh-christopher-april-13-2018-facebooks-fake-news-clean-up-hits-language-barrier"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/bloomberg-quint-aayush-ailawadi-april-15-2018-is-this-the-beginning-of-the-end-for-facebook"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/livemint-prashant-k-nanda-and-komal-gupta-pension-wont-be-denied-for-want-of-aadhaar-epfo"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/the-week-anita-babu-april-8-2018-it-feeds-on-you"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/events/workshop-on-python"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/government-giving-free-publicity-worth-40-k-to-twitter-and-facebook"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/raw/indian-express-nishant-shah-april-8-2018-digital-native-delete-facebook"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/hindu-businessline-april-6-2018-govt-websites-face-major-outage-hacking-ruled-out"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/hindustan-times-vidhi-choudhary-and-yashwant-raj-facebook-data-breach-hit-over-5-6-lakh-users-in-india"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/news-18-subhajit-sengupta-how-just-355-indians-put-data-of-5-6-lakh-facebook-users-at-risk"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access"/>
        
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/jobs/vacancy-for-short-term-consultant-cyber-security">
    <title>Short-term Consultant (Cyber Security)</title>
    <link>https://cis-india.org/jobs/vacancy-for-short-term-consultant-cyber-security</link>
    <description>
        &lt;b&gt;The Centre for Internet &amp; Society is seeking an individual with a strong understanding of cyber security to contribute to its cyber security research under its Internet Governance programme.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Research topics include economic incentives for cyber security, cross border sharing of data, India’s cyber security framework, and cybersecurity dimensions of e-governance.&lt;/p&gt;
&lt;p dir="ltr"&gt;Note: This position is consultancy based on output.&lt;/p&gt;
&lt;p dir="ltr"&gt;Compensation: Based on experience and output.&lt;/p&gt;
&lt;p dir="ltr"&gt;Application requirements: two writing samples and CV&lt;/p&gt;
&lt;p dir="ltr"&gt;Contact: &lt;a href="mailto:elonnai@cis-india.org"&gt;elonnai@cis-india.org&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/jobs/vacancy-for-short-term-consultant-cyber-security'&gt;https://cis-india.org/jobs/vacancy-for-short-term-consultant-cyber-security&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>elonnai</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-04-20T01:27:36Z</dc:date>
   <dc:type>Page</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi">
    <title>Artificial Intelligence in Governance: A Report of the Roundtable held in New Delhi</title>
    <link>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi</link>
    <description>
        &lt;b&gt;This report provides an overview of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance, conducted at the India Islamic Cultural Centre in New Delhi on March 16, 2018. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context. The event was attended by participants from academia, civil society, the legal sector, the finance sector, and the government.&lt;/b&gt;
        &lt;p&gt;&lt;span&gt;Event Report: &lt;/span&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/files/ai-in-governance"&gt;Download&lt;/a&gt;&lt;span&gt; (PDF)&lt;/span&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;This report provides a summary of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance (hereinafter referred to as ‘the Roundtable’). The Roundtable took place at the India Islamic Cultural Centre in New Delhi on March 16, 2018 and included participation from academia, civil society, law, finance, and government. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Roundtable began with a presentation by Amber Sinha (Centre for Internet and Society - CIS) providing an overview of the CIS’s research objectives and findings thus far. During this presentation, he defined both AI and the scope of CIS’s research, outlining the areas of law enforcement, defence, education, judicial decision making, and the discharging of administrative functions as the main areas of concern for the study. The presentation then outlined the key AI deployments and implementations that have been identified by the research in each of these areas. Lastly, the presentation raised some of the ethical and legal concerns related to this phenomenon.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The presentation was followed by the Roundtable discussion, which covered various topics regarding the usages, challenges, ethical considerations and implications of AI in the sector. This report has identified a number of key themes evident throughout these discussions. These themes include: (1) the meaning and scope of AI, (2) AI’s sectoral applications, (3) human involvement with automated decision making, (4) social and power relations surrounding AI, (5) regulatory approaches to AI, and (6) challenges to adopting AI. These themes in relation to the Roundtable are explored further below.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Meaning and Scope of AI&lt;/span&gt;&lt;/h3&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;One of the first tasks recommended by the group of participants was to define the meaning and scope of AI and the way those terms are used and adopted today. These concerns included the need to establish a distinction between the use of algorithms, machine learning, automation and artificial intelligence. Several participants believed that establishing consensus around these terms was essential before proceeding towards a stage of developing regulatory frameworks around them.&lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;The general fact agreed to was that AI as we understand it does not necessarily extend to complete independence in terms of automated decision making but it refers instead to the varying levels of machine learning (ML), and the automation of certain processes that has already been achieved. Several concerns that emerged during the course of the discussion centred around the question of autonomy and transparency in the process of ML and algorithmic processing. Stakeholders recommended that over and above the debates of humans in the loop [1] on the loop [2] and out of the loop, [3] there were several other gaps with respect to AI and its usage in the industry today which also need to be considered before building a roadmap for future usage. Key issues like information asymmetries, communication lags, a lack of transparency, the increased mystification of the coding process and the centralization of power all needed to be examined and analysed under the rubric of developing regulatory frameworks.&lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: The group brought out the need for standardization of terminology as well as the establishment of globally replicable standards surrounding the usage, control and proliferation of AI. The discussion also brought up the problems with universal applicability of norms. One of the participants brought up an issue regarding the lack of normative frameworks around the usage and proliferation of AI. Another participant responded to the concern by alluding to the Asilomar AI principles.[4] The Asilomar AI principles are a set of 23 principles aimed at directing and shaping AI research in the future. The discussion brought out further issues regarding the enforceability as well universal applicability of the principles and their global relevance as well. Participants recommended the development of a shorter, more universally applicable regulatory framework that could address various contextual limitations as well.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;AI Sectoral Applications&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span&gt;Participants mentioned a number of both current and potential applications of AI technologies, referencing the defence sector, the financial sector, and the agriculture sector. There are several developments taking place on the Indian military front with the Committee on AI and National Security being established by the Ministry of Defence. Through the course of the discussion it was also stated that the Indian Armed Forces were very interested in the possibilities of using AI for their own strategic and tactical purposes. From a technological standpoint, however, there has been limited progress in India in researching and developing AI. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;While India does deploy some Unmanned Aerial Vehicles (UAVs), they are mostly bought from Israel, and often are not autonomous. It was also pointed out that contrary to reportage in the media, the defence establishment in India is extremely cautious about the adoption of autonomous weapons systems, and that the autonomous technology being rolled out by the Centre for Artificial Intelligence and Robotics (CAIR) is not yet considered trustworthy enough for deployment.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Discussions further revealed that the few technologies that have a relative degree of autonomy are primarily loitering munitions and are used to target radar installations for reconnaissance purposes. One participant mentioned that while most militaries are interested in deploying AI, it is primarily from an Intelligence, Surveillance and Reconnaissance (ISR) perspective. The only exception to this generalization is China, where the military ethos and command structure would work better with increased reliance on independent AI systems. One major AI system rolled out by the US is Project Maven, which is primarily an ISR system. The aim of using these systems is to improve decision making and enhance data analysis, particularly since battlefields generate a lot of data that otherwise goes unused.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Another sector discussed was the securities market, where algorithms were used from an analytical and data collection perspective. A participant referred to the fact that machine learning was being used for processes like credit and trade scoring -- all with humans on the loop. The participant further suggested that while trade scoring was increasingly automated, the overall predictive nature of such technologies remained within a self-limiting capacity wherein statistical models, collected data and pattern analysis were used to predict future trends. The participant questioned whether these algorithms could be considered as AI in the truest sense of the term since they primarily performed statistical functions and data analysis.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One participant also recommended the application of AI to sectors like agriculture with the intention of gradually acclimatizing users to the technology itself. Respondents also stated that while AI technologies were being used in the agricultural space it was primarily from the standpoint of data collection and analysis as opposed to predictive methods. It was mentioned that a challenge to the broad adoption of AI in this sector is that the core problems of adopting AI as a methodology – namely information asymmetries, excessive data collection, limited control/centralization and the obfuscatory nature of code – would not be addressed. Lastly, participants also suggested that within the Indian framework not much was being done aside from addressing farmers’ queries and analysing the data from those concerns.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: The discussion drew attention to the various sectors where AI was currently being used -- such as the military space, agricultural development and the securities market -- as well as potential spaces of application -- such as healthcare and manual scavenging. The key challenges that emerged were information asymmetries with respect to the usage of these technologies as well as limited capacity in terms of technological advancement.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Human Involvement with Automated Decision Making&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Large parts of discussions throughout the Roundtable event were preoccupied with automated decision making and specifically, the involvement of humans (human on and in the loop) or lack thereof (human out of the loop) in this process. These discussions often distinguished between prescriptive and descriptive uses of AI.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Participants expressed that human involvement was not needed when AI was being used for descriptive uses, such as determining relationships between various variables in large data sets. Many agreed on the superior ability of ML and similar AI technologies in describing large and unorganized datasets. It was the prescriptive uses of AI where participants saw the need for human involvement, with many questioning whether the technology should make more important decisions by itself.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The need for human involvement in automated decision making was further justified by references to various instances of algorithmic bias in the American context. One participant, for example, brought up the use of algorithmic decision making by a school board in the United States for human resource practices (hiring, firing, etc.) based on the standardized test scores of students. In this instance, such practices resulted in the termination of teachers primarily from low income neighbourhoods.[5] The main challenge participants identified in regards to human on the loop automated decision making is the issue of capacity, as significant training would have to be achieved for sectors to have employees actively involved in the automated decision making workflow.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;An example in the context of the healthcare field was brought up by one participant arguing for human in the loop in regards to prescriptive scenarios. The participant suggested that AI technology, when given x-ray or MRI data for example, should only be limited to pointing out the correlations of diseases with patients’ scans/x-rays. Analysis of such correlations should be reserved for the medical expertise of doctors who would then determine if any instances of causality can be identified from this data and if it’s appropriate for diagnosing patients.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;It was emphasized that, despite a preference for human on/in the loop in regards to automated decision making, there is a need to be cognisant of techno-solutionism due to the human tendency of over reliance on technology when making decisions. A need for command and control structures and protocols was emphasized for various governance sectors in order to avoid potentially disastrous results through a checks and balances system. It was noted that the defence sector has already developed such protocols, having established a chain of command due to its long history of algorithmic decision making (e.g. the Aegis Combat System used by the US Navy since the 1980s).&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One key reason why militaries prefer human in and on the loop systems as opposed to out of the loop systems is because of the protocol associated with human action on the battlefield. International Humanitarian Law has clear indicators of what constitutes a war crime and who is to be held responsible in the scenario but developing such a framework with AI systems would be challenging as it would be difficult to determine which party ought to be held accountable in the case of a transgression or a mistake.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: It was reiterated by many participants that neither AI technology nor India’s regulatory framework is at a point where AI can be trusted to make significant decisions alone -- especially when such decisions are evaluating humans directly. It was recommended that human out of the loop decision making should be reserved for descriptive practices whereas human on and in the loop decision making should be used for prescriptive practices. Lastly, it was also suggested that appropriate protocols be put in place to direct those involved in the automated decision making workflow, particularly when the process involves judgements and complex decision making in sectors such as jurisprudence and the military.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;The Social and Power Relations Surrounding AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Some participants emphasized the need to contextualize discussions of AI and governance within larger themes of poverty, global capital and power/social relations. Their concern was that the use of AI technologies would only reinforce existing power structures, and that AI should instead be utilized towards ameliorating such issues. Manual scavenging, for example, was identified as an area where AI could be used to good effect if coupled with larger socio-political policy changes. There are several hierarchies that could potentially be reinforced through this process, and all these failings needed to be examined thoroughly before such a system was adopted and incorporated within the real world.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Furthermore, the discussion also revealed that the objectivity attributed to AI and ML tends to gloss over the fact that there are nonetheless implicit biases that exist in the minds of the creators that might work themselves into the code. Fears regarding technology recreating a more exclusionary system were not entirely unfounded, as participants pointed out that the knowledge base of the user would determine whether technology was used as a tool of centralization or democratization.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One participant also questioned the concept of governance itself, contrasting the Indian government’s usage of the term in the 1950s (as it appears in the Directive Principles) with that of the World Bank in the 1990s.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: Discussions of the implementation and deployment of AI within the governance landscape should attempt to take into consideration larger power relations and concepts of equity.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Regulatory Approaches to AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Many recognized the need for AI-specific regulations across Indian sectors, including governance. These regulations, participants stated, should draw from notions of accountability, algorithmic transparency and efficiency. Furthermore, it was also stated that such regulations should consider the variations across the different legs of the governance sector, especially in regards to defence. One participant, pointing to the larger trends towards automation, recommended the establishment of certain fundamental guidelines aimed at directing the applicability of AI in general. The participant drew attention to the need for a robust evaluation system for various sectors (the criminal justice system, the securities market, etc.) as a way of providing checks on algorithmic biases. Another emphasized the need for regulations mandating better quality data, so as to ensure machine readability and processability for various AI systems.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Another key point that emerged was the importance of examining how specific algorithms performed processes like identification or detection. A participant recommended examining the ways in which machines identify humans and what categories/biases could infiltrate machine judgement. They reiterated that if a new element was introduced in the system, the pre-existing variables would be impacted as well. The participant further recommended that it would be useful to look at these systems in terms of the couplings that get created in order to determine what kinds of relations are fostered within that system.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The roundtable saw some debate regarding the most appropriate approach to developing such regulations. Some participants argued for a harms-based approach, particularly in regards to determining if regulations are needed all together for specific sectors (as opposed to guidelines, best practices, etc.). The need to be cognisant of both individual and structural harms was emphasized, mindful of the possibility of algorithmic biases affecting traditionally marginalized groups.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Others only saw value in a harms-based approach insofar as it could help outline the appropriate penalties in the event of regulations being violated, arguing instead for a rights-based approach as it enabled greater room for technological changes. An approach that kept in mind emerging AI technologies was reiterated by a number of participants as being crucial to any regulatory framework. The need for a regulatory space that allowed for technological experimentation without the fear of constitutional violation was also communicated.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: The need for an AI-specific regulatory framework cognisant of differentiations across sectors in India was emphasized. There is some debate about the most appropriate approach for such a framework, with a harms-based approach being identified by many as providing the best perspective on regulatory need and penalties. Some identified the rights-based approach as providing the most flexibility for a rapidly evolving technological landscape.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Challenges to Adopting AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Out of all the concerns regarding the adoption of algorithms, ML and AI, the two key points of resistance that emerged centred around issues of accountability and transparency. Participants suggested that within an AI system, predictability would be a key concern, and in the absence of predictable outcomes, establishing redressal mechanisms would pose key challenges as well.&lt;/span&gt;&lt;/p&gt;
&lt;p id="_mcePaste"&gt;A discussion was also initiated regarding the problems involved in attributing responsibility within the AI chain as well as the need to demystify the process of using AI in daily life. While reiterating the current landscape, participants spoke about how the usage of AI is currently limited to the automation of certain tasks and processes in certain sectors where algorithmic processing is primarily used as a tool of data collection and analysis as opposed to an independent decision making tool.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste"&gt;One of the suggestions and thought points that emerged during the discussion was whether a gradual adoption of AI on a sectoral basis might be more beneficial as it would provide breathing room in the middle to test the system and establish trust between the developers, providers, and consumers. This prompted a debate about the controllers and the consumers of AI and how the gap between the two would need to be negotiated. The debate also brought up larger concerns regarding the mystification of AI as a process itself and the complications of translating the code into communicable points of intervention.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste"&gt;Another major issue that emerged was the question of attribution of responsibility in the case of mistakes. In the legal process as it currently exists, human imperfections notwithstanding, it would be possible to attribute the blame for decisions taken to certain actants undertaking the action. Similarly in the defence sector, it would be possible to trace the chain of command and identify key points of failure, but in the case of AI based judgements, it would be difficult to place responsibility or blame. This observation led to a debate regarding accountability in the AI chain. It was inconclusive whether the error should be attributed to the developer, the distributor or the consumer.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste" style="text-align: justify; "&gt;A suggestion that was offered in order to counter the information asymmetry as well as reduce the mystification of computational method was to make the algorithm and its processes transparent. This sparked a debate, however, as participants stated that while such a state of transparency ought to be sought after and aspired towards, it would be accompanied by certain threats to the system. A key challenge that was pointed out was the fact that if the algorithm was made transparent, and its details were shared, there would be several ways to manipulate it, translate it and misuse it.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste" style="text-align: justify; "&gt;Another question that emerged was the distribution of AI technologies and the centralization of the proliferation process particularly in terms of service provision. One participant suggested that given the limited nature of research being undertaken and the paucity of resources, a limited number of companies would end up holding the best tech, the best resources and the best people. They further suggested that these technologies might end up being rolled out as a service on a contractual basis. In which case it would be important to track how the service was being controlled and delivered. Models of transference would become central points of negotiation with alternations between procurement based, lease based, and ownership based models of service delivery. Participants suggested that this was going to be a key factor in determining how to approach these issues from a legal and policy standpoint.&lt;/p&gt;
&lt;div&gt;&lt;/div&gt;
&lt;p style="text-align: justify; "&gt;A discussion was also initiated regarding the problems involved in attributing responsibility within the AI chain as well as the need to demystify the process of using AI in daily life. While reiterating the current landscape, participants spoke about how the usage of AI is currently limited to the automation of certain tasks and processes in certain sectors where algorithmic processing is primarily used as a tool of data collection and analysis as opposed to an independent decision making tool.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One of the suggestions and thought points that emerged during the discussion was whether a gradual adoption of AI on a sectoral basis might be more beneficial as it would provide breathing room in the middle to test the system and establish trust between the developers, providers, and consumers. This prompted a debate about the controllers and the consumers of AI and how the gap between the two would need to be negotiated. The debate also brought up larger concerns regarding the mystification of AI as a process itself and the complications of translating the code into communicable points of intervention.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another major issue that emerged was the question of attribution of responsibility in the case of mistakes. In the legal process as it currently exists, human imperfections notwithstanding, it would be possible to attribute the blame for decisions taken to certain actants undertaking the action. Similarly in the defence sector, it would be possible to trace the chain of command and identify key points of failure, but in the case of AI based judgements, it would be difficult to place responsibility or blame. This observation led to a debate regarding accountability in the AI chain. It was inconclusive whether the error should be attributed to the developer, the distributor or the consumer.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A suggestion that was offered in order to counter the information asymmetry as well as reduce the mystification of computational method was to make the algorithm and its processes transparent. This sparked a debate, however, as participants stated that while such a state of transparency ought to be sought after and aspired towards, it would be accompanied by certain threats to the system. A key challenge that was pointed out was the fact that if the algorithm was made transparent, and its details were shared, there would be several ways to manipulate it, translate it and misuse it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another question that emerged was the distribution of AI technologies and the centralization of the proliferation process particularly in terms of service provision. One participant suggested that given the limited nature of research being undertaken and the paucity of resources, a limited number of companies would end up holding the best tech, the best resources and the best people. They further suggested that these technologies might end up being rolled out as a service on a contractual basis. In which case it would be important to track how the service was being controlled and delivered. Models of transference would become central points of negotiation with alternations between procurement based, lease based, and ownership based models of service delivery. Participants suggested that this was going to be a key factor in determining how to approach these issues from a legal and policy standpoint.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Takeaway Point: The two key points of resistance that emerged during the course of discussion were accountability and transparency. Participants pointed out the various challenges involved in attributing blame within the AI chain and they also spoke about the complexities of opening up AI code, thereby leaving it vulnerable to manipulation. Certain other challenges that were briefly touched upon were the information asymmetry, excessive data collection, centralization of power in the hands of the controllers and complicated service distribution models.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Conclusion&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The Roundtable provided some insight into larger debates regarding the deployment and applications of AI in the governance sector of India. The need for a regulatory framework as well as globally replicable standards surrounding AI was emphasized, particularly one mindful of the particular needs of differing fields of the governance sector (especially defence). Furthermore, a need for human on/in the loop practices with regards to automated decision making was highlighted for prescriptive instances, particularly when such decisions are responsible for directly evaluating humans. Contextualising AI within its sociopolitical parameters was another key recommendation as it would help filter out the biases that might work themselves into the code and affect the performance of the algorithm. Further, it is necessary to see the involvement and influence of the private sector in the deployment of AI for governance, it often translating into the delivery of technological services from private actors to public bodies towards discharge of public functions. This has clear implications for requirements of transparency  and procedural fairness even in private sector delivery of these services. Defining the meaning and scope of AI while working to demystify algorithms themselves would serve to strengthen regulatory frameworks as well as make AI more accessible for the user / consumer.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;[1]. Automated decision making model where final decisions are made by a human operator&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[2]. Automated decision making model where decisions can be made without human involvement but a human can override the system.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[3]. A completely autonomous decision making model requiring no human involvement&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[4]. https://futureoflife.org/ai-principles/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[5]. The participant was drawing this example from Cathy O’Neil’s Weapons of Math Destruction, (Penguin,2016), at 4-13.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi'&gt;https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Saman Goudarzi and Natallia Khaniejo</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-05-03T15:49:40Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/deccan-herald-nina-c-george-april-17-2018-sad-truth-brutality-porn-has-many-takers-in-india">
    <title>Metrolife: Brutality porn has sadly many takers in India</title>
    <link>https://cis-india.org/internet-governance/news/deccan-herald-nina-c-george-april-17-2018-sad-truth-brutality-porn-has-many-takers-in-india</link>
    <description>
        &lt;b&gt;The name of the eight-year-old Kathua rape victim is trending not just on social media but also on a porn site.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;On Monday it had topped the list of the most-searched names on a porn site, triggering surprise and outrage. An official at the Centre for Internet &amp;amp; Society, Bengaluru attributes the curiosity to a "depraved, messed-up" mind.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Swaraj Barooah, senior programme manager at the centre, says, "It takes numbers to make something trend online." He attributes the unhealthy curiosity in rape footage to a lack of proper sex education in schools.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Violent acts are ingrained in the power politics and hierarchy of our society, he observes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"There are two categories that are searched online...revenge and brutality porn. While revenge porn is usually uploaded by couples who break up, brutality porn is done without recognising the humanity of the person involved," Barooah says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Psychiatrists think people who search for rape videos have a "sick and deviant mind".&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The article by Nina C. George was published in &lt;a class="external-link" href="https://www.deccanherald.com/features/metrolife/sad-truth-brutality-porn-has-many-takers-india-665093.html"&gt;Deccan Herald&lt;/a&gt; on April 18, 2018.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/deccan-herald-nina-c-george-april-17-2018-sad-truth-brutality-porn-has-many-takers-in-india'&gt;https://cis-india.org/internet-governance/news/deccan-herald-nina-c-george-april-17-2018-sad-truth-brutality-porn-has-many-takers-in-india&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-04-19T13:15:57Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/icann-61-readout">
    <title>ICANN 61 Readout</title>
    <link>https://cis-india.org/internet-governance/news/icann-61-readout</link>
    <description>
        &lt;b&gt;Akriti Bopanna attended an ICANN61 Readout session on the 19th of April at the International Institute of Information Technology at Electronic City in Bengaluru. &lt;/b&gt;
        &lt;p&gt;Click to read the agenda &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/icann-61-agenda"&gt;here&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/icann-61-readout'&gt;https://cis-india.org/internet-governance/news/icann-61-readout&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>ICANN</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-05-05T09:18:03Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/economic-times-nilesh-christopher-april-13-2018-facebooks-fake-news-clean-up-hits-language-barrier">
    <title>Facebook’s fake news clean-up hits language barrier  </title>
    <link>https://cis-india.org/internet-governance/news/economic-times-nilesh-christopher-april-13-2018-facebooks-fake-news-clean-up-hits-language-barrier</link>
    <description>
        &lt;b&gt;The sheer diversity of India’s ethnic languages could defeat Facebook’s move to get content moderators and use artificial intelligence (AI) to counter the spread of misinformation on its platform ahead of the general elections next year, experts said. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Nilesh Christopher was published in the &lt;a class="external-link" href="https://economictimes.indiatimes.com/tech/internet/facebooks-fake-news-clean-up-hits-language-barrier/articleshow/63741507.cms"&gt;Economic Times&lt;/a&gt; on April 13, 2018. Sunil Abraham was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;More than a third of Indian users engage on the social media platform in local languages. Experts are sceptical about the extent to which AI tools could be effective in curbing fake news, given that Facebook’s AI engine is primarily trained to recognise English.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“If the whole country speaks the same language, it is not a problem. But India is a country with multiple languages and their dialects and the 20,000 global numbers at the face of it doesn’t sound enough,” says Sunil Abraham, executive director for the Centre for Internet and Society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In July 2017, India became the largest user base for Facebook with over 241million users, with a majority of them accessing the social network on their smartphones. Facebook has not shared updated user numbers since then.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On the use of AI in weeding out fake news, Abraham says that such tools usually work only in languages where there is a history of natural language processing. “Languages like English have a huge corpora (large databases of digitised content from the language). In such cases, the AI analyses the language and will be more accurate,” said Abraham. “Whereas in Indic languages, there is no training data. How they would use AI is not clear. For many Indian languages, the basic infrastructure doesn’t  exist."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“I don’t know exactly what FB claimed. But understanding local languages, Indian languages, is still an unsolved problem — either in non-free software or free software,” says Anivar Aravind, executive director of Indic Project, a nonprofit initiative working on language engineering and digital rights of native-language users. Interestingly, the two Facebook-owned platforms: WhatsApp and FB have become the preferred social medium to spread false information and largely through regional languages.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Facebook declined to comment.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On Tuesday, Facebook founder Mark Zuckerberg in his testimony to the US congress promised measures such as building and deploying AI tools that take down fake news and increasing the content moderation team to around 20,000. He cited the forthcoming elections in India to point out that Facebook would verify every political advertiser and said, “to make sure that that kind of interference that the Russians were able to do in 2016 is going to be much harder for anyone to pull off in the future.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Currently, Facebook has around 15,000 moderators who review content to identify fake news on the platform, Zuckerberg said last week. The social media giant has been accused of not protecting user privacy and allowing voter-profiling firm Cambridge Analytica to harvest personal information of 87 million Facebook users without explicit permissions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Cambridge Analytica has been accused of voter manipulation in several countries, including India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Content moderators in India are seeing business grow driven by internet platforms scrambling to curb fake news across the globe. “Zuckerberg’s mention to prevent misinformation by increasing scrutiny before elections is positive. They are the market leaders. This is likely to create more awareness and it is encouraging to hear Facebook take a proactive role,” said Suman Howladar, Founder of Foiwe Info Global Solutions, a content moderation company.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/economic-times-nilesh-christopher-april-13-2018-facebooks-fake-news-clean-up-hits-language-barrier'&gt;https://cis-india.org/internet-governance/news/economic-times-nilesh-christopher-april-13-2018-facebooks-fake-news-clean-up-hits-language-barrier&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-04-17T16:15:03Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/bloomberg-quint-aayush-ailawadi-april-15-2018-is-this-the-beginning-of-the-end-for-facebook">
    <title>Is This The Beginning Of The End For Facebook?</title>
    <link>https://cis-india.org/internet-governance/news/bloomberg-quint-aayush-ailawadi-april-15-2018-is-this-the-beginning-of-the-end-for-facebook</link>
    <description>
        &lt;b&gt;After two days of congressional hearings that collectively lasted over ten hours, there are many questions about Facebook, its policies and its future that experts are debating.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Aayush Ailawadi was &lt;a class="external-link" href="https://www.bloombergquint.com/technology/2018/04/15/is-this-the-beginning-of-the-end-for-facebook"&gt;published in Bloomberg Quint&lt;/a&gt; on April 15, 2018. Pranesh Prakash was quoted.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;Do Facebook’s privacy policies confuse more than they inform? Is the platform a near monopoly that may need to be broken? And how do you ensure that the vast wealth of data that Facebook has is not misused, particularly in elections?&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;BloombergQuint has collected views on some of these issues.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Privacy Policy Or Legalese?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Since the Cambrdge Analytica &lt;a href="https://www.bloombergquint.com/quicktakes/2018/03/21/understanding-the-facebook-cambridge-analytica-story-quicktake" target="_blank"&gt;scandal came to light&lt;/a&gt;, Facebook has been receiving a lot of flak for its ambiguous and verbose privacy and data policy. Lawmakers quizzed founder Mark Zuckerberg about how an ordinary user was expected to decipher the terms of the user agreement, something even some of the lawmakers grilling him couldn’t comprehend.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Jitendra Waral of Bloomberg Intelligence says, “It’s so complicated that nobody reads it. Essentially the data sharing beyond the Facebook ecosystem came into question here. Is it just necessary to have data sharing for the service to work? Is it restricted to you sharing your content with your friends  in your network or do the restrictions go beyond that? So basically they have a lot of work to do in terms of transparency, in terms how the data is used and shared.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;During the conversations, it also came to light that Facebook collects data even on those who don’t use the platform.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“In general we collect data on people who are not signed up for Facebook for security purposes," Zuckerberg said Wednesday &lt;a href="https://www.bloomberg.com/news/articles/2018-04-11/zuckerberg-says-facebook-collects-internet-data-on-non-users" target="_blank"&gt;in a hearing about the social network’s privacy practices in Washington&lt;/a&gt;before the House Energy and Commerce Committee.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While privacy experts and tech geeks have been crying foul for years about the data collection and storage practices adopted by tech behemoths like Facebook, this revelation by the Facebook founder was the first public acknowledgement of the fact.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Is Facebook A Monopoly?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;It’s not just data concerns that were brought up at the hearings.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Sen. Lindsey Graham asked Zuckerberg if Facebook enjoys a monopoly on the type of service it provides to its users. He asked, “If I buy a Ford and it doesn’t work well and I don’t like it, I can buy a Chevy, if I’m upset with Facebook, what’s the equivalent product that I can go sign up for?”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Zuckerberg responded to say that there are other tech companies which operate in the same sphere as Facebook does. He offered statistics of how many Americans use different social apps nowadays, in support of his argument that Facebook does not enjoy a monopoly in the tech world.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Jeff Hauser, executive director of the Revolving Door Project at the non-partisan Center for Economic and Policy Research says, “ Zuckerberg's answer to who his competitor was kind of comically unsatisfying because there is no competition for Facebook and they do have monopoly power in the United States and in many other countries across the world. ”&lt;/p&gt;
&lt;blockquote style="text-align: justify; "&gt;So one idea is to take Facebook and break it into many other parts that it acquired through previous acquisitions. Instagram would be a powerful competitor to Facebook if it was independent of Facebook. WhatsApp would be a powerful competitor to Facebook if it was an independent competitor to Facebook.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;Jeff Hauser, Center for Economic and Policy Research&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Time To Regulate The Internet?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Another big moment during the testimony was when Zuckerberg conceded that it was only a matter of time before the internet would be regulated.&lt;/p&gt;
&lt;blockquote style="text-align: justify; "&gt;He said, “The internet is growing in importance around the world in people’s lives and I think that it is inevitable that there will need to be some regulation.”&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;Waral agrees that light touch regulation is the way to prevent a Cambridge Analytica like scandal from occurring again in the future. But, he believes that regulation will only raise costs for a company like Facebook. He explains, “What it does is raise compliance costs through out the ecosystem. So, the impact on Facebook from this is that the company is going to increase expenses due to compliance costs.”&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;The Big Election(s) Year&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;During his testimony, Zuckerberg did acknowledge that a lot needs to be done to ensure data does not get misused, particularly in elections. Concerns about misuse of user data have emerged in countries like the U.S., but also in India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Last month, the Union Minister for Law and Information Technology, Ravi Shankar Prasad warned Zuckerberg that if there was any data theft of Indian users due to Facebook’s data collection practices, he would stop at nothing short of summoning the Facebook founder to India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While Pranesh Prakash, policy director at the Centre For Internet and Society, doesn’t believe the government would actually summon Zuckerberg to India, he says, “One new concern that's valid across the world, where there are limitations put on freedom of expression during times of campaigning and elections, how do they translate online? There is no typical answer to this.”&lt;/p&gt;
&lt;blockquote style="text-align: justify; "&gt;Most of the speech regulations apply to candidates and apply to  media platforms, which are largely mass media platforms. Now, social media platforms where individuals express themselves might not be regulated the same way or currently at least aren’t regulated the same way.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;Pranesh Prakash, Policy Director, Centre For Internet and Society&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Pranesh thinks it is time to re-look at the existing election laws which might not prove to be as useful now as they were some time ago.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/copy3_of_Facebook.png" alt="Facebook" class="image-inline" title="Facebook" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Hauser thinks Facebook should help users discern between fakes news and a legitimate source of news.&lt;/p&gt;
&lt;blockquote style="text-align: justify; "&gt;In the 2016 elections cycle, for fake news, a lot of bots and trolls liked them and they started appearing in the lot of users’ feeds. So the algorithm of Facebook encouraged manipulation. Facebook needs to address these concerns. I don’t think we can trust Facebook if it doesn’t make hard decisions about its algorithms. Right now, Facebook needs to say this is what the algorithm does.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;Jeff Hauser, Center for Economic and Policy Research&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/bloomberg-quint-aayush-ailawadi-april-15-2018-is-this-the-beginning-of-the-end-for-facebook'&gt;https://cis-india.org/internet-governance/news/bloomberg-quint-aayush-ailawadi-april-15-2018-is-this-the-beginning-of-the-end-for-facebook&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Social Media</dc:subject>
    
    
        <dc:subject>Facebook</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-04-17T14:44:23Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/livemint-prashant-k-nanda-and-komal-gupta-pension-wont-be-denied-for-want-of-aadhaar-epfo">
    <title>Pension won’t be denied for want of Aadhaar, says EPFO</title>
    <link>https://cis-india.org/internet-governance/news/livemint-prashant-k-nanda-and-komal-gupta-pension-wont-be-denied-for-want-of-aadhaar-epfo</link>
    <description>
        &lt;b&gt;The move is aimed at ensuring that no retired government employee is deprived of pension for want of Aadhaar or failure of fingerprint authentication.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Prashant K. Nanda and Komal Gupta published by &lt;a class="external-link" href="https://www.livemint.com/Politics/J0wTnWuLVVNsejAcJygdRO/Dont-delay-pension-disbursal-in-pretext-of-Aadhaar-linking.html"&gt;Livemint&lt;/a&gt; on April 11, 2018 quoted Pranesh Prakash.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;Tens of thousands of pensioners under the employees pension scheme will not be denied their monthly pension if their Aadhaar authentication fails or they do not have the 12-digit unique ID, the Employees Provident Fund Organisation (EPFO) has indicated.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The retirement fund manager has asked banks and post offices to facilitate pension disbursement without making senior citizens do the rounds.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The move comes after EPFO received several complaints of denial of pension by banks.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;For paying pension to those whose fingerprint authentication fails, “banks may make provisions for iris scanner, along with the fingerprint scanner in bank branches. It has been observed that in many cases, iris authentication is successful even though fingerprint authentication may have failed. This is particularly true for many senior citizens. In such cases, digital life certificate may be generated on the basis of iris authentication and pension may be given,” the EPFO said in a circular on Monday.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;And when both iris and fingerprint authentication are not feasible, “an entry should be made in the exception register with reasons and pension may be provided on the basis of paper life certificate and physical Aadhaar card or E-Aadhaar card of the pensioner after due verification as deemed fit by the bank,” the circular said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The move is aimed at ensuring that no senior citizen is deprived of pension for want of Aadhaar or failure of fingerprint authentication.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Banks have been advised to ensure that benefits of the pension scheme reach the citizens and a proper mechanism for “handling exceptions” is put in place.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Banks should make special arrangements for the bed-ridden, differently abled, or senior citizens who are unable to visit the Aadhaar enrolment centre,” the circular said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;EPFO has also instructed pension disbursing banks and post offices to make necessary arrangements for enrolling pensioners for Aadhaar and to carry out authentication through iris, especially for those who cannot be verified through fingerprints.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The Unique Identification Authority of India (UIDAI) has been under the scanner over the past few months over allegations of access to pension being denied as the fingerprints of the elderly do not match biometrics in the Aadhaar database.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;So far, pensioners had to furnish a life certificate and needed to authenticate it using biometrics.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“The fact that it is coming now means that the Unique Identification Authority of India’s claim in the Supreme Court about no person having been denied any benefit due to the lack of Aadhaar is simply untrue,” said Bengaluru-based Pranesh Prakash, an affiliated fellow with the Yale Law School’s Information Society Project that works on issues related to the intersection of law, technology and society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Prakash, however, welcomed EPFO’s move laying down “a procedure both for those who don’t have an Aadhaar number, as well as those whose biometrics fail for any reason”.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Prakash further said that “as per the UIDAI’s own data, failure rates for iris authentication are higher (8.54%) than for fingerprints (6%). So the utility of pushing for iris authentication is unclear.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There are more than 1.2 billion Aadhaar holders in the country.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/livemint-prashant-k-nanda-and-komal-gupta-pension-wont-be-denied-for-want-of-aadhaar-epfo'&gt;https://cis-india.org/internet-governance/news/livemint-prashant-k-nanda-and-komal-gupta-pension-wont-be-denied-for-want-of-aadhaar-epfo&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-04-10T22:33:39Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/the-week-anita-babu-april-8-2018-it-feeds-on-you">
    <title>It feeds on you!</title>
    <link>https://cis-india.org/internet-governance/news/the-week-anita-babu-april-8-2018-it-feeds-on-you</link>
    <description>
        &lt;b&gt;A robust data protection law can prevent Facebook from manipulating users&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Anita Babu was published as a cover story in &lt;a class="external-link" href="https://www.theweek.in/theweek/cover/2018/03/31/facebook-scandal-robust-data-protection-law.html"&gt;The Week&lt;/a&gt; on April 8, 2018. Pranesh Prakash was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Soon after the Facebook-Cambridge Analytica scandal broke, a meme featuring Donald Duck began circulating on social media. It showed the cartoon character waking up to the news of the data leak, and then going back to sleep realising that the data was as “worthless” as he was.&lt;br /&gt;&lt;br /&gt;The meme struck a chord with the younger generation, which has learnt to laugh at its own triviality. But, what it misses is the fact that technology companies can profit from even the most insignificant set of personal data.&lt;br /&gt;&lt;br /&gt;Facebook, as its users know, is a marketing behemoth in the guise of a social media platform. By using it, people willingly give away information about themselves—their identity, relationship status, places visited and people met, political views, and so on. Facebook collates all this information, which may seem insignificant to an individual user, and then converts it into multiple databases.&lt;br /&gt;&lt;br /&gt;These databases are lucrative, as it helps Facebook target ads at specific individuals or groups. The company earned as much as $40 billion in revenues last year by harvesting ‘worthless’ data. “Data collection at such a granular level is the problem,” said Nikhil Pahwa, Delhi-based digital rights activist and cofounder of Internet Freedom Foundation. “Data once collected is going to get stolen, lost, compromised or sold. Also, the linking of multiple data sets should not be allowed, because, at the end of the day, it has the potential to undermine democracies.”&lt;br /&gt;&lt;br /&gt;The Cambridge Analytica (CA) files have revealed the extent to which tech companies like Facebook and Google profile users. “The fact that micro-targeting of ads were done through Facebook was a known fact,” said Bedasree, copy editor at the education services firm Careers360. “What is dangerous is that data theft can create identical virtual identities, like bots, which is happening. 
Since everything is digitised we would not be able to differentiate the real from the fake. And, there would be no accountability, because you have given your data to almost everyone.”&lt;br /&gt;&lt;br /&gt;Micro-targeted ads have exposed Facebook to allegations of discrimination. For instance, a lawsuit filed in the US last year said companies like Amazon and T-Mobile ran recruitment ads on Facebook, allowing only younger workers to see them. “There is a difference between influencing and manipulating people,” said Pranesh Prakash, policy director at the think tank Centre for Internet and Society. “While ‘influencing’ a person politically is something to be celebrated in a democracy, manipulating someone is dangerous…. The problem is not necessarily the content, but the way it is presented: whether it is done transparently and ethically.”&lt;br /&gt;&lt;br /&gt;So, what does the CA scandal mean to users? “They must understand what they are trading for convenience,” said Mishi Choudhary, technology lawyer and legal director at Software Freedom Law Centre, New York. “The technology package, consisting of smartphones and social media companies, peddles a form of convenience that we are all buying into. This convenience ensures that a form of inhuman social control is established, not only in our buying habits, but in our democracy as well.”&lt;br /&gt;&lt;br /&gt;Facebook’s algorithm to determine a user’s newsfeed—the list of updates that a user sees on her Facebook homepage—is a key tool in establishing this control. According to Facebook, the objective of the newsfeed is “to show you the stories that matter the most to you”. It means Facebook determines what a user should or should not see. Studies have shown that Facebook can tweak the algorithm in such a way that only certain types of stories appear in your newsfeed, thereby influencing your mood and behaviour. “The kind of powers that a company like Facebook has, is dangerous,” said Prakash. 
“Certainly, it is not just Facebook which is problematic in this regard, but all companies with similar business models.”&lt;br /&gt;&lt;br /&gt;In 2016, India campaigned hard for net neutrality and succeeded in stopping Facebook's 'Free Basics', a bundle of free but controlled web services provided by the social media giant. Two years later, the data theft scandal, with Facebook at the heart of it, has put the spotlight on India's need for a robust data protection law.&lt;br /&gt;&lt;br /&gt;Last year, the government appointed a committee of experts under the chairmanship of Justice B.N. Srikrishna to look into the matter. The committee submitted a white paper early this year, which drew criticism from experts for its shortcomings.&lt;br /&gt;&lt;br /&gt;Perhaps the government should take cues from the current discourse on digital rights. The need of the hour, say experts, is a comprehensive, user-centric data protection law rooted in user consent. The government should hold companies liable for any failure to take users' consent and protect their data.&lt;br /&gt;&lt;br /&gt;The laws should focus on the business model of social media companies, which effectively sell people to advertisers. “The value in digital advertising lies in collecting information about peoples’ behaviour, on a scale previously unimagined in the history of humankind,” said Choudhary. “Gram for gram, the smartphone is the densest collection of sensors ever assembled. It’s a spy satellite in your pocket, aimed at you.”&lt;br /&gt;&lt;br /&gt;Perhaps the answer lies in building a technology ecosystem that encourages smaller players to take on giants like Facebook. A key to that would be implementing ‘interoperability’ between social networks. 
Said Hrishikesh Bhaskaran, member of Mozilla India’s Policy and Advocacy Task Force, which works to ensure privacy and data security: “This [interoperability] means that, just like one is able to send mails between Gmail and Yahoo platforms, a user should be able to interact between Facebook and Twitter. There is no technical reason why this cannot be allowed.”&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/the-week-anita-babu-april-8-2018-it-feeds-on-you'&gt;https://cis-india.org/internet-governance/news/the-week-anita-babu-april-8-2018-it-feeds-on-you&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-04-10T16:16:24Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/events/workshop-on-python">
    <title>Workshop on Python</title>
    <link>https://cis-india.org/internet-governance/events/workshop-on-python</link>
    <description>
        &lt;b&gt;A workshop on Python will be organized at the Centre for Internet &amp; Society (CIS) Bangalore office on April 14, 2018. &lt;/b&gt;
        &lt;p&gt;The workshop will be conducted by Bharath Kumar, who works at AppSecCo, Bangalore. He is also volunteering with CIS on the Cyber Security project. &lt;span&gt;Those of you who intend to attend the workshop, please fill out this short questionnaire by Thursday, as Bharath will be using the responses &lt;/span&gt;&lt;span&gt;to finalise the content for the workshop. &lt;a class="external-link" href="https://bharathkumar7.typeform.com/to/JjWE1w"&gt;Fill the questionnaire here&lt;/a&gt;.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/events/workshop-on-python'&gt;https://cis-india.org/internet-governance/events/workshop-on-python&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Python</dc:subject>
    
    
        <dc:subject>Event</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-04-10T14:59:29Z</dc:date>
   <dc:type>Event</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/government-giving-free-publicity-worth-40-k-to-twitter-and-facebook">
    <title>Government gives free publicity worth 40k to Twitter and Facebook </title>
    <link>https://cis-india.org/internet-governance/blog/government-giving-free-publicity-worth-40-k-to-twitter-and-facebook</link>
    <description>
        &lt;b&gt;We conducted a two-week survey of newspapers for links between government advertisements and social media giants. As citizens, we should be worried about the close nexus between the Indian government and digital behemoths such as Facebook, Google and Twitter. It has become apparent to us, after a two-week print media analysis, that our Government has been providing free publicity worth Rs 40,000 to these entities. There are multiple issues with this, as this article attempts to point out.&lt;/b&gt;
        
&lt;p style="text-align: justify;"&gt;&lt;img src="https://cis-india.org/home-images/TotalAdvertisementExpenditure.jpg" alt="null" class="image-inline" title="Total Advertisement Expenditure" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;We analyzed 5 English language newspapers daily for 2 weeks from March 12&lt;sup&gt;th&lt;/sup&gt; to 26&lt;sup&gt;th&lt;/sup&gt;, one week of the newspapers in Lucknow and the second week in Bangalore. Facebook, Twitter, Instagram and Alphabet backed services such as Youtube and Google Plus were part of our survey. Of a total of 33 advertisements (14 in Lucknow+19 in Bangalore), Twitter stands out as the most prominent advertising platform used by government agencies with 30 ads but Facebook at 29 was more expensive. In order to ascertain the rates of publicity, current advertisement rates for Times of India as our purpose was to solely give a rough estimation of how much the government is spending.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Advertising of this nature is not merely an inherent problem of favoring some social media companies over others but also symptomatic of a bigger problem, the lack of our native e-governance mechanisms which cause the Government to rely and promote others. Where we do have guidelines they are not being followed. By outsourcing their e-governance platforms to Twitter such as TwitterSeva, a feature created by the Twitter India team to help citizens connect better with government services, there is less of an impetus to construct better &lt;a class="external-link" href="https://factordaily.com/twitter-helping-india-reboot-public-services-publicly/"&gt;websites of their own&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;If this is so because we currently do not have the capacity to build them ourselves then it is imperative that this changes. We should either be executing government functions on digital infrastructure owned by them or on open and interoperable systems. If anything, the surveyed social media platforms can be used to enhance pre-existing facilities. However, currently the converse is true with these platforms overshadowing the presence of e-governance websites. Officials have started responding to complaints on Twitter, diluting the significance of such complaint mechanisms on their respective department’s portal. Often enough such features are not available on the relevant government website. This sets a dangerous precedent for a citizen management system as the records of such interactions are then in the hands of these companies who may not exist in the future. As a result, they can control the access to such records or worse tamper with them. Posterity and reliability of such data can be ensured only if they are stored within the Government’s reach or if they are open and public with a first copy stored on Government records which ensures transparency as well. Data portability is an important facet to this issue as well as being a right consumers should possess. It provides for support of many devices, transition to alternative technologies and lastly, makes sure that all the data like other public records will be available upon request through the Right to Information procedure. The last is vital to uphold the spirit of transparency envisioned through the RTI process since interactions of government with citizens are then under its ambit and available for disclosure for whomsoever concerned.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Secondly, such practices by the Government are enhancing the monopoly of the companies in the market effectively discouraging competition and eventually, innovation. While a certain elite strata of the population might opt for Twitter or Facebook as their mode of conveying grievance, this may not hold true for the rest of the online India population.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Picking players in a free market is in violation of technology and vendor neutrality, a practice essential in e-governance to provide a level playing field for all and competing technologies. Projecting only a few platforms as de facto mediums of communication with the government inhibits the freedom of choice of citizens to air their grievances through a vendor or technology they are comfortable with. At the same time it makes the Government a mouthpiece for such companies who are gaining free publicity and consolidating their popularity. Government apps such as the SwachBharat one which is an e-governance platform do not offer much more in terms of functionality but either reflect the website or are a less mature version of the same. This leads to the problem of fracturing with many avenues of complaining such as the website, app, Twitter etc. Consequently, the priority of the people dealing with the complaints in terms of platform of response is unsure. Will I be responded to sooner if I tweet a complaint as opposed to putting it up on the app? Having an interoperable system can solve this where the Government can have a dashboard of their various complaints and responses are then made out evenly. Twitter itself could implement this by having complaints from Facebook for example and then the Twitter Seva would be an equal platform as opposed to the current issue where only they are favored.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Recent events have illustrated how detrimental the storage of data by these giants can be in terms of privacy. Data security concerns are also a consequence of such leaks. Not only is this a long overdue call for a better data protection law but at the same time also for the Government to realize that these platforms cannot be trusted. The hiring of Cambridge Analytica to influence voters in the US elections, based on their Facebook profiles and ancillary data, effectively put the governance of the country on sale by exploiting these privacy and security issues. By basing e-governance on their backbone, India is not far from inviting trouble as well. It is unnecessary and dangerous to have a go-between for matters that pertain between an individual and state.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;As this article was being written, it was confirmed by the Election Commission that they are partnering with Facebook for the Karnataka Assemby Elections to promote activities such as encourage enrollment of Voter ID and voter participation. Initiatives like these tying the government even closer to these companies are of concern and cementing the latter’s stronghold.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;&lt;em&gt;Note: Our survey data and results are attached to this post. All research was collected by Shradha Nigam, a Vth year student at NLSIU, Bangalore.&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 style="text-align: justify;"&gt;Survey Data and Results&lt;/h3&gt;
&lt;p style="text-align: justify;"&gt;This report is based on a survey of government advertisements in English language newspapers in relation to their use of social media platforms and dedicated websites (“&lt;strong&gt;Survey&lt;/strong&gt;”). For the purpose of this report, the ambit of the social media platforms has been limited to the use of Facebook, Twitter, YouTube, Google Plus and Instagram. The report was prepared by Shradha Nigam, a student from National Law School of India University, Bangalore. &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/cis-report-on-social-media"&gt;Read the full report here&lt;/a&gt;.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/government-giving-free-publicity-worth-40-k-to-twitter-and-facebook'&gt;https://cis-india.org/internet-governance/blog/government-giving-free-publicity-worth-40-k-to-twitter-and-facebook&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Akriti Bopanna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Google</dc:subject>
    
    
        <dc:subject>Instagram</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    
    
        <dc:subject>Twitter</dc:subject>
    
    
        <dc:subject>YouTube</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Featured</dc:subject>
    
    
        <dc:subject>Google Plus</dc:subject>
    
    
        <dc:subject>Facebook</dc:subject>
    
    
        <dc:subject>Homepage</dc:subject>
    

   <dc:date>2018-04-27T09:52:26Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/raw/indian-express-nishant-shah-april-8-2018-digital-native-delete-facebook">
    <title>Digital Native: Delete Facebook?</title>
    <link>https://cis-india.org/raw/indian-express-nishant-shah-april-8-2018-digital-native-delete-facebook</link>
    <description>
        &lt;b&gt;You can check out any time you like, but you can never leave.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article was &lt;a class="external-link" href="http://indianexpress.com/article/technology/social/digital-native-delete-facebook-5127198/"&gt;published in Indian Express&lt;/a&gt; on April 8, 2018.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One fine day, we all woke up and were told that &lt;/span&gt;&lt;a href="http://indianexpress.com/about/facebook/"&gt;Facebook&lt;/a&gt;&lt;span&gt; sold our data to Cambridge Analytica and then they made dastardly profiles of us to target us with advertisement and political propaganda, so, we made a beeline for #DeleteFacebook. The most surprising part about the expose is how much of a non-event it is. We have been warned, at least since the Edward Snowden revelations, if not earlier, that our data is the new oil, coal and gold. It is being used as a resource, it is being mined from our everyday digital transactions, and it is precious because it can result in a massive social engineering without our consent or knowledge. Ever since Facebook started expanding its domain from being a friends-poke-friends-with-livestock website, we have been warned that the ambition of Facebook was never to connect you with your friends but to be your friend.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;&lt;span&gt;Time and again, we have been told that the sapient Facebook algorithm remembers everything you say and do, anticipates all your future needs, and listens to the most banal litany of your life. More than your mom, your partner or your shrink, it’s the Facebook algorithm which is interested in all your quotidian uselessness. It is not the stranger who accesses your post that should worry you. The biggest perpetrator of privacy violations on Facebook is Facebook itself. There is good reason why a company that offers its prime products for free is valuated as one of the richest corporations in the world. The product of Facebook – it has always been known – is us.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;&lt;span&gt;&lt;span&gt;Why, then, are we suddenly taken aback at the fact that Facebook sold us? And while we are sharing our thoughts (ironically on Facebook) about deleting our profiles, the question that remains is this: How much of your digital life are you willing to erase? Because, and I am sorry if this pricks your filter bubble, Facebook’s problem is not really a Facebook problem. It is almost the entire World Wide Web, where we lost the battle for data ownership and platform openness more than two decades ago. Name one privately owned free service that you use on the internet and I will show you the section in its “terms and services” where you have surrendered your data. In fact, you can’t even find government services, tied up with their private partners, where your data is safe and stored in privacy vaults where it won’t be abused.&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;&lt;span&gt;&lt;span&gt;&lt;span&gt;It is time to realise that the popular ’90s meme “All your base are belong to us” is the lived reality of our digital lives. As we forego ownership for convenience, as our governments sold our sovereignty for profits, and as digital corporations became behemoths that now have the capacity to challenge and write our constitutional and fundamental rights, we are waking up to a battle that has already been fought and resolved. A large part of our physical hardware to access the internet is privately owned. This means that almost all our PCs, tablets, phones, servers are owned and open to exploitation by private companies. Every time your phone does an automatic update or your PC goes into house-cleaning mode, you have to realise that you are being stored, somewhere in the cloud in ways that you cannot imagine.&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;&lt;span&gt;&lt;span&gt;&lt;span&gt;&lt;span&gt;It is tiring to hear this alarm and panic around Facebook’s data trading. Not only is it legal, it is something that has been happening for a while, most of us have been aware of it, and we have resolutely ignored it because, you know, cute cats. If somebody tells you that they are against privately owned physical property and are going to start a revolution to take away all private property and make it equally shared with the public, you would laugh at them because they are arriving at the battle scene after the war is over. This digital wokeness trend to #DeleteFacebook is the digital equivalent of that moment. If you want to fight, fight the governments and nations who can still protect us. Participate in conversations around Internet governance. Take responsibility to educate yourself about the politics of how the digital world operates. But stop trying to feel virtuous because you pulled out of a social media network, pretending that that is the end of the problem.&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/raw/indian-express-nishant-shah-april-8-2018-digital-native-delete-facebook'&gt;https://cis-india.org/raw/indian-express-nishant-shah-april-8-2018-digital-native-delete-facebook&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>nishant</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Social Media</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Facebook</dc:subject>
    
    
        <dc:subject>Researchers at Work</dc:subject>
    

   <dc:date>2018-05-06T03:08:25Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/hindu-businessline-april-6-2018-govt-websites-face-major-outage-hacking-ruled-out">
    <title>Govt websites face major outage; hacking ruled out</title>
    <link>https://cis-india.org/internet-governance/news/hindu-businessline-april-6-2018-govt-websites-face-major-outage-hacking-ruled-out</link>
    <description>
        &lt;b&gt;Defence Minister orders probe.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article was published in the &lt;a class="external-link" href="https://www.thehindubusinessline.com/news/national/govt-websites-face-major-outage-hacking-ruled-out/article23459793.ece"&gt;Hindu Businessline&lt;/a&gt; on April 6, 2018. Sunil Abraham was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;In a sudden outage on Friday, a few key government websites went down, sending officials into a tizzy as rumours of a widespread hacking of portals created panic across the corridors of power.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Ministry of Defence website was the first to go down, with Chinese characters being displayed on the portal’s homepage. Thereafter, one after another, the websites of the Ministries of Home Ministry, Law and Labour and of Central Bureau of Investigation (CBI) went down.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;All the sites were restored by late evening. Late in the day, the National Informatics Centre confirmed that the sites were not hacked. “The site showed what appeared to be a Chinese character and it was understandable that the site was perceived to be hacked . However, it has since been identified that the sites have not been hacked,” an NIC release said.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;‘Technical snag’&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;While the IT Ministry tried to downplay the issue and said that the websites had not been hacked, and that it was a “technical snag”, Defence Minister Nirmala Sitharaman said she had ordered a probe into the matter, hinting that it may have been a case of hacking.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Action is initiated after the hacking of MoD website (http://mod.nic.in). The website shall be restored shortly. Needless to say, every possible step required to prevent any such eventuality in the future will be taken,” Sitharaman said in a tweet.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This is not first time that Indian government websites faced an outage. The government had informed the Lok Sabha earlier this year that over 700 websites linked to the Central and State governments were hacked in the past four years. In February last year, the website of the Ministry of Home Affairs was hacked.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Compromising a government website is a low-value attack, but results in a big win for the attackers in the battle over perception,” Sunil Abraham, Executive Director, Centre for Internet and Society told &lt;em&gt;BusinessLine&lt;/em&gt;. “This usually happens because the server administrator has not configured the software stack properly or is not installing all the security updates in a timely fashion.”&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/hindu-businessline-april-6-2018-govt-websites-face-major-outage-hacking-ruled-out'&gt;https://cis-india.org/internet-governance/news/hindu-businessline-april-6-2018-govt-websites-face-major-outage-hacking-ruled-out&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-04-07T16:17:55Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/hindustan-times-vidhi-choudhary-and-yashwant-raj-facebook-data-breach-hit-over-5-6-lakh-users-in-india">
    <title>Cambridge Analytica row: Facebook data breach hit 560K Indian users</title>
    <link>https://cis-india.org/internet-governance/news/hindustan-times-vidhi-choudhary-and-yashwant-raj-facebook-data-breach-hit-over-5-6-lakh-users-in-india</link>
    <description>
        &lt;b&gt;Facebook said that data and information of 87 million users globally were compromised.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Vidhi Choudhury and Yashwant Raj was published in the &lt;a class="external-link" href="https://www.hindustantimes.com/tech/facebook-data-breach-hit-over-5-6-lakh-users-in-india/story-S3bafNwwKTtO5q6U7S4FZM.html"&gt;Hindustan Times&lt;/a&gt; on April 5, 2018.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;User data of more than 560,000 Indians may have been harvested from Facebook Inc. by British researcher Cambridge Analytica, at the centre of a recent storm over data breaches and potential privacy violations on the social media network.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Only 335 users in India installed the thisisyourdigitallife app developed by academic Aleksandr Kogan and his company Global Science Research Ltd that may have been possibly at the centre of the data breaches, according to Facebook.. The 335 people make up just 0.1% of the app’s total worldwide installs.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Users agreed to take a personality test and have their data collected by the app, which then went on to also access information about the test-takers’ Facebook friends, leading to the accumulation of a much larger data pool. “ We further understand that 562,120 additional people in India were potentially affected, as friends of people who installed the App. This yields a total of 562,455 potentially affected people in India, which is 0.6% of the global number of potentially affected people,” a Facebook spokesperson said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This week, Facebook said data on as many as 87 million people, most of them in the US, may have been improperly shared with Cambridge Analytica, increasing the figure from a previously estimated 50 million.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Protecting people’s information is at the heart of everything we do, and we require the same from people who operate apps on Facebook. Cambridge Analytica’s acquisition of Facebook data through the app developed by Dr. Aleksandr Kogan and his company Global Science Research Limited (“GSR”) happened without our authorization and was an explicit violation of our Platform policies. At no time did Facebook agree to Cambridge Analytica’s use of any Facebook user data that may have been collected by this app, including with respect to users located in India,” the company spokesperson said n an emailed response.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This app first became active on Facebook in November 2013. Facebook removed the app in 2015 when it learnt of violations of its platform policies.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;India is a key market for Facebook with 217 million people using the platform every month.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Details of the number of users in India whose data was compromised was shared by Facebook as part of its response to the government of India. On 28 March, the Ministry of Electronics and Information Technology sent a letter to Facebook asking if data of Indian voters and users had been compromised by Cambridge Analytica or any other affiliate. The government also asked what proactive measures were being taken to ensure the safety, security and privacy of such large user data and to prevent its misuse by any third party. Facebook was asked to respond to the questions by April 7.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Facebook’s chief technology officer Mike Schroepfer said in a blog that ran under his byline on the company’s website that the number of subscribes whose data was shared with the controversial firm was much higher at 87 million than the 50 million it had conceded earlier, “mostly” in the United States.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Subscribers in the Philippines, Indonesia, UK, Mexico and Canada were ahead of Indians, and behind American, in the list of Facebook’s new revelations about compromised information.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;These details came ahead of a conference call with reporters in which Facebook CEO Mark Zuckerberg said he had made a “huge mistake” personally by not focussing on data privacy. Zuckerberg also faced questions for the first time about his suitability to run the company he founded as a college student — dropped out of Harvard. He answered in the affirmative, but questions are beginning to be raised.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Sunil Abraham, founder of the think tank Centre for Internet and Society, said the thisisyourdigitallife app was one of the many apps that access Facebook’s application programming interface. “What Facebook isn’t telling us yet is what are the other apps , (whether they) had the same modus operandi and how much data did they manage to scrape,” he added. The company said on April 4 that it would display a link on the top of users’ News Feed so they can see what apps they use and the information they have shared with those apps. Facebook will also tell people if their information may have been improperly shared with Cambridge Analytica.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/hindustan-times-vidhi-choudhary-and-yashwant-raj-facebook-data-breach-hit-over-5-6-lakh-users-in-india'&gt;https://cis-india.org/internet-governance/news/hindustan-times-vidhi-choudhary-and-yashwant-raj-facebook-data-breach-hit-over-5-6-lakh-users-in-india&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-04-07T16:01:10Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/news-18-subhajit-sengupta-how-just-355-indians-put-data-of-5-6-lakh-facebook-users-at-risk">
    <title>It Took Just 355 Indians to Mine the Data of 5.6 Lakh Facebook Users. Here's How</title>
    <link>https://cis-india.org/internet-governance/news/news-18-subhajit-sengupta-how-just-355-indians-put-data-of-5-6-lakh-facebook-users-at-risk</link>
    <description>
        &lt;b&gt;Data privacy in India is still a nascent subject. Experts say cheap data has led to unprecedented Facebook penetration. Often, it is seen that those who open an account are not aware of the privacy concerns.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The blog post by Subhajit Sengupta was published in &lt;a class="external-link" href="https://www.news18.com/news/india/how-just-355-indians-put-data-of-5-6-lakh-facebook-users-at-risk-1710845.html"&gt;CNN-News 18&lt;/a&gt; on April 7, 2018. Sunil Abraham was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Over 5.6 lakh Indian Facebook profiles have allegedly been compromised and their data leaked to the controversial data analytics firm Cambridge Analytica. As per the company, only 335 people in India installed the app, yet it managed to penetrate over half a million profiles. &lt;br /&gt;&lt;br /&gt;So, how does this work?&lt;br /&gt;&lt;br /&gt;Once a user downloaded the quiz app called “thisisyourdigitallife”, Global Science Research Limited got access to the entire treasure trove of data. Two mechanisms were used for this.&lt;br /&gt;&lt;br /&gt;First, Facebook’s Application Programming Interface (API), called the ‘Social Graph’, allowed any app to harvest the entire contact list and everything else that could be seen on a user’s friends’ profiles. This would happen even for private profiles, says Sunil Abraham, Executive Director of the Bangalore-based research organisation Centre for Internet and Society.&lt;br /&gt;&lt;br /&gt;The second way is when users have a public profile. The algorithm seeks out public profiles from the friend list and goes on multiplying from one public profile to another, without any of the users even coming to know what is happening. This is like the Truecaller application: for it to get your number, you don’t need to have downloaded the software. If anyone who has the app also has your number, it gets automatically logged there.&lt;br /&gt;&lt;br /&gt;Facebook says, "Cambridge Analytica’s acquisition of Facebook data through the app developed by Dr Aleksandr Kogan and his company Global Science Research Limited (GSR) happened without our authorisation and was an explicit violation of our Platform policies."&lt;br /&gt;&lt;br /&gt;GSR continued to access this data from all these Facebook profiles throughout the app’s lifespan on the Facebook platform, roughly two years between 2013 and 2015.
This means that even if a user was careful enough not to download the application, but his/her profile’s privacy settings were weak, the algorithm would infiltrate the data bank.&lt;br /&gt;&lt;br /&gt;Amit Dubey, a cyber security expert, goes into the details of what the app did: “The app called 'thisisyourdigitallife', which was created for research work by Aleksandr Kogan, was eventually used for psychometric profiling of users and then manipulating their political biases. The app was offered to users on the pretext of taking a personality test, and they agreed to have their data collected for academic use only. But the app exploited a security vulnerability of the Facebook application.”&lt;br /&gt;&lt;br /&gt;Facebook’s “platform policy” allowed the collection of friends’ data only to improve user experience in the app, and barred it from being sold or used for advertising.&lt;br /&gt;&lt;br /&gt;But this kind of data scraping is not limited to Cambridge Analytica. Social media algorithms are often abused in the world of data scavenging and analytics. Even law enforcement agencies have often used similar means to locate possible miscreants.&lt;br /&gt;&lt;br /&gt;According to Shesh Sarangdhar, Chief Executive Officer of Seclabs &amp;amp; Systems Pvt Ltd, similar data scraping helped them unearth the terror module behind one of the attacks at an airbase last year. Shesh said that through social media algorithms they would often narrow down on unknown terror modules: his team identified the profile on which the whereabouts of multiple known nodes converged, and that is how the mastermind was located.&lt;br /&gt;&lt;br /&gt;Data privacy in India is still a nascent subject. Experts say cheap data has led to unprecedented Facebook penetration.&lt;br /&gt;&lt;br /&gt;Often, those who open an account are not aware of the privacy concerns. But as Sunil Abraham puts it, caveat emptor, or ‘let the buyer beware’, does not even apply here.
It is not possible for anyone to go through the entire privacy policy.&lt;br /&gt;&lt;br /&gt;“So it is not even right to ask if the consumer can protect his/her own interest. Thus, the state should proactively regulate the industry,” said Abraham.&lt;br /&gt;&lt;br /&gt;Facebook has brought in a number of changes to its privacy settings. It now allows you to remove third-party apps in bulk. This welcome change has come after sustained pressure on the tech giant from users and a number of regulatory bodies across the world.&lt;/p&gt;
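The multiplier effect described above, a few hundred installers exposing over half a million profiles via friends-data access, can be sketched with back-of-the-envelope arithmetic. The per-installer figure below is derived from the reported totals (335 installers, over 5.6 lakh profiles), not a number Facebook published:

```python
# Rough arithmetic behind the friend-list multiplier reported in the article.
# Figures from the article: ~335 direct installers of the app in India,
# over 5.6 lakh (560,000) affected Indian profiles.

direct_installers = 335
affected_profiles = 560_000  # "over 5.6 lakh"

# Average number of profiles each installer exposed through the
# friends-data permission of the old Graph API.
profiles_per_installer = affected_profiles / direct_installers
print(f"Each installer exposed roughly {profiles_per_installer:.0f} profiles")
```

The point of the arithmetic is that every installer's entire friend list was harvestable, so the number of affected users scales with average friend counts, not with the number of people who actually consented.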
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/news-18-subhajit-sengupta-how-just-355-indians-put-data-of-5-6-lakh-facebook-users-at-risk'&gt;https://cis-india.org/internet-governance/news/news-18-subhajit-sengupta-how-just-355-indians-put-data-of-5-6-lakh-facebook-users-at-risk&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Facebook</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-04-07T15:33:46Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access">
    <title>After data leak row, Facebook imposes restrictions on user data access</title>
    <link>https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access</link>
    <description>
&lt;b&gt;MeitY issues notice to Facebook even as experts debate the overall impact on the second largest developer community.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Romita Majumdar and Kiran Rathee was published in &lt;a class="external-link" href="http://www.business-standard.com/article/current-affairs/after-data-leak-row-facebook-imposes-restrictions-on-user-data-access-118040500950_1.html"&gt;Business Standard&lt;/a&gt; on April 6, 2018.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Social media giant &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;has finally reacted to the global storm around its data privacy policies by bringing in a new set of restrictions on developers and data aggregators using the platform for data harvesting.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Two weeks ago we promised to take a hard look at the information apps can use when you connect them to &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook&lt;/a&gt; as well as other data practices. We will remove a developer’s ability to request data people shared with them if it appears they have not used the app in the last 3 months,” said &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook&lt;/a&gt; Chief Technology Officer &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=mark+schroepfer" target="_blank"&gt;Mike Schroepfer&lt;/a&gt; in a blog post.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook&lt;/a&gt;&lt;span&gt; has also disabled the feature to search for a user by their email address or phone number, which had been abused by malicious actors, and has reduced the overall control that apps have over user data.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook&lt;/a&gt; has also submitted its response to the Indian government, saying over 500,000 people in India have been potentially affected by the data breach involving &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=cambridge+analytica" target="_blank"&gt;Cambridge Analytica.&lt;/a&gt; Government sources said that since the social networking firm has now accepted that Indians’ data was compromised, the issue has become much more important and serious. “We will wait for Cambridge Analytica’s reply and then we will take our stand,” said sources in the &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=electronics" target="_blank"&gt;Electronics&lt;/a&gt; and IT Ministry.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Ministry had issued notices to both &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;and Cambridge Analytica, seeking their responses regarding the data breach of Indians and if it was used to influence elections.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The new set of restrictions clamps down on how much data app developers can access on the platform and also prevents third-party data providers from offering targeted marketing services on &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook.&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“India is the second largest &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook&lt;/a&gt; developer base and the restriction on users' data access is going to impact all of them. There will be more scrutiny in &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook&lt;/a&gt; apps, leading to slower approvals. Virality will reduce as explicit consent will be required for accessing friends' data and contacts list,” said Vivek Prakash, CTO and Co-Founder, HackerEarth.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He added that there could be tighter terms of service making developers also liable for unauthorized processing of data that they collect from the apps.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Executive Director of the Centre for Internet and Society Sunil Abraham says that while &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook&lt;/a&gt; says “apps need to agree to strict requirements” and speaks of “tightening our review process”, it is still not clear what these requirements are. “Instead of the promised link to whether user data was accessed by Cambridge Analytica, it would make sense for them to say &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook&lt;/a&gt; holds W number of records across X databases over the time period Y, which totals Z GB, while explaining what these variables stand for,” he said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Consumer data marketing company Hansa Cequity believes that the digital marketing arms of most companies will finally have to consider building their own user databases, given the strict clampdown on third-party data. “Businesses can no more use data from third party aggregators for targeted advertising. Consumer goods and entertainment related brands are likely to face some impact because they depend on access to such data,” said S Swaminathan, Co-Founder and CEO, Hansa Cequity.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some experts also believe that this move might force platforms like Twitter, &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=google" target="_blank"&gt;Google &lt;/a&gt;and &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=youtube" target="_blank"&gt;YouTube &lt;/a&gt;to rethink their policies on how much access they give advertisers and data aggregators to user data. Abraham also added that app developers and their investors have to evaluate business models that depend more on value to user rather than the amount of personal data harvested. The data that has already been harvested by the likes of &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=cambridge+analytica" target="_blank"&gt;Cambridge Analytica &lt;/a&gt;and other unknown parties, however, is beyond user control forever.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access'&gt;https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Facebook</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-04-07T15:30:31Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
