<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">




    



<channel rdf:about="https://cis-india.org/internet-governance/blog/online-anonymity/search_rss">
  <title>We are anonymous, we are legion</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 2701 to 2715.</description>
  
  
  
  
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/artificial-intelligence-literature-review"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/news/first-post-politics-venky-vembu-nov-20-2012-arrests-over-facebook-posts-why-were-on-a-dangerous-slide"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/news/ndtv-news-oct-31-2012-arrested-for-tweeting-legitimate-or-curbing-free-speech"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/news/first-post-india-nov-19-2012-arrest-of-girl-over-thackeray-fb-update-clear-misuse-of-sec-295a"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/medianama-nikhil-pahwa-may-4-2017-around-130-135-m-aadhaar-numbers-published-on-four-sites-alone"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/scroll-may-2-2017-around-13-crore-aadhaar-numbers-easily-available-on-government-portals-says-report"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/are-we-throwing-our-data-protection-regimes-under-the-bus"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/dna-amber-sinha-march-10-2016-are-we-losing-right-to-privacy-and-freedom-of-speech-on-indian-internet"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/economic-times-february-20-2019-are-rss-fears-about-tik-tok-true"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/deccan-herald-october-10-2018-anila-kurian-are-online-shows-obscene"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/livemint-november-22-2018-abhijit-ahaskar-are-connected-tech-toys-too-smart-for-their-own-good"/>
        
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi">
    <title>Artificial Intelligence in Governance: A Report of the Roundtable held in New Delhi</title>
    <link>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi</link>
    <description>
        &lt;b&gt;This report provides an overview of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance, conducted at the India Islamic Cultural Centre in New Delhi on March 16, 2018. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context. The event was attended by participants from academia, civil society, the legal sector, the finance sector, and the government.&lt;/b&gt;
        &lt;p&gt;&lt;span&gt;Event Report: &lt;/span&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/files/ai-in-governance"&gt;Download&lt;/a&gt;&lt;span&gt; (PDF)&lt;/span&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;This report provides a summary of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance (hereinafter referred to as ‘the Roundtable’). The Roundtable took place at the India Islamic Cultural Centre in New Delhi on March 16, 2018 and included participation  from academia, civil society, law, finance, and government. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Roundtable began with a presentation by Amber Sinha (Centre for Internet and Society - CIS) providing an overview of the CIS’s research objectives and findings thus far. During this presentation, he defined both AI and the scope of CIS’s research, outlining the areas of law enforcement, defense, education, judicial decision making, and the discharging of administrative functions as the main areas of concerns for the study. The presentation then outlined the key AI deployments and implementations that have been identified by the research in each of these areas. Lastly, the presentation raised some of the ethical and legal concerns related to this phenomenon.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The presentation was followed by the Roundtable discussion that saw various topics in regards to the usages, challenges, ethical considerations and implications of AI in the sector being discussed. This report has identified a number of key themes of importance evident throughout these discussions.These themes include: (1) the meaning and scope of AI, (2) AI’s sectoral applications, (3) human involvement with automated decision making, (4) social and power relations surrounding AI, (5) regulatory approaches to AI and, (6) challenges to adopting AI. These themes in relation to the Roundtable are explored further below.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Meaning and Scope of AI&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span id="docs-internal-guid-7edcf822-2698-f1fd-35d3-0bcc913c986a"&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;One of the first tasks recommended by the group of participants was to define the meaning and scope of AI and the way those terms are used and adopted today. These concerns included the need to establish a distinction between the use of algorithms, machine learning, automation and artificial intelligence. Several participants believed that establishing consensus around these terms was essential before proceeding towards a stage of developing regulatory frameworks around them.&lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;The general fact agreed to was that AI as we understand it does not necessarily extend to complete independence in terms of automated decision making but it refers instead to the varying levels of machine learning (ML), and the automation of certain processes that has already been achieved. Several concerns that emerged during the course of the discussion centred around the question of autonomy and transparency in the process of ML and algorithmic processing. Stakeholders recommended that over and above the debates of humans in the loop [1] on the loop [2] and out of the loop, [3] there were several other gaps with respect to AI and its usage in the industry today which also need to be considered before building a roadmap for future usage. Key issues like information asymmetries, communication lags, a lack of transparency, the increased mystification of the coding process and the centralization of power all needed to be examined and analysed under the rubric of developing regulatory frameworks.&lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: The group brought out the need for standardization of terminology as well as the establishment of globally replicable standards surrounding the usage, control and proliferation of AI. The discussion also brought up the problems with universal applicability of norms. One of the participants brought up an issue regarding the lack of normative frameworks around the usage and proliferation of AI. Another participant responded to the concern by alluding to the Asilomar AI principles.[4] The Asilomar AI principles are a set of 23 principles aimed at directing and shaping AI research in the future. The discussion brought out further issues regarding the enforceability as well universal applicability of the principles and their global relevance as well. Participants recommended the development of a shorter, more universally applicable regulatory framework that could address various contextual limitations as well.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;AI Sectoral Applications&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span&gt;Participants mentioned a number of both current and potential applications of AI technologies, referencing the defence sector, the financial sector, and the agriculture sector. There are several developments taking place on the Indian military front with the Committee on AI and National Security being established by the Ministry of Defence. Through the course of the discussion it was also stated that the Indian Armed Forces were very interested in the possibilities of using AI for their own strategic and tactical purposes. From a technological standpoint, however, there has been limited progress in India in researching and developing AI. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;While India does deploy some Unmanned Aerial Vehicles (UAVs), they are mostly bought from Israel, and often are not autonomous. It was also pointed out that contrary to reportage in the media, the defence establishment in India is extremely cautious about the adoption of autonomous weapons systems, and that the autonomous technology being rolled out by the CAIR is not yet considered trustworthy enough for deployment.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Discussions further revealed that the few technologies that have a relative degree of autonomy are primarily loitering ammunitions and are used to target radar insulations for reconnaissance purposes. One participant mentioned that while most militaries are interested in deploying AI, it is primarily from an Intelligence, Surveillance and Reconnaissance (ISR) perspective. The only exception to this generalization is China where the military ethos and command structure would work better with increased reliance on independent AI systems. One major AI system rolled out by the US is Project Maven which is primarily an ISR system. The aim of using these systems is to improve decision making and enhance data analysis particularly since battlefields generate a lot of data that isn’t used anywhere.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Another sector discussed was the securities market where algorithms were used from an analytical and data collection perspective. A participant referred to the fact that machine learning was being used for processes like credit and trade scoring -- all with humans on the loop. The participant further suggested that while trade scoring was increasingly automated, the overall predictive nature of such technologies remained within a self limiting capacity wherein statistical models, collected data and pattern analysis were used to predict future trends. The participant questioned whether these algorithms could be considered as AI in the truest sense of the term since they primarily performed statistical functions and data analysis.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One participant also recommended the application of AI to sectors like agriculture with the intention of gradually acclimatizing users to the technology itself. Respondents also stated that while AI technologies were being used in the agricultural space it was primarily from the standpoint of data collection and analysis as opposed to predictive methods. It was mentioned that a challenge to the broad adoption of AI in this sector is the core problem of adopting AI as a methodology – namely information asymmetries, excessive data collection, limited control/centralization and the obfuscatory nature of code – would not be addressed/modified. Lastly, participants also suggested that within the Indian framework not much was being done aside from addressing farmers’ queries and analysing the data from those concerns.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: The discussion drew attention to the various sectors where AI was currently being used -- such as the military space, agricultural development and the securities market -- as well as potential spaces of application -- such as healthcare and manual scavenging. The key challenges that emerged were information asymmetries with respect to the usage of these technologies as well as limited capacity in terms of technological advancement.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Human Involvement with Automated Decision Making&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Large parts of discussions throughout the Roundtable event were preoccupied with automated decision making and specifically, the involvement of humans (human on and in the loop) or lack thereof (human out of the loop) in this process. These discussions often took place with considerations of AI for prescriptive and descriptive uses.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Participants expressed that human involvement was not needed when AI was being used for descriptive uses, such as determining relationships between various variables in large data sets. Many agreed to the superior ability of ML and similar AI technologies in describing large and unorganized datasets. It was the prescriptive uses of AI where participants saw the need for human involvement, with many questioning the technology making more important decisions by itself.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The need for human involvement in automated decision making was further justified by references to various instances of algorithmic bias in the American context. One participant, for example, brought up the use of algorithmic decision making by a school board in the United States for human resource practices (hirings, firing, etc.) based on the standardized test scores of students. In this instance, such practices resulted in the termination of teachers primarily from low income neighbourhoods.[5] The main challenge participants identified in regards to human on the loop automated decision making is the issue of capacity, as significant training would have to be achieved for sectors to have employees actively involved in the automated decision making workflow.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;An example in the context of the healthcare field was brought up by one participant arguing for human in the loop in regards to prescriptive scenarios. The participant suggested that AI technology, when given x-ray or MRI data for example, should only be limited to pointing out the correlations of diseases with patients’ scans/x-rays. Analysis of such correlations should be reserved for the medical expertise of doctors who would then determine if any instances of causality can be identified from this data and if it’s appropriate for diagnosing patients.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;It was emphasized that, despite a preference for human on/in the loop in regards to automated decision making, there is a need to be cognisant of techno-solutionism due to the human tendency of over reliance on technology when making decisions. A need for command and control structures and protocols was emphasized for various governance sectors in order to avoid potentially disastrous results through a checks and balances system. It was noted that the defense sector has already developed such protocols, having established a chain of command due to its long history of algorithmic decision making (e.g. the Aegis Combat System being used by the US Navy in the 1980s).&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One key reason why militaries prefer human in and on the loop systems as opposed to out of the loop systems is because of the protocol associated with human action on the battlefield. International Humanitarian Law has clear indicators of what constitutes a war crime and who is to be held responsible in the scenario but developing such a framework with AI systems would be challenging as it would be difficult to determine which party ought to be held accountable in the case of a transgression or a mistake.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: It was reiterated by many participants that neither AI technology or India’s regulatory framework is at a point where AI can be trusted to make significant decisions alone -- especially when such decisions are evaluating humans directly. It was recommended that human out of the loop decision making should be reserved for descriptive practices whereas human on and in the loop decision making should be used for prescriptive practices. Lastly, it was also suggested that appropriate protocols be put in place to direct those involved in the automated decision making workflow. Particularly when the process involves judgements and complex decision making in sectors such as jurisprudence and the military.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;The Social and Power Relations Surrounding AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some participants emphasized the need to contextualize discussions of AI and governance within larger themes of poverty, global capital and power/social relations. Their concerns were that the use of AI technologies would only create and reinforce existing power structures and should instead be utilized towards ameliorating such issues. Manual scavenging, for example, was identified as an area where AI could be used to good effect if coupled with larger socio-political policy changes. There are several hierarchies that could potentially be reinforced through this process and all these failings needed to be examined thoroughly before such a system was adopted and incorporated within the real world.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Furthermore the discussion also revealed that the objectivity attributed to AI and ML tends to gloss over the fact that there are nonetheless implicit biases that exist in the minds of the creators that might work themselves into the code. Fears regarding technology recreating a more exclusionary system were not entirely unfounded as participants pointed out the fact that the knowledge base of the user would determine whether technology was used as a tool of centralization or democratization.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One participant also questioned the concept of governance itself, contrasting the Indian government’s usage of the term in the 1950s (as it appears in the Directive Principle) with that of the World Bank in the 1990s.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some participants emphasized the need to contextualize discussions of AI and governance within larger themes of poverty, global capital and power/social relations. Their concerns were that the use of AI technologies would only create and reinforce existing power structures and should instead be utilized towards ameliorating such issues. Manual scavenging, for example, was identified as an area where AI could be used to good effect if coupled with larger socio-political policy changes. There are several hierarchies that could potentially be reinforced through this process and all these failings needed to be examined thoroughly before such a system was adopted and incorporated within the real world.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Furthermore the discussion also revealed that the objectivity attributed to AI and ML tends to gloss over the fact that there are nonetheless implicit biases that exist in the minds of the creators that might work themselves into the code. Fears regarding technology recreating a more exclusionary system were not entirely unfounded as participants pointed out the fact that the knowledge base of the user would determine whether technology was used as a tool of centralization or democratization. &lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One participant also questioned the concept of governance itself, contrasting the Indian government’s usage of the term in the 1950s (as it appears in the Directive Principle) with that of the World Bank in the 1990s. &lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: Discussions of the implementation and deployment of AI within the governance landscape should attempt to take into consideration larger power relations and concepts of equity.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Regulatory Approaches to AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Many recognized the need for AI-specific regulations across Indian sectors, including governance. These regulations, participants stated, should draw from notions of accountability, algorithmic transparency and efficiency. Furthermore, it was also stated that such regulations should consider the variations across the different legs of the governance sector, especially in regards to defence. One participant, pointing to the larger trends towards automation, recommended the establishment of certain fundamental guidelines aimed at directing the applicability of AI in general. The participant drew attention to the need for a robust evaluation system for various sectors (the criminal justice system, the securities market, etc.) as a way of providing checks on algorithmic biases. Another emphasized for the need of regulations for better quality data as to ensure machine readability and processiblity for various AI systems.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Another key point that emerged was the importance of examining how specific algorithms performed processes like identification or detection. A participant recommended the need to examine the ways in which machines identify humans and what categories/biases could infiltrate machine-judgement. They reiterated that if a new element was introduced in the system, the pre-existing variables would be impacted as well. The participant further recommended that it would be useful to look at these systems in terms of the couplings that get created in order to determine what kinds of relations are fostered within that system.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The roundtable saw some debate regarding the most appropriate approach to developing such regulations. Some participants argued for a harms-based approach, particularly in regards to determining if regulations are needed all together for specific sectors (as opposed to guidelines, best practices, etc.). The need to be cognisant of both individual and structural harms was emphasized, mindful of the possibility of algorithmic biases affecting traditionally marginalized groups.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Others only saw value in a harms based approach insomuch that it could help outline the appropriate penalties in an event of regulations being violated, arguing instead for a rights-based approach as it enabled greater room for technological changes. An approach that kept in mind emerging AI technologies was reiterated by a number of participants as being crucial to any regulatory framework. The need for a regulatory space that allowed for technological experimentation without the fear of constitutional violation was also communicated.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: The need for a AI-specific regulatory framework cognisant of differentiations across sectors in India was emphasized. There is some debate about the most appropriate approach for such a framework, a harms-based approach being identified by many as providing the best perspective on regulatory need and penalties. Some identified the rights-based approach as providing the most flexibility for an rapidly evolving technological landscape.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Challenges to Adopting AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Out of all the concerns regarding the adoption of algorithms, ML and AI, the two key points of resistance that emerged, centred around issues of accountability and transparency. Participants suggested that within an AI system, predictability would be a key concern, and in the absence of predictable outcomes, establishing redressal mechanisms would pose key challenges as well.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p id="_mcePaste"&gt;A discussion was also initiated regarding the problems involved in attributing responsibility within the AI chain as well as the need to demystify the process of using AI in daily life. While reiterating the current landscape, participants spoke about how the usage of AI is currently limited to the automation of certain tasks and processes in certain sectors where algorithmic processing is primarily used as a tool of data collection and analysis as opposed to an independent decision making tool.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste"&gt;One of the suggestions and thought points that emerged during the discussion was whether a gradual adoption of AI on a sectoral basis might be more beneficial as it would provide breathing room in the middle to test the system and establish trust between the developers, providers, and consumers. This prompted a debate about the controllers and the consumers of AI and how the gap between the two would need to be negotiated. The debate also brought up larger concerns regarding the mystification of AI as a process itself and the complications of translating the code into communicable points of intervention.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste"&gt;Another major issue that emerged was the question of attribution of responsibility in the case of mistakes. In the legal process as it currently exists, human imperfections notwithstanding, it would be possible to attribute the blame for decisions taken to certain actants undertaking the action. Similarly in the defence sector, it would be possible to trace the chain of command and identify key points of failure, but in the case of AI based judgements, it would be difficult to place responsibility or blame. This observation led to a debate regarding accountability in the AI chain. It was inconclusive whether the error should be attributed to the developer, the distributor or the consumer.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste" style="text-align: justify; "&gt;A suggestion that was offered in order to counter the information asymmetry as well as reduce the mystification of computational method was to make the algorithm and its processes transparent. This sparked a debate, however, as participants stated that while such a state of transparency ought to be sought after and aspired towards, it would be accompanied by certain threats to the system. A key challenge that was pointed out was the fact that if the algorithm was made transparent, and its details were shared, there would be several ways to manipulate it, translate it and misuse it.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste" style="text-align: justify; "&gt;Another question that emerged was the distribution of AI technologies and the centralization of the proliferation process particularly in terms of service provision. One participant suggested that given the limited nature of research being undertaken and the paucity of resources, a limited number of companies would end up holding the best tech, the best resources and the best people. They further suggested that these technologies might end up being rolled out as a service on a contractual basis. In which case it would be important to track how the service was being controlled and delivered. Models of transference would become central points of negotiation with alternations between procurement based, lease based, and ownership based models of service delivery. Participants suggested that this was going to be a key factor in determining how to approach these issues from a legal and policy standpoint.&lt;/p&gt;
&lt;div&gt;&lt;/div&gt;
&lt;p style="text-align: justify; "&gt;A discussion was also initiated regarding the problems involved in attributing responsibility within the AI chain as well as the need to demystify the process of using AI in daily life. While reiterating the current landscape, participants spoke about how the usage of AI is currently limited to the automation of certain tasks and processes in certain sectors where algorithmic processing is primarily used as a tool of data collection and analysis as opposed to an independent decision making tool.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One of the suggestions and thought points that emerged during the discussion was whether a gradual adoption of AI on a sectoral basis might be more beneficial as it would provide breathing room in the middle to test the system and establish trust between the developers, providers, and consumers. This prompted a debate about the controllers and the consumers of AI and how the gap between the two would need to be negotiated. The debate also brought up larger concerns regarding the mystification of AI as a process itself and the complications of translating the code into communicable points of intervention.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another major issue that emerged was the question of attribution of responsibility in the case of mistakes. In the legal process as it currently exists, human imperfections notwithstanding, it would be possible to attribute the blame for decisions taken to certain actants undertaking the action. Similarly in the defence sector, it would be possible to trace the chain of command and identify key points of failure, but in the case of AI based judgements, it would be difficult to place responsibility or blame. This observation led to a debate regarding accountability in the AI chain. It was inconclusive whether the error should be attributed to the developer, the distributor or the consumer.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A suggestion that was offered in order to counter the information asymmetry as well as reduce the mystification of computational method was to make the algorithm and its processes transparent. This sparked a debate, however, as participants stated that while such a state of transparency ought to be sought after and aspired towards, it would be accompanied by certain threats to the system. A key challenge that was pointed out was the fact that if the algorithm was made transparent, and its details were shared, there would be several ways to manipulate it, translate it and misuse it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another question that emerged was the distribution of AI technologies and the centralization of the proliferation process particularly in terms of service provision. One participant suggested that given the limited nature of research being undertaken and the paucity of resources, a limited number of companies would end up holding the best tech, the best resources and the best people. They further suggested that these technologies might end up being rolled out as a service on a contractual basis. In which case it would be important to track how the service was being controlled and delivered. Models of transference would become central points of negotiation with alternations between procurement based, lease based, and ownership based models of service delivery. Participants suggested that this was going to be a key factor in determining how to approach these issues from a legal and policy standpoint.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Takeaway Point: The two key points of resistance that emerged during the course of discussion were accountability and transparency. Participants pointed out the various challenges involved in attributing blame within the AI chain and they also spoke about the complexities of opening up AI code, thereby leaving it vulnerable to manipulation. Certain other challenges that were briefly touched upon were the information asymmetry, excessive data collection, centralization of power in the hands of the controllers and complicated service distribution models.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Conclusion&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The Roundtable provided some insight into larger debates regarding the deployment and applications of AI in the governance sector of India. The need for a regulatory framework as well as globally replicable standards surrounding AI was emphasized, particularly one mindful of the particular needs of differing fields of the governance sector (especially defence). Furthermore, a need for human on/in the loop practices with regards to automated decision making was highlighted for prescriptive instances, particularly when such decisions are responsible for directly evaluating humans. Contextualising AI within its sociopolitical parameters was another key recommendation as it would help filter out the biases that might work themselves into the code and affect the performance of the algorithm. Further, it is necessary to see the involvement and influence of the private sector in the deployment of AI for governance, it often translating into the delivery of technological services from private actors to public bodies towards discharge of public functions. This has clear implications for requirements of transparency  and procedural fairness even in private sector delivery of these services. Defining the meaning and scope of AI while working to demystify algorithms themselves would serve to strengthen regulatory frameworks as well as make AI more accessible for the user / consumer.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;[1]. Automated decision making model where final decisions are made by a human operator&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[2]. Automated decision making model where decisions can be made without human involvement but a human can override the system.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[3]. A completely autonomous decision making model requiring no human involvement&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[4]. https://futureoflife.org/ai-principles/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[5]. The participant was drawing this example from Cathy O’Neil’s Weapons of Math Destruction, (Penguin,2016), at 4-13.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi'&gt;https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Saman Goudarzi and Natallia Khaniejo</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-05-03T15:49:40Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation">
    <title>Artificial Intelligence for India's Transformation</title>
    <link>https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation</link>
    <description>
        &lt;b&gt;ASSOCHAM's 3rd International Conference was organized at Hotel Imperial in New Delhi. Amber Sinha participated in a session on the use, impact, and ethics of AI.&lt;/b&gt;
        &lt;p&gt;Click to &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/ai-in-ethics-agenda/view"&gt;view the agenda&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation'&gt;https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-03-20T01:38:48Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation">
    <title>Artificial Intelligence for Growth: Leveraging AI and Robotics for India's Economic Transformation</title>
    <link>https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation</link>
    <description>
        &lt;b&gt;Amber Sinha took part in the second international conference organized by ASSOCHAM at Hotel Shangri-La in New Delhi on April 27, 2018.&lt;/b&gt;
        &lt;h3&gt;Keynote Address&lt;/h3&gt;
&lt;p&gt;12.15 p.m. - 12.30 p.m.: Shri Gopalakrishnan S., Joint Secretary, Ministry of Electronics and IT, Government of India&lt;/p&gt;
&lt;h3&gt;Special Address&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;12.30 p.m. - 12.45 p.m.: Dr. Pushpak Bhattacharyya, Director and Professor, Computer Science and Engg, IIT Patna and Chairman, BIS Committee for Standardisation in Artificial Intelligence&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Panel Discussion&lt;/h2&gt;
&lt;h3&gt;Session Moderator&lt;/h3&gt;
&lt;p&gt;12.45 p.m. - 1.40 p.m.: Shri Sudipta Ghosh, India Leader, Data and Analytics, PwC&lt;/p&gt;
&lt;h3&gt;Panelists&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Shri Amber Sinha, Senior Programme Manager, Centre for Internet and Society&lt;/li&gt;
&lt;li&gt;Shri Utpal Chakraborty, Lead Architect - AI, L&amp;amp;T Infotech&lt;/li&gt;
&lt;li&gt;Shri Atul Rai, CEO &amp;amp; Co-Founder, Staqu Technologies&lt;/li&gt;
&lt;li&gt;Shri Prabhat Manocha, IBM&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation'&gt;https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-05-05T09:08:07Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative">
    <title>Artificial Intelligence and Data Initiative</title>
    <link>https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative</link>
    <description>
        &lt;b&gt;On 3 May 2019 Arindrajit Basu attended a meeting of the Artificial Intelligence and Data Initiative held at IIC in Delhi. He is a member of the Working Group and is co-authoring a report with Anindya Chaudhuri of the Global Development Network on the prospect of collaborations in public uses of AI.&lt;/b&gt;
        &lt;p&gt;The agenda can be &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/artificial-intelligence-and-data-initiative"&gt;viewed here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative'&gt;https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-05-14T15:06:02Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/artificial-intelligence-literature-review">
    <title>Artificial Intelligence - Literature Review</title>
    <link>https://cis-india.org/internet-governance/blog/artificial-intelligence-literature-review</link>
    <description>
        &lt;b&gt;With origins dating back to the 1950s, Artificial Intelligence (AI) is not necessarily new. However, interest in AI has been rekindled over the last few years, in no small measure due to the rapid advancement of the technology and its applications to real-world scenarios. In order to create policy in the field, understanding the literature regarding existing legal and regulatory parameters is necessary. This Literature Review is the first in a series of reports that seeks to map the development of AI, both generally and in specific sectors, culminating in a stakeholder analysis and contributions to policy-making. This Review analyses literature on the historical development of the technology, its compositional makeup, sector-specific impacts and solutions and, finally, overarching regulatory solutions.&lt;/b&gt;
        &lt;p&gt;Edited by Amber Sinha and Udbhav Tiwari; Research Assistance by Sidharth Ray&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;With origins dating back to the 1950s Artificial Intelligence (AI) is not necessarily new. With an increasing number of real-world implications over the last few years, however, interest in AI has been reignited over the last few years.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The rapid and dynamic pace of development of AI have made it difficult to predict its future path and is enabling it to alter our world in ways we have yet to comprehend. This has resulted in law and policy having stayed one step behind the development of the technology.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Understanding and analyzing existing literature on AI is a necessary precursor to subsequently recommending policy on the matter. By examining academic articles, policy papers, news articles, and position papers from across the globe, this literature review aims to provide an overview of AI from multiple perspectives.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The structure taken by the literature review is as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Overview of historical development&lt;/li&gt;
&lt;li&gt;Definitional and compositional analysis&lt;/li&gt;
&lt;li&gt;Ethical &amp;amp; Social, Legal, Economic and Political impact and sector-specific solutions&lt;/li&gt;
&lt;li&gt;The regulatory way forward&lt;/li&gt;
&lt;/ol&gt;
&lt;p style="text-align: justify; "&gt;This literature review is a first step in understanding the existing paradigms and debates around AI before narrowing the focus to more specific applications and subsequently, policy-recommendations.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/artificial-intelligence-literature-review"&gt;&lt;b&gt;Download the full literature review&lt;/b&gt;&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-literature-review'&gt;https://cis-india.org/internet-governance/blog/artificial-intelligence-literature-review&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shruthi Anand</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-12-18T15:12:52Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/news/first-post-politics-venky-vembu-nov-20-2012-arrests-over-facebook-posts-why-were-on-a-dangerous-slide">
    <title>Arrests over Facebook posts: Why we’re on a dangerous slide</title>
    <link>https://cis-india.org/news/first-post-politics-venky-vembu-nov-20-2012-arrests-over-facebook-posts-why-were-on-a-dangerous-slide</link>
    <description>
        &lt;b&gt;The most bizarre thing about the arrest of Shaheen Dhada and Renu Srinivasan on Monday over  a Facebook post that questioned the wisdom of a bandh to mark Shiv Sena leader Bal Thackeray‘s death is that no laws were actually violated by the post.&lt;/b&gt;
        &lt;hr /&gt;
&lt;p&gt;Venky Vembu's &lt;a class="external-link" href="http://www.firstpost.com/politics/arrests-over-facebook-posts-why-were-on-a-dangerous-slide-528537.html"&gt;article was published in FirstPost&lt;/a&gt; on November 20, 2012. Pranesh Prakash is quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;In tone and in content, the post is remarkably restrained, particularly when compared to the rather more incendiary messages that  are commonplace on social media platforms. Nor was it even halfways defamatory in the way that many rants on Twitter and Facebook have unfortunately come to be.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Yet, the Mumbai police appear to have cravenly capitulated in the face of some arm-twisting by a local Sena strongman and gone ahead to arrest the two young women on charges that seem laughable even given the extraordinarily sweeping, catch-all clauses of the Information Technology Act.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It is hard to see how Shaheen Dhada violated the two sections of the law under which she has been charged – Section 295A of the Indian Penal Code (“outraging religous feelings of any class”) or even the draconian Section 66A of the IT Act (“sending offensive messages through communication service, etc.”) – with her contemplative post, or what crimes Renu Srinivasan committed in merely ‘liking’ the post.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;But it is a sign of the disquieting nature of the provision of the law, and the perverse manner in which it is being implemented, that there weren’t adequate checks and balances to inhibit the wilful deployment  of the law on such frivolous grounds. Ironically, the goons who actually wrecked the clinic of Dhada’s uncle haven’t been called to account.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;If that is bad enough, it is doubly perverse  for Kapil Sibal to claim in all innocence that he is “deeply saddened” by the arrest of the two young women and to insinuate that the IT Act, which he was instrumental in passing, was being misused on grounds of improper implementation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The fact of it is that the IT Act that he fathered, and particularly &lt;a href="https://cis-india.org/internet-governance/resources/section-66A-information-technology-act" target="_blank"&gt;the notorious Section 66A&lt;/a&gt;, was deliberately worded to give maximum potential for mischief. There have been far too many egregious instances of its misuse by discredited governments and politicians for Sibal to claim that these are random incidents of misuse of the law. Just last month, Finance Minister P Chidambaram’s son Karti had a Puducherry businessmen and anti-corruption activist hauled up by the police for a Twitter post in which the businessman alleged that Karti had “amassed more wealth” than &lt;a href="http://www.firstpost.com/topic/person/sonia-gandhi-profile-2030.html" target="_self"&gt;Sonia Gandhi&lt;/a&gt;‘s son-in-law.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It’s important to get a sense of why the latest arrests take us further on the slippery slope towards curtailing free speech. Justice Markandeya Katju has repeatedly pointed to the egregious encroachment on the freedom of speech by this provision of law, and has been vocal in calling both  politicians and policemen to account whenever the law is abused in this manner.&lt;/p&gt;
&lt;p&gt;“It is absurd to say that protesting against the bandh hurts religious sentiments,” Katju observed in a letter to the Maharashtra Chief Minister. “Under Article 19 of our Constitution, freedom of speech is guaranteed fundamental right. We are living in a democracy, not a fascist dictatorship.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;If anything, Katju argued, “this arrest itself appears to be a criminal act since under Sections 341 and 342, it is a crime to wrongfully arrest or wrongfully confine someone who has committed no crime.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;As Pranesh Prakash at the Centre for Internet and Society &lt;a href="http://kafila.org/2012/11/19/social-media-regulation-vs-suppression-of-freedom-of-speech-pranesh-prakash/" target="_blank"&gt;points out&lt;/a&gt;, in the context of Monday’s arrests, “This should not be seen merely as ‘social media regulation’, but as a restriction on freedom of speech and expression by both the law and the police.” Section 66A, he says, makes certain kinds of speech-activities (“causing annoyance”) illegal if communicated online, but legal if that same speech-activity is published in a newspaper.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This distinction is important, Prakash notes, since the mere fact that it was a Facebook status update “should not grant Shaheen Dhada any special immunity”. If anything, it is the fact that her update is not  punishable under Section 295 of the IPC or of Section 66A of the IT Act that should give her the immunity, he adds.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;With each instance in which Section 66A of the IT Act is being invoked, the potential for mischief embedded in the law is being exposed. Monday’s arrests – of two young women for crimes they did not even commit – are the most brazen instance of their abuse.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Of course, the perverse provision of law has been abused in the real world through selective and arbitrary invocation of the law. But the original sin lies in the law itself. It is the most potent threat to free speech online, and if the law isn’t amended to throw out these perverse provisions, India can kiss goodbye to any lingering pretensions to being a democracy of any sort.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/news/first-post-politics-venky-vembu-nov-20-2012-arrests-over-facebook-posts-why-were-on-a-dangerous-slide'&gt;https://cis-india.org/news/first-post-politics-venky-vembu-nov-20-2012-arrests-over-facebook-posts-why-were-on-a-dangerous-slide&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Public Accountability</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Censorship</dc:subject>
    

   <dc:date>2012-11-20T11:47:43Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/news/ndtv-news-oct-31-2012-arrested-for-tweeting-legitimate-or-curbing-free-speech">
    <title>Arrested for tweeting: Legitimate or Curbing Free Speech?</title>
    <link>https://cis-india.org/news/ndtv-news-oct-31-2012-arrested-for-tweeting-legitimate-or-curbing-free-speech</link>
    <description>
        &lt;b&gt;As a man in Puducherry is arrested for allegedly posting on Twitter that P Chidambaram's son had amassed wealth more than that of Robert Vadra, we discuss whether freedom of speech is absolute.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Sunil Abraham along with Shivam Vij, Journalist and Blogger, SB Mishra, Additional DCP, Census Wing, Economic Offence Wing, Delhi Police, and Sanjay Pinto, Advocate, Madras High Court participated in this discussion aired in NDTV on October 31, 2012.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="external-link" href="http://www.ndtv.com/video/player/news/arrested-for-tweeting-legitimate-or-curbing-free-speech/253035"&gt;Watch the full video on NDTV&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/news/ndtv-news-oct-31-2012-arrested-for-tweeting-legitimate-or-curbing-free-speech'&gt;https://cis-india.org/news/ndtv-news-oct-31-2012-arrested-for-tweeting-legitimate-or-curbing-free-speech&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Video</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2012-11-02T06:09:57Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/news/first-post-india-nov-19-2012-arrest-of-girl-over-thackeray-fb-update-clear-misuse-of-sec-295a">
    <title>Arrest of girl over Thackeray FB update a clear misuse of Sec 295A</title>
    <link>https://cis-india.org/news/first-post-india-nov-19-2012-arrest-of-girl-over-thackeray-fb-update-clear-misuse-of-sec-295a</link>
    <description>
        &lt;b&gt;The arrest of 21-year-old Shaheen Dhada over her Facebook status update questioning the shutdown of Mumbai over Shiv Sena supremo Bal Thackeray‘s death, is a clear misapplication of section 295 A of the Indian Penal Code (“outrage religious feelings of any class”), according to Pranesh Prakash of the Centre for Internet and Society.&lt;/b&gt;
        &lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The article was &lt;a class="external-link" href="http://www.firstpost.com/india/arrest-of-girl-over-thackeray-fb-update-clear-misuse-of-sec-295a-527779.html"&gt;published in FirstPost &lt;/a&gt;on November 19, 2012. Pranesh Prakash is quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;In comments to Firstpost, Prakash said that this law had been misused numerous times in the state of Maharashtra.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Even the banning of James Laine’s book &lt;i&gt;Shivaji&lt;/i&gt; happened under section 295 A, and the ban was subsequently held to have been unlawful. What makes this seem ironic, and almost a parodic news report, is the fact that &lt;a href="http://www.firstpost.com/topic/person/bal-thackeray-profile-22424.html" target="_blank"&gt;Bal Thackeray&lt;/a&gt; probably violated this provision more times than most other politicians, but was only charged under it once or twice”, he said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Dhada’s status update reportedly read, “People like Thackeray are born and die daily and one should not observe a bandh for that.” A friend of hers who ‘liked’ the comment was also arrested.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Prakash said that the arrest called for a discussion on the regulation of speech and expression. “It being a Facebook status update should not grant it any special immunity; the fact of that update not being punishable under s.295 A should! It isn’t regulation of social media that needs to be discussed, but regulation of speech and expression”, he said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;News of the arrest has understandably drawn a lot of attention on social media, and forums like Facebook and Twitter reflected outrage at the news.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The &lt;i&gt;Times of India&lt;/i&gt; &lt;a href="http://m.timesofindia.com/city/mumbai/Sainiks-belie-Mumbais-fears-keep-the-peace-in-last-walk-with-general/articleshow/17274802.cms" target="_blank"&gt;also reported &lt;/a&gt;that a mob of Shiv Sena workers attacked and ransacked the girl’s uncle’s orthopaedic clinic at Palghar, even though she withdrew her comment and apologised.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/news/first-post-india-nov-19-2012-arrest-of-girl-over-thackeray-fb-update-clear-misuse-of-sec-295a'&gt;https://cis-india.org/news/first-post-india-nov-19-2012-arrest-of-girl-over-thackeray-fb-update-clear-misuse-of-sec-295a&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Public Accountability</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2012-11-20T12:00:53Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/medianama-nikhil-pahwa-may-4-2017-around-130-135-m-aadhaar-numbers-published-on-four-sites-alone">
    <title>Around 130-135M Aadhaar Numbers published on 4 sites alone</title>
    <link>https://cis-india.org/internet-governance/news/medianama-nikhil-pahwa-may-4-2017-around-130-135-m-aadhaar-numbers-published-on-four-sites-alone</link>
    <description>
        &lt;b&gt;“Therefore, there is no data leak, there is no systematic problem, but, if any one tries to be smart, the law ignites into action.” – Ravi Shankar Prasad, IT Minister, in the Rajya Sabha, on 10th April 2017.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The blog post by Nikhil Pahwa was &lt;a class="external-link" href="http://www.medianama.com/2017/05/223-aadhaar-numbers-data-leak/"&gt;published by Medianama&lt;/a&gt; on May 4, 2017.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;Details of around 130-135 million Aadhaar Numbers, and around 100  million bank numbers have been leaked online by just four government  schemes alone: the National Social Assistance Programme, the National  Rural Employment Guarantee Scheme (NREGA), Daily Online Payments Reports  under NREGA (Govt of Andhra Pradesh), and the Chandranna Bima Scheme  (Govt of Andhra Pradesh), as per a research report from the Centre for  Internet and Society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Download the report &lt;a href="http://cis-india.org/internet-governance/information-security-practices-of-aadhaar-or-lack-thereof-a-documentation-of-public-availability-of-aadhaar-numbers-with-sensitive-personal-financial-information/at_download/file" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/b&gt; Read full story on &lt;a class="external-link" href="http://www.medianama.com/2017/05/223-aadhaar-numbers-data-leak/"&gt;Medianama website&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/medianama-nikhil-pahwa-may-4-2017-around-130-135-m-aadhaar-numbers-published-on-four-sites-alone'&gt;https://cis-india.org/internet-governance/news/medianama-nikhil-pahwa-may-4-2017-around-130-135-m-aadhaar-numbers-published-on-four-sites-alone&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-05-20T10:52:26Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/scroll-may-2-2017-around-13-crore-aadhaar-numbers-easily-available-on-government-portals-says-report">
    <title>Around 13 crore Aadhaar numbers easily available on government portals, says report</title>
    <link>https://cis-india.org/internet-governance/news/scroll-may-2-2017-around-13-crore-aadhaar-numbers-easily-available-on-government-portals-says-report</link>
    <description>
        &lt;b&gt;A report by The Centre for Internet and Society claimed that around 13 crore Aadhaar numbers and 10 crore bank account numbers were easily accessible on four government portals built to oversee welfare schemes. The document, released on Monday, pointed out that though it is illegal to reveal Aadhaar numbers, the government portals examined made it easy for anyone to access them, as well as other data about beneficiaries of welfare schemes including in many cases their bank account numbers. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;This was &lt;a href="https://scroll.in/latest/836271/around-13-crore-aadhaar-numbers-easily-available-on-government-portals-says-report"&gt;published by Scroll.in&lt;/a&gt; on May 2, 2017.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/information-security-practices-of-aadhaar-or-lack-thereof-a-documentation-of-public-availability-of-aadhaar-numbers-with-sensitive-personal-financial-information-1"&gt;The report&lt;/a&gt; suggests that the Aadhaar numbers       leaked could actually be closer to 23 crore, if most of the       government portals connected to direct benefit transfers used the       same negligent standards for storing data as the ones examined.       “It is extremely irresponsible on the part of the UIDAI [Unique       Identification Authority of India], the sole governing body for       this massive project, to turn a blind eye to the lack of standards       prescribed for how other bodies shall deal with such data, such       cases of massive public disclosures of this data, and the myriad       ways in which it may used for mischief,” the authors of the report       said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The document also pointed out that the breaches       are an indicator of “potentially irreversible privacy harm” and       said the data could be used for financial fraud. The report       authored by Amber Sinha and Srinivas Kodali studied the National       Social Assistance Programme, National Rural Employment Guarantee       Scheme, Andhra Pradesh government’s Chandranna Bima Scheme and       Andhra Pradesh’s Daily Online Payment Reports of NREGA.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While the report said the Aadhaar initiative as a       concept may be praiseworthy, the absence of adequate security       could prove disastrous. “Sensitive personal identity information       such as Aadhaar number, caste, religion, address, photographs and       financial information are only a few clicks away and suggest how       poorly conceived these initiatives are,” the report said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Centre had, on April 25, cautioned states &lt;a href="https://scroll.in/latest/835658/centre-cautions-states-against-leak-of-aadhaar-data"&gt;against         leaking Aadhaar information&lt;/a&gt;, after it emerged that a &lt;a href="https://scroll.in/article/835546/the-centres-casual-response-to-aadhaar-data-breaches-spells-trouble"&gt;number         of government websites&lt;/a&gt; were making it easy for people to       access individuals’ Aadhaar numbers. The Unique Identification       Authority of India also &lt;a href="https://scroll.in/latest/835056/uidai-files-firs-against-eight-websites-for-offering-aadhaar-enrolment-services-illegally"&gt;filed&lt;/a&gt; First Information Reports against eight private websites for       collecting Aadhaar-related data from citizens in an unauthorised       manner on April 19, but no such action appears to have been taken       against government websites so far.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to government data, the UIDAI has       issued 112 crore Aadhaar numbers so far and has maintained that       its biometrics database is tamper-proof, although it is up to       various other authorities to maintain the secrecy of Aadhaar data       collected or kept by them.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On April 21, the Supreme Court had questioned the       Centre for making the Aadhaar card mandatory for a number of       central schemes despite its repeated orders that the unique       identification programme cannot be made mandatory. The government       has nevertheless been expanding the scope of the Unique Identity       project over the past few months by introducing it for initiatives       such as the midday meal scheme of school lunches for children,       and, most recently, requiring Aadhaar to file income tax returns.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In March, an Aadhaar enrolment agency had been       de-registered for leaking the personal data of cricketer Mahendra       Singh Dhoni.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/scroll-may-2-2017-around-13-crore-aadhaar-numbers-easily-available-on-government-portals-says-report'&gt;https://cis-india.org/internet-governance/news/scroll-may-2-2017-around-13-crore-aadhaar-numbers-easily-available-on-government-portals-says-report&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-05-03T15:29:12Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/are-we-throwing-our-data-protection-regimes-under-the-bus">
    <title>Are we Throwing our Data Protection Regimes under the Bus? </title>
    <link>https://cis-india.org/internet-governance/blog/are-we-throwing-our-data-protection-regimes-under-the-bus</link>
    <description>
        &lt;b&gt;In this blog post, Rohan examines why the principle of consent offers an increasingly weak aegis for the protection of our data.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Consent is complicated. What we think of as reasonably obtained consent varies substantially with the circumstance. For example, in treating rape cases, the UK justice system has moved to recognise complications like alcohol and its effect on explicit consent&lt;a href="#_ftn1" name="_ftnref1"&gt;[1]&lt;/a&gt;. Yet in contracts, consent may be implied simply when one person accepts another’s work on a contract without objections&lt;a href="#_ftn2" name="_ftnref2"&gt;[2]&lt;/a&gt;. These situations highlight the differences between the various forms of informed consent and the implications for its validity.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Consent has emerged as a key principle in regulating the use of personal data, and different countries have adopted different regimes, ranging from comprehensive regimes like that of the EU to more sectoral approaches like that of the USA. However, in a modern epoch characterised by commonplace big data analytics, many commentators have challenged the efficacy and relevance of consent in data protection. I argue that we may even risk throwing our data protection regimes under the proverbial bus should we continue to focus on consent as a key pillar of data protection.&lt;/p&gt;
&lt;h3&gt;Consent as a tool in Data Protection Regimes&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Even a cursory review of current data protection laws around the world shows the extent of the law’s reliance on consent. In the EU, for example, Article 7 of the Data Protection Directive, passed in 1995, provides that data processing is only legitimate when “the data subject has unambiguously given his consent”&lt;a href="#_ftn3" name="_ftnref3"&gt;[3]&lt;/a&gt;. Article 8, which guards against processing of sensitive data, provides that such prohibitions may be lifted when “the data subject has given his explicit consent to the processing of those data”&lt;a href="#_ftn4" name="_ftnref4"&gt;[4]&lt;/a&gt;. Even as the EU attempts to strengthen data protection within the bloc with the proposed reforms to data protection&lt;a href="#_ftn5" name="_ftnref5"&gt;[5]&lt;/a&gt;, the focus on the consent of the data subject remains strong. There are proposals for an “unambiguous consent by the data subject”&lt;a href="#_ftn6" name="_ftnref6"&gt;[6]&lt;/a&gt; requirement to be put in place. Such consent will be mandatory before any data processing can occur&lt;a href="#_ftn7" name="_ftnref7"&gt;[7]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Despite adopting a very different overall approach to data protection and privacy, the USA treats consent as an equally integral part of its data protection frameworks. In his book Protectors of Privacy&lt;a href="#_ftn8" name="_ftnref8"&gt;[8]&lt;/a&gt;, Abraham Newman describes two main types of privacy legislation: comprehensive and limited. He argues that places like the EU have adopted comprehensive regimes, which primarily seek to protect individuals because of the “informational and power asymmetry” between individuals and organisations&lt;a href="#_ftn9" name="_ftnref9"&gt;[9]&lt;/a&gt;. On the other hand, he classifies the American approach as limited, focusing on more sectoral protections and principles of fair information practice instead of overarching legislation&lt;a href="#_ftn10" name="_ftnref10"&gt;[10]&lt;/a&gt;. These sectoral laws include the Fair Credit Reporting Act&lt;a href="#_ftn11" name="_ftnref11"&gt;[11]&lt;/a&gt; (which governs consumer credit reporting), the Privacy Act&lt;a href="#_ftn12" name="_ftnref12"&gt;[12]&lt;/a&gt; (which governs data collected by the Federal government) and the Electronic Communications Privacy Act&lt;a href="#_ftn13" name="_ftnref13"&gt;[13]&lt;/a&gt; (which deals with email communications), among others. However, the Federal Trade Commission describes itself as having only “limited authority over the collection and dissemination of personal data collected online”&lt;a href="#_ftn14" name="_ftnref14"&gt;[14]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This is because the general data processing that is commonplace in today’s era of big data is only regulated by the privacy protections that come from the Federal Trade Commission’s (FTC) Fair Information Practice Principles (FIPPs). Expectedly, consent is equally important under the FTC’s FIPPs. The FTC describes the principle of consent as “the second widely-accepted core principle of fair information practice”&lt;a href="#_ftn15" name="_ftnref15"&gt;[15]&lt;/a&gt; in addition to the principle of notice. Other guidelines on fair data processing published by organisations like the Organisation for Economic Cooperation and Development&lt;a href="#_ftn16" name="_ftnref16"&gt;[16]&lt;/a&gt; (OECD) or Canadian Standards Association&lt;a href="#_ftn17" name="_ftnref17"&gt;[17]&lt;/a&gt; (CSA) also include consent as a key mechanism in data protection.&lt;/p&gt;
&lt;h3&gt;The origins of consent in privacy and data protection&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Given the clearly extensive reliance on consent in data protection, it seems prudent to examine the origins of consent in privacy and data protection. Just why does consent have so much weight in data protection?&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One reason is that data protection, along with inextricably linked concerns about privacy, could be said to be rooted in protecting private property. It was argued that the “early parameters of what was to become the right to privacy were set in cases dealing with unconventional property claims”&lt;a href="#_ftn18" name="_ftnref18"&gt;[18]&lt;/a&gt;, such as unconsented publication of personal letters&lt;a href="#_ftn19" name="_ftnref19"&gt;[19]&lt;/a&gt; or photographs&lt;a href="#_ftn20" name="_ftnref20"&gt;[20]&lt;/a&gt;. It was the publication of Brandeis and Warren’s well-known article “The Right to Privacy”&lt;a href="#_ftn21" name="_ftnref21"&gt;[21]&lt;/a&gt;, that developed “the current philosophical dichotomy between privacy and property rights”&lt;a href="#_ftn22" name="_ftnref22"&gt;[22]&lt;/a&gt;, as they asserted that privacy protections ought to be recognised as a right in and of themselves and needed separate protection&lt;a href="#_ftn23" name="_ftnref23"&gt;[23]&lt;/a&gt;. Indeed, it was Warren and Brandeis who famously borrowed Justice Cooley's expression that privacy is the “right to be let alone”&lt;a href="#_ftn24" name="_ftnref24"&gt;[24]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On the other side of the debate are scholars like Epstein and Posner, who see privacy protections as part of protecting personal property under tort law&lt;a href="#_ftn25" name="_ftnref25"&gt;[25]&lt;/a&gt;. However, the central point is that most scholars seem to acknowledge the relationship between privacy and private property. Even Brandeis and Warren themselves argued that one general aim of privacy is “to protect the privacy of private life, and to whatever degree and in whatever connection a man's life has ceased to be private”&lt;a href="#_ftn26" name="_ftnref26"&gt;[26]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It is also important to locate the idea of consent within the domain of privacy and private property protections. Ostensibly, consent seems to have the effect of lessening the privacy protections afforded in a particular situation to a person, because by acquiescing to the situation, one could be seen as waiving their privacy concerns. Brandeis and Warren concur with this position as they acknowledge how “the right to privacy ceases upon the publication of the facts by the individual, or with his consent”&lt;a href="#_ftn27" name="_ftnref27"&gt;[27]&lt;/a&gt;. They assert that this is “but another application of the rule which has become familiar in the law of literary and artistic property”&lt;a href="#_ftn28" name="_ftnref28"&gt;[28]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Perhaps the most eloquent articulation of the importance of consent in privacy comes from Sir Edward Coke’s idea that “every man’s house is his castle”&lt;a href="#_ftn29" name="_ftnref29"&gt;[29]&lt;/a&gt;. Though the ‘Castle Doctrine’ has been used as a justification for protecting one’s property with the use of force&lt;a href="#_ftn30" name="_ftnref30"&gt;[30]&lt;/a&gt;, I think that implied in the idea of the ‘Castle Doctrine’ is that consent is necessary in order to preserve privacy. If not, why would anyone be justified in preventing trespass, other than to prevent unconsented entry to or use of their property? The doctrine of “Volenti non fit injuria”&lt;a href="#_ftn31" name="_ftnref31"&gt;[31]&lt;/a&gt;, or ‘to one who consents no injury is done’, is thus the very embodiment of the role of consent in protecting private property. And as conceptions of private property develop to recognise that the data one gives out is part of his private property, for example in &lt;i&gt;US v. Jones&lt;/i&gt;, which led scholars to assert that “people should be able to maintain reasonable expectations of privacy in some information voluntarily disclosed to third parties”&lt;a href="#_ftn32" name="_ftnref32"&gt;[32]&lt;/a&gt;, so does consent act as an important aspect of privacy protection.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Yet, linking privacy with private property is not universally accepted as the conception of privacy. For instance, Alan Westin, in his book Privacy and Freedom&lt;a href="#_ftn33" name="_ftnref33"&gt;[33]&lt;/a&gt;, describes privacy as “the right to control information about oneself”&lt;a href="#_ftn34" name="_ftnref34"&gt;[34]&lt;/a&gt;. Another scholar, Ruth Gavison, contends instead that “our interest in privacy is related to our concern over our accessibility to others: the extent to which we are known to others, the extent to which others have physical access to us, and the extent to which we are the subject of others' attention”&lt;a href="#_ftn35" name="_ftnref35"&gt;[35]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While these alternative notions about privacy’s foundational principles may differ from those related to linking privacy with private property, locating consent within these formulations of privacy is possible. Regarding Westin’s argument, I think that implicit in the right to control one’s information are ideas about individual autonomy, which is exercised through giving or withholding one’s consent. Similarly, Gavison herself states that privacy functions to advance “liberty, autonomy and selfhood”&lt;a href="#_ftn36" name="_ftnref36"&gt;[36]&lt;/a&gt;. Consent plays a key role in upholding this liberty, autonomy and selfhood that privacy affords us. Clearly therefore, it is far from unfounded to claim that consent is an integral part of protecting privacy.&lt;/p&gt;
&lt;h3&gt;Consent, Big Data and Data protection&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Given the solid underpinnings of the principle of consent in privacy protection, it was hardly a coincidence that consent became an integral part of data protection. However, with the rise of big data practices, one quickly finds that consent ceases to work effectively as a tool for protecting privacy. In a big data context, Solove argues that privacy regulation rooted in consent is ineffective, because garnering consent amidst ubiquitous data collection for all the online services one uses as part of daily life is unmanageable&lt;a href="#_ftn37" name="_ftnref37"&gt;[37]&lt;/a&gt;. Additionally, the secondary uses of one’s data are difficult to assess at the point of collection, and consequently meaningful consent for secondary use is difficult to obtain&lt;a href="#_ftn38" name="_ftnref38"&gt;[38]&lt;/a&gt;. This section examines these two primary consequences of prioritising consent amidst big data practices.&lt;/p&gt;
&lt;h3&gt;Consent places unrealistic and unfair expectations on the Individual&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;As noted by Tene and Polonetsky, the first concern is that current privacy frameworks which emphasize informed consent “impose significant, sometimes unrealistic, obligations on both organizations and individuals”&lt;a href="#_ftn39" name="_ftnref39"&gt;[39]&lt;/a&gt;. The premise behind this argument stems from the way that consent is often garnered by organisations, especially regarding use of their services. An examination of various terms of use policies from banks, online video streaming websites, social networking sites, online fashion or more general online shopping websites reveals a deluge of information that the user has to comprehend. Moreover, there are too many “entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity”&lt;a href="#_ftn40" name="_ftnref40"&gt;[40]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;As Cate and Mayer-Schönberger note in the Microsoft Global Privacy Summit Summary Report, “almost everywhere that individuals venture, especially online, they are presented with long and complex privacy notices routinely written by lawyers for lawyers, and then requested to either “consent” or abandon the use of the desired service”&lt;a href="#_ftn41" name="_ftnref41"&gt;[41]&lt;/a&gt;. In some cases, organisations try to simplify these policies for the users of their service, but such initiatives make up the minority of terms of use policies. Tene and Polonetsky assert that “it is common knowledge among practitioners in the field that privacy policies serve more as liability disclaimers for businesses than as assurances of privacy for consumers”&lt;a href="#_ftn42" name="_ftnref42"&gt;[42]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, it is equally important to consider the principle of consent from the perspective of companies. At a time when many businesses have to comply with numerous regulations and processes in the name of ‘compliance’&lt;a href="#_ftn43" name="_ftnref43"&gt;[43]&lt;/a&gt;, the obligations for obtaining consent could burden some businesses. Firms have to gather consent while enhancing user or customer experiences, a tricky balance to strike. For example, requiring consent at every stage may make the user experience much worse: imagine having to give consent for your profile to be uploaded every time you achieve a high score in a video game. At the same time, “organizations are expected to explain their data processing activities on increasingly small screens and obtain consent from often-uninterested individuals”&lt;a href="#_ftn44" name="_ftnref44"&gt;[44]&lt;/a&gt;. Given these factors, it is somewhat understandable that companies garner consent up front for all possible (secondary) uses, as it is not feasible to keep seeking consent afresh.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Nonetheless, this results in situations where “data processors can perhaps too easily point to the formality of notice and consent and thereby abrogate much of their responsibility”&lt;a href="#_ftn45" name="_ftnref45"&gt;[45]&lt;/a&gt;. The totality of the situation shows the odds stacked against the individual. It could even be argued that this is one manifestation of the informational and power asymmetry that exists between individuals and organisations&lt;a href="#_ftn46" name="_ftnref46"&gt;[46]&lt;/a&gt;, because users may unwittingly agree to unfair, unclear or even unknown terms and conditions and data practices. Not only are individuals greatly misinformed about the data collected about them, but the vast majority of people do not even read these Terms and Conditions or End User License Agreements&lt;a href="#_ftn47" name="_ftnref47"&gt;[47]&lt;/a&gt;. Solove also argues that “people often lack enough expertise to adequately assess the consequences of agreeing to certain present uses or disclosures of their data”&lt;a href="#_ftn48" name="_ftnref48"&gt;[48]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While the organisational practice of providing extensive and complicated terms of use policies is not illegal, the fact that, by one estimation, it would take 76 working days to review the privacy policies you have agreed to online&lt;a href="#_ftn49" name="_ftnref49"&gt;[49]&lt;/a&gt;, or, by another, that in the USA the opportunity cost society incurs in reading privacy policies is $781 billion&lt;a href="#_ftn50" name="_ftnref50"&gt;[50]&lt;/a&gt;, should not go unnoticed. I do think it is unfair for the law to put users into such situations, where they are “forced to make overly complex decisions based on limited information”&lt;a href="#_ftn51" name="_ftnref51"&gt;[51]&lt;/a&gt;. There have been laudable attempts by some government organisations, like Canada’s Office of the Privacy Commissioner and the USA’s Federal Trade Commission, to provide guidance to firms to make their privacy policies more accessible&lt;a href="#_ftn52" name="_ftnref52"&gt;[52]&lt;/a&gt;. However, these are hard to enforce. Therefore, it can be assumed that when users have neither the expertise nor the rigour to review privacy policies effectively, the consent they provide is naturally far from informed.&lt;/p&gt;
&lt;h3&gt;Secondary use, Aggregation and Superficial Consent&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;What amplifies this informational asymmetry is the potential for the aggregation of individual’s data and subsequent secondary use of that data collected. “Even if people made rational decisions about sharing individual pieces of data in isolation, they greatly struggle to factor in how their data might be aggregated in the future”&lt;a href="#_ftn53" name="_ftnref53"&gt;[53]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This has to do with the prevalence of big data analytics that characterizes our modern epoch, and has major implications for the nature and meaningfulness of the consent users provide. By definition, “big data analysis seeks surprising correlations”&lt;a href="#_ftn54" name="_ftnref54"&gt;[54]&lt;/a&gt; and some of its most insightful results are counterintuitive and nearly impossible to conceive of at the point of primary data collection. One noteworthy example comes from the USA, with Walmart’s predictive analytics. By studying the purchasing patterns of its loyalty card holders&lt;a href="#_ftn55" name="_ftnref55"&gt;[55]&lt;/a&gt;, the company ascertained that prior to a hurricane the most popular items that people tend to buy are actually Pop-Tarts (a pre-baked toaster pastry) and beer&lt;a href="#_ftn56" name="_ftnref56"&gt;[56]&lt;/a&gt;. These correlations are highly counterintuitive and far from what people expect to be necessities before a hurricane. These insights led to Walmart stores being stocked with the most relevant products at the time of need. This is one example of how data might be repurposed and aggregated for a novel purpose, but the question about the nature of the consent obtained by Walmart for the collection and analysis of the shopping habits of its loyalty card holders nonetheless stands.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One reason secondary uses make consent less meaningful has been articulated by De Zwart et al, who observe that “the idea of consent becomes unworkable in an environment where it is not known, even by the people collecting and selling data, what will happen to the data”&lt;a href="#_ftn57" name="_ftnref57"&gt;[57]&lt;/a&gt;. Taken together with Solove’s aggregation effect, two points become apparent:&lt;/p&gt;
&lt;ol&gt;
&lt;li style="text-align: justify; "&gt;Data we consent to be collected about us may be aggregated with other data we may have revealed in the past. While separately they may be innocuous, there is a risk of future aggregation to create new information which one may find overly intrusive and not consent to. However, current data protection regimes make it hard for one to provide such consent, because there is no way for the user to know how his past and present data may be aggregated in the future.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;Data we consent to be collected for one specific purpose may be used in a myriad of other ways. The user has virtually no way to know how their data might be repurposed, because oftentimes neither do the collectors of that data&lt;a href="#_ftn58" name="_ftnref58"&gt;[58]&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p style="text-align: justify; "&gt;Therefore, regulators’ reliance on principles of purpose limitation and the mechanism of consent for robust data protection seems suboptimal at the very least, as big data practices of aggregation, repurposing and secondary uses become commonplace.&lt;/p&gt;
&lt;h3&gt;Other problems with the mechanism of consent in the context of Big Data&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;On one end of the spectrum are situations where organisations garner consent for future secondary uses at the time of data collection. As discussed earlier, this is currently the common practice for organisations and the likelihood of users providing informed consent is low.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, it is equally valid to consider situations on the other end of the spectrum, where obtaining user consent for secondary use becomes too expensive and cumbersome&lt;a href="#_ftn59" name="_ftnref59"&gt;[59]&lt;/a&gt;. As a result, potentially socially valuable secondary use of data for research and innovation, or simply “the practice of informed and reflective citizenship”&lt;a href="#_ftn60" name="_ftnref60"&gt;[60]&lt;/a&gt;, may not take place. While potential social research may be hindered by the consent requirement, the reality that one cannot give meaningful consent to unknown secondary uses of data is more pressing. Essentially, not knowing what you are consenting to scarcely provides the individual with any semblance of strong privacy protection, and so the consent that individuals provide is superficial at best.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Many scholars also point to the binary nature of consent as it stands today&lt;a href="#_ftn61" name="_ftnref61"&gt;[61]&lt;/a&gt;. Solove describes consent in data protection as nuanced&lt;a href="#_ftn62" name="_ftnref62"&gt;[62]&lt;/a&gt; while Cate and Mayer-Schönberger go further to assert that “binary choice is not what the privacy architects envisioned four decades ago when they imagined empowered individuals making informed decisions about the processing of their personal data”. This dichotomous nature of consent further reduces its usefulness in data protection regimes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Whether data collection is opted into or opted out of also has a bearing on the nature of the consent obtained. Many argue that regulations with options to opt out are not effective, as “opt-out consent might be the product of mere inertia or lack of awareness of the option to opt out”&lt;a href="#_ftn63" name="_ftnref63"&gt;[63]&lt;/a&gt;. This is in line with initiatives around the world to make gathering consent more explicit by offering options to opt in instead of opt out. Noted articulations of the impetus to embrace opt-in regimes include that of former FTC chairman Jon Leibowitz as early as 2007&lt;a href="#_ftn64" name="_ftnref64"&gt;[64]&lt;/a&gt;, and opt-in consent is being actively considered by the EU in the reform of its data protection laws&lt;a href="#_ftn65" name="_ftnref65"&gt;[65]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, as Solove rightly points out, opt-in consent is problematic as well&lt;a href="#_ftn66" name="_ftnref66"&gt;[66]&lt;/a&gt;. There are a few reasons for this: first, many data collectors have the “sophistication and motivation to find ways to generate high opt-in rates”&lt;a href="#_ftn67" name="_ftnref67"&gt;[67]&lt;/a&gt; by “conditioning products, services, or access on opting in”&lt;a href="#_ftn68" name="_ftnref68"&gt;[68]&lt;/a&gt;. In essence, they leave individuals no choice but to opt into data collection because using their particular product or service is dependent, or ‘conditional’, on explicit consent. A pertinent example of this is the end-user license agreement for Apple’s iTunes Store&lt;a href="#_ftn69" name="_ftnref69"&gt;[69]&lt;/a&gt;. Solove rightly notes that “if people want to download apps from the store, they have no choice but to agree. This requirement is akin to an opt-in system — affirmative consent is being sought. But hardly any bargaining or choosing occurs in this process”&lt;a href="#_ftn70" name="_ftnref70"&gt;[70]&lt;/a&gt;. Second, as stated earlier, obtaining consent runs the risk of impeding potential innovation or research because it is too cumbersome or expensive to obtain&lt;a href="#_ftn71" name="_ftnref71"&gt;[71]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Third, as Tene and Polonetsky argue, “collective action problems threaten to generate a suboptimal equilibrium where individuals fail to opt into societally beneficial data processing in the hope of free-riding on others’ good will”&lt;a href="#_ftn72" name="_ftnref72"&gt;[72]&lt;/a&gt;. A useful example to illustrate this comes from another context where obtaining consent is the difference between life and death: organ donation. The gulf in consenting donors between countries with an opt-in regime for organ donation and countries with an opt-out regime is staggering. Even countries that are culturally similar, such as Austria and Germany, exhibit vast differences in donation rates – Austria at 99% compared to just 12% in Germany&lt;a href="#_ftn73" name="_ftnref73"&gt;[73]&lt;/a&gt;. This suggests that in terms of obtaining consent (especially for socially valuable actions), opt-in methods may be limiting: people tend to stay with whatever default is presumed about their choices, even when the cost of opting out is low&lt;a href="#_ftn74" name="_ftnref74"&gt;[74]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;What the above section demonstrates is how consent may be somewhat limited as a tool for data protection regimes, especially in a big data context. That said, consent is not in itself a useless or outdated concept. The points raised above articulate the problems that relying on consent extensively poses in a big data context. Consent should still remain a part of data protection regimes. However, there are both better ways to obtain consent (for organisations that collect data) as well as other areas to focus regulatory attention on aside from the time of data collection.&lt;/p&gt;
&lt;h3&gt;What can organisations do to obtain more meaningful consent?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Organisations that collect data could alter the way they obtain user consent. Most people can attest to having checked a box lying surreptitiously next to the words ‘I agree’, thereby agreeing to the Terms and Conditions or End-user License Agreement for a particular service or product. This is in line with the need for both parties to assent to the terms of a contract as part of making a contract valid&lt;a href="#_ftn75" name="_ftnref75"&gt;[75]&lt;/a&gt;. Some of the more common types of online agreements that users enter into are Clickwrap and Browsewrap agreements. A Clickwrap agreement is “formed entirely in an online environment such as the Internet, which sets forth the rights and obligations between parties”&lt;a href="#_ftn76" name="_ftnref76"&gt;[76]&lt;/a&gt;. They “require a user to click "I agree" or “I accept” before the software can be downloaded or installed”&lt;a href="#_ftn77" name="_ftnref77"&gt;[77]&lt;/a&gt;. On the other hand, Browsewrap agreements “try to characterize your simple use of their website as your ‘agreement’ to a set of terms and conditions buried somewhere on the site”&lt;a href="#_ftn78" name="_ftnref78"&gt;[78]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Because Browsewrap agreements do not “require a user to engage in any affirmative conduct”&lt;a href="#_ftn79" name="_ftnref79"&gt;[79]&lt;/a&gt;, the kind of consent that these types of agreements obtain is highly superficial. In fact, many argue that such agreements are slightly unscrupulous because users are seldom aware that such agreements exist&lt;a href="#_ftn80" name="_ftnref80"&gt;[80]&lt;/a&gt;, often hidden in small print&lt;a href="#_ftn81" name="_ftnref81"&gt;[81]&lt;/a&gt; or below the download button&lt;a href="#_ftn82" name="_ftnref82"&gt;[82]&lt;/a&gt;, for example. And courts have begun to consider unfair those terms and practices which “hold website users accountable for terms and conditions of which a reasonable Internet user would not be aware just by using the site”&lt;a href="#_ftn83" name="_ftnref83"&gt;[83]&lt;/a&gt;. For example, in &lt;i&gt;In re Zappos.com Inc., Customer Data Security Breach Litigation&lt;/i&gt;, the court said of the site’s Terms of Use (a browsewrap agreement):&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“The Terms of Use is inconspicuous, buried in the middle to bottom of every Zappos.com webpage among many other links, and the website never directs a user to the Terms of Use. No reasonable user would have reason to click on the Terms of Use”&lt;a href="#_ftn84" name="_ftnref84"&gt;[84]&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Clearly, courts recognise the potential for consent or assent to be obtained in a hardly transparent or hands on manner. Organisations that collect data should be aware of this and consider other options for obtaining consent.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A few commentators have suggested that organisations switch to using Clickwrap or clickthrough agreements to obtain consent. Undergirding this argument is the fact that courts have on numerous occasions, upheld the validity of a Clickwrap agreement. Such cases include &lt;i&gt;Groff v. America Online, Inc&lt;a href="#_ftn85" name="_ftnref85"&gt;&lt;b&gt;[85]&lt;/b&gt;&lt;/a&gt;&lt;/i&gt; and &lt;i&gt;Hotmail Corporation v. Van Money Pie, Inc&lt;a href="#_ftn86" name="_ftnref86"&gt;&lt;b&gt;[86]&lt;/b&gt;&lt;/a&gt;&lt;/i&gt;. These cases built upon the precedent-setting case of &lt;i&gt;Pro CD v. Zeidenberg&lt;/i&gt;, in which the court ruled that “Shrinkwrap licenses are enforceable unless their terms are objectionable on grounds applicable to contracts in general”&lt;a href="#_ftn87" name="_ftnref87"&gt;[87]&lt;/a&gt;. Shrinkwrap licenses, which refer to end user license agreements printed on the shrinkwrap of a software product which a user will definitely notice and have the opportunity to read before opening and using the product, and the rules that govern them, have seen application to clickthrough agreements. As Bayley rightly noted, the validity of clickthrough agreements is dependent on “reasonable notice and opportunity to review—whether the placement of the terms and click-button afforded the user a reasonable opportunity to find and read the terms without much effort”&lt;a href="#_ftn88" name="_ftnref88"&gt;[88]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;From the perspective of companies and other organisations which attempt to garner consent from users to collect and process their data, utilizing Clickwrap agreements might be one useful solution to consider in obtaining more meaningful and informed consent. In fact Bayley contends that clear Clickwrap agreements are “the “best practice” mechanism for creating a contractual relationship between an online service and a user”&lt;a href="#_ftn89" name="_ftnref89"&gt;[89]&lt;/a&gt;. He suggests the following mechanism for acquiring clear and informed consent via contractual agreement&lt;a href="#_ftn90" name="_ftnref90"&gt;[90]&lt;/a&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li style="text-align: justify; "&gt;Conspicuously present the TOS to the user prior to any payment (or other commitment by the user) or installation of software (or other changes to a user’s machine or browser, like cookies, plug-ins, etc.)&lt;/li&gt;
&lt;li&gt;Allow the user to easily read and navigate all of the terms (i.e. be in a normal, readable typeface with no scroll box)&lt;/li&gt;
&lt;li&gt;Provide an opportunity to print, and/or save a copy of, the terms&lt;/li&gt;
&lt;li&gt;Offer the user the option to decline as prominently and by the same method as the option to agree&lt;/li&gt;
&lt;li&gt;Ensure the TOS is easy to locate online after the user agrees.&lt;/li&gt;
&lt;/ol&gt;
&lt;p style="text-align: justify; "&gt;These principles make a lot of sense for organisations, as it requires relatively minor procedural changes instead of more transformational efforts to alter the way the validate their data processing processes entirely.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Herzfield adds two further suggestions to this list. First, organisations should not allow any use of their product or service until “express and active manifestation of assent”&lt;a href="#_ftn91" name="_ftnref91"&gt;[91]&lt;/a&gt;. Also, they should institute processes where users re-iterate their consent and assent to the terms of use&lt;a href="#_ftn92" name="_ftnref92"&gt;[92]&lt;/a&gt;. He goes further to propose a baseline that organisations should follow: “companies should always provide at least inquiry notice of all terms, and require counterparties to manifest assent, through action or inaction, in a manner that reasonable people would clearly understand to be assent”&lt;a href="#_ftn93" name="_ftnref93"&gt;[93]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While obtaining informed and meaningful consent is neither fool proof nor a process which has widely accepted clear steps, what is clear is that current efforts by organisations may be insufficient. As Cate and Mayer-Schönberger note, “data processors can perhaps too easily point to the formality of notice and consent and thereby abrogate much of their responsibility”&lt;a href="#_ftn94" name="_ftnref94"&gt;[94]&lt;/a&gt;. One thing they can do to both ensure more meaningful and informed consent (from the perspective of the users) and preventing potential legal action for unscrupulous or unfair terms is to change the way they obtain consent from opt out to opt in.&lt;/p&gt;
&lt;h3&gt;Conclusion: how should regulation change?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;In conclusion, the current emphasis and extensive use of consent in data protection seems to be limited in effectively protecting against illegitimate processing of data in a big data context. More people are starting to use online services extensively. This is coupled by the fact that organisations are realizing the value of collecting and analysing user data to carry out data-driven analytics for insights that can improve the efficacy of the product. Clearly, data protection has never been more crucial.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However not only does emphasising consent seem less relevant, because the consent organisations obtain is seldom informed, but it may even jeopardise the intentions of data protection. Commentators are quick to point out how nimble firms are at acquiring consent in newer ways that may comply with laws but still allow them to maintain their advantageous position of asymmetric power. Kuner, Cate, Millard and Svantesson, all eminent scholars in the field of Big data, asked the prescient question: “Is there a proper role for individual consent?”&lt;a href="#_ftn95" name="_ftnref95"&gt;[95]&lt;/a&gt;They believe consent still has a role, but that finding this role in the Big data context is challenging&lt;a href="#_ftn96" name="_ftnref96"&gt;[96]&lt;/a&gt;. However, there is surprising consensus on the approach that should be taken as data protection regimes shift away from consent.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In fact, the alternative is staring at us in the face: data protection regimes have to look elsewhere, to other points along the data analysis process for aspects to regulate and ensure legitimate and fair processing of data. One compelling idea which had broad-based support during the aforementioned Microsoft Privacy Summit was that “new approaches must shift responsibility away from data subjects toward data users and toward a focus on accountability for responsible data stewardship”&lt;a href="#_ftn97" name="_ftnref97"&gt;[97]&lt;/a&gt;, ie creating regulations to guide data processing instead of the data collection. De Zwart et al. suggest that regulation must instead “focus on the processes involved in establishing algorithms and the use of the resulting conclusions”&lt;a href="#_ftn98" name="_ftnref98"&gt;[98]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This might involve regulations relating to requiring data collectors to publish the queries they run on the data. This would be a solution that balances maintaining the ‘trade secret’ of the firm, who has creatively designed an algorithm, with ensuring fairness and legitimacy in data processing. One manifestation of this approach is in conceptualising procedural data due process which “would regulate the fairness of Big Data’s analytical processes with regard to how they use personal data (or metadata derived from or associated with personal data) in any adjudicative process, including processes whereby Big Data is being used to determine attributes or categories for an individual”&lt;a href="#_ftn99" name="_ftnref99"&gt;[99]&lt;/a&gt;. While there is debate regarding the usefulness of a data due process, the idea of data due process is just part of the consortium of ideas surrounding alternatives to consent in data protection. The main point is that “greater transparency should be required if there are fewer opportunities for consent or if personal data can be lawfully collected without consent”&lt;a href="#_ftn100" name="_ftnref100"&gt;[100]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It is also worth considering exactly what a single use of group or individual’s data is, and what types of uses or processes require a “greater form of authorization”&lt;a href="#_ftn101" name="_ftnref101"&gt;[101]&lt;/a&gt;. Certain data processes could require special affirmative consent to be procured, which is not applicable for other less intimate matters. Canada’s Office of the Privacy Commissioner released a privacy toolkit for organisations, in which they provide some exceptions to the consent principle, one of which is if data collection “is clearly in the individual’s interests and consent is not available in a timely way”&lt;a href="#_ftn102" name="_ftnref102"&gt;[102]&lt;/a&gt;. Some therefore suggest that “if notice and consent are reserved for more appropriate uses, individuals might pay more attention when this mechanism is used”&lt;a href="#_ftn103" name="_ftnref103"&gt;[103]&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another option for regulators is to consider the development and implementation of a sticky privacy policies regime. This refers to “machine-readable policies [that] can stick to data to define allowed usage and obligations as it travels across multiple parties, enabling users to improve control over their personal information”&lt;a href="#_ftn104" name="_ftnref104"&gt;[104]&lt;/a&gt;. Sticky privacy policies seem to alleviate the risk of repurposed, unanticipated uses of data because users who consent to giving out their data will be consenting to how it is used thereafter. However, the counter to sticky policies is that it places even greater obligations on users to decide how they would like their data used, not just at one point but for the long term. To expect organisations to state their purposes for future use of individuals data or that individuals are to give informed consent to such uses seems farfetched from both perspectives.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Still another solution draws from the noted scholar Helen Nissenbaum’s work on privacy. She argues that “the benchmark of privacy is contextual integrity”&lt;a href="#_ftn105" name="_ftnref105"&gt;[105]&lt;/a&gt;. ”Contextual integrity ties adequate protection for privacy to norms of specific contexts, demanding that information gathering and dissemination be appropriate to that context and obey the governing norms of distribution within it”&lt;a href="#_ftn106" name="_ftnref106"&gt;[106]&lt;/a&gt;. According to this line of thinking, legislators should instead focus their attention on what constitutes appropriateness in certain contexts, although this could be a challenging task as contexts merge and understandings of appropriateness change according to the circumstances of a context. .&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While there is little consensus regarding the numerous ways to focus regulatory attention on data processing and the uses of data collected, there is more support for a shift away from consent, as exemplified by the Microsoft privacy Summit:&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“There was broad general agreement that privacy frameworks that rely heavily on individual notice and consent are neither sustainable in the face of dramatic increases in the volume and velocity of information flows nor desirable because of the burden they place on individuals to understand the issues, make choices, and then engage in oversight and enforcement.”&lt;a href="#_ftn107" name="_ftnref107"&gt;[107]&lt;/a&gt; I think Cate and Mayer- Schönberger make for the most valid conclusion to this article, as well as to summarise the debate I have presented. They say that “in short, ensuring individual control over personal data is not only an increasingly unattainable objective of data protection, but in many settings it is an undesirable one as well.”&lt;a href="#_ftn108" name="_ftnref108"&gt;[108]&lt;/a&gt; We might very well be throwing the entire data protection regimes under the bus.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref1" name="_ftn1"&gt;[1]&lt;/a&gt; Gordon Rayner and Bill Gardner, “Men Must Prove a Woman Said ‘Yes’ under Tough New Rape Rules - Telegraph,” &lt;i&gt;The Telegraph&lt;/i&gt;, January 28, 2015, sec. Law and Order, http://www.telegraph.co.uk/news/uknews/law-and-order/11375667/Men-must-prove-a-woman-said-Yes-under-tough-new-rape-rules.html.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref2" name="_ftn2"&gt;[2]&lt;/a&gt; Legal Information Institute, “Implied Consent,” accessed August 25, 2015, https://www.law.cornell.edu/wex/implied_consent.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref3" name="_ftn3"&gt;[3]&lt;/a&gt; European Parliament, Council of the European Union, &lt;i&gt;Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data&lt;/i&gt;, 1995, http://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:31995L0046.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref4" name="_ftn4"&gt;[4]&lt;/a&gt; See supra note 3.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref5" name="_ftn5"&gt;[5]&lt;/a&gt; European Commission, “Stronger Data Protection Rules for Europe,” &lt;i&gt;European Commission Press Release Database&lt;/i&gt;, June 15, 2015, http://europa.eu/rapid/press-release_MEMO-15-5170_en.htm.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref6" name="_ftn6"&gt;[6]&lt;/a&gt; Council of the European Union, “Data Protection: Council Agrees on a General Approach,” June 15, 2015, http://www.consilium.europa.eu/en/press/press-releases/2015/06/15-jha-data-protection/.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref7" name="_ftn7"&gt;[7]&lt;/a&gt; See supra note 6.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref8" name="_ftn8"&gt;[8]&lt;/a&gt; Abraham L. Newman, &lt;i&gt;Protectors of Privacy: Regulating Personal Data in the Global Economy&lt;/i&gt; (Ithaca, NY: Cornell University Press, 2008).&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref9" name="_ftn9"&gt;[9]&lt;/a&gt; See supra note 8, at 24.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref10" name="_ftn10"&gt;[10]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref11" name="_ftn11"&gt;[11]&lt;/a&gt; 15 U.S.C. §1681.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref12" name="_ftn12"&gt;[12]&lt;/a&gt; 5 U.S.C. § 552a.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref13" name="_ftn13"&gt;[13]&lt;/a&gt; 18 U.S.C. § 2510-22.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref14" name="_ftn14"&gt;[14]&lt;/a&gt; Federal Trade Commission, “Privacy Online: A Report to Congress,” June 1998, https://www.ftc.gov/sites/default/files/documents/reports/privacy-online-report-congress/priv-23a.pdf: 40.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref15" name="_ftn15"&gt;[15]&lt;/a&gt; See supra note 14, at 8.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref16" name="_ftn16"&gt;[16]&lt;/a&gt; Organisation for Economic Cooperation and Development, “2013 OECD Privacy Guidelines,” 2013, http://www.oecd.org/internet/ieconomy/privacy-guidelines.htm.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref17" name="_ftn17"&gt;[17]&lt;/a&gt; Canadian Standards Association, “Canadian Standards Association Model Code,” March 1996, https://www.cippguide.org/2010/06/29/csa-model-code/.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref18" name="_ftn18"&gt;[18]&lt;/a&gt; Mary Chlopecki, “The Property Rights Origins of Privacy Rights | Foundation for Economic Education,” August 1, 1992, http://fee.org/freeman/the-property-rights-origins-of-privacy-rights.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref19" name="_ftn19"&gt;[19]&lt;/a&gt; See &lt;i&gt;Pope v&lt;/i&gt;.&lt;i&gt; Curl &lt;/i&gt;(1741), available &lt;a href="http://www.commonlii.org/uk/cases/EngR/1741/500.pdf"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref20" name="_ftn20"&gt;[20]&lt;/a&gt; See &lt;i&gt;Prince Albert v. Strange&lt;/i&gt; (1849), available &lt;a href="http://www.bailii.org/ew/cases/EWHC/Ch/1849/J20.html"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref21" name="_ftn21"&gt;[21]&lt;/a&gt; Samuel D. Warren and Louis D. Brandeis, “The Right to Privacy,” &lt;i&gt;Harvard Law Review&lt;/i&gt; 4, no. 5 (December 15, 1890): 193–220, doi:10.2307/1321160.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref22" name="_ftn22"&gt;[22]&lt;/a&gt; See supra note 18.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref23" name="_ftn23"&gt;[23]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref24" name="_ftn24"&gt;[24]&lt;/a&gt; See supra note 21.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref25" name="_ftn25"&gt;[25]&lt;/a&gt; See for example, Richard Epstein, “Privacy, Property Rights, and Misrepresentations,” &lt;i&gt;Georgia Law Review&lt;/i&gt;, January 1, 1978, 455. And Richard Posner, “The Right of Privacy,” &lt;i&gt;Sibley Lecture Series&lt;/i&gt;, April 1, 1978, http://digitalcommons.law.uga.edu/lectures_pre_arch_lectures_sibley/22.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref26" name="_ftn26"&gt;[26]&lt;/a&gt; See supra note 21, at 215.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref27" name="_ftn27"&gt;[27]&lt;/a&gt; &lt;a href="http://www.english.illinois.edu/-people-/faculty/debaron/582/582%20readings/right%20to%20privacy.pdf"&gt;See supra note 21, at 218&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref28" name="_ftn28"&gt;[28]&lt;/a&gt; &lt;a href="http://www.english.illinois.edu/-people-/faculty/debaron/582/582%20readings/right%20to%20privacy.pdf"&gt;Ibid.&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref29" name="_ftn29"&gt;[29]&lt;/a&gt; Adrienne W. Fawcett, “Q: Who Said: ‘A Man’s Home Is His Castle’?,” &lt;i&gt;Chicago Tribune&lt;/i&gt;, September 14, 1997, http://articles.chicagotribune.com/1997-09-14/news/9709140446_1_castle-home-sir-edward-coke.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref30" name="_ftn30"&gt;[30]&lt;/a&gt; Brendan Purves, “Castle Doctrine from State to State,” &lt;i&gt;South Source&lt;/i&gt;, July 15, 2011, http://source.southuniversity.edu/castle-doctrine-from-state-to-state-46514.aspx.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref31" name="_ftn31"&gt;[31]&lt;/a&gt; “Volenti Non Fit Injuria,” &lt;i&gt;E-Lawresources&lt;/i&gt;, accessed August 25, 2015, http://e-lawresources.co.uk/Volenti-non-fit-injuria.php.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref32" name="_ftn32"&gt;[32]&lt;/a&gt; Bryce Clayton Newell, “Local Law Enforcement Jumps on the Big Data Bandwagon: Automated License Plate Recognition Systems, Information Privacy, and Access to Government Information,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, October 16, 2013), http://papers.ssrn.com/abstract=2341182.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref33" name="_ftn33"&gt;[33]&lt;/a&gt; Alan Westin, &lt;i&gt;Privacy and Freedom&lt;/i&gt; (Ig Publishing, 2015).&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref34" name="_ftn34"&gt;[34]&lt;/a&gt; Helen Nissenbaum, “Privacy as Contextual Integrity,” &lt;i&gt;Washington Law Review&lt;/i&gt; 79 (2004): 119.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref35" name="_ftn35"&gt;[35]&lt;/a&gt; Ruth Gavison, “Privacy and the Limits of Law,” &lt;i&gt;The Yale Law Journal&lt;/i&gt; 89, no. 3 (January 1, 1980): 421–71, doi:10.2307/795891: 423.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref36" name="_ftn36"&gt;[36]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref37" name="_ftn37"&gt;[37]&lt;/a&gt; Daniel J. Solove, “Privacy Self-Management and the Consent Dilemma,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, November 4, 2012), &lt;a href="http://papers.ssrn.com/abstract=2171018"&gt;http://papers.ssrn.com/abstract=2171018&lt;/a&gt;: 1888.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref38" name="_ftn38"&gt;[38]&lt;/a&gt; Ibid, at 1889.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref39" name="_ftn39"&gt;[39]&lt;/a&gt; Omer Tene and Jules Polonetsky, “Big Data for All: Privacy and User Control in the Age of Analytics,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, September 20, 2012), &lt;a href="http://papers.ssrn.com/abstract=2149364"&gt;http://papers.ssrn.com/abstract=2149364&lt;/a&gt;: 261.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref40" name="_ftn40"&gt;[40]&lt;/a&gt; See supra note 37, at 1881.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref41" name="_ftn41"&gt;[41]&lt;/a&gt; Fred H. Cate and Viktor Mayer-Schönberger, “Notice and Consent in a World of Big Data - Microsoft Global Privacy Summit Summary Report and Outcomes,” Microsoft Global Privacy Summit, November 9, 2012, &lt;a href="http://www.microsoft.com/en-us/download/details.aspx?id=35596"&gt;http://www.microsoft.com/en-us/download/details.aspx?id=35596&lt;/a&gt;: 3.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref42" name="_ftn42"&gt;&lt;sup&gt;&lt;sup&gt;[42]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; See supra note 39.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref43" name="_ftn43"&gt;[43]&lt;/a&gt; See for example, US Securities and Exchange Commission, “Corporation Finance Small Business Compliance Guides,” accessed August 26, 2015, &lt;a href="https://www.sec.gov/info/smallbus/secg.shtml"&gt;https://www.sec.gov/info/smallbus/secg.shtml&lt;/a&gt; and Australian Securities &amp;amp; Investments Commission, “Compliance for Small Business,” accessed August 26, 2015, http://asic.gov.au/for-business/your-business/small-business/compliance-for-small-business/.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref44" name="_ftn44"&gt;[44]&lt;/a&gt; See supra note 39.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref45" name="_ftn45"&gt;[45]&lt;/a&gt; See supra note 41.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref46" name="_ftn46"&gt;[46]&lt;/a&gt; See supra note 8, at 24.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref47" name="_ftn47"&gt;[47]&lt;/a&gt; See for example, James Daley, “Don’t Waste Time Reading Terms and Conditions,” &lt;i&gt;The Telegraph&lt;/i&gt;, September 3, 2014, and Robert Glancy, “Will You Read This Article about Terms and Conditions? You Really Should Do,” &lt;i&gt;The Guardian&lt;/i&gt;, April 24, 2014, sec. Comment is free, http://www.theguardian.com/commentisfree/2014/apr/24/terms-and-conditions-online-small-print-information.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref48" name="_ftn48"&gt;[48]&lt;/a&gt; See supra note 37, at 1886.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref49" name="_ftn49"&gt;[49]&lt;/a&gt; Alex Hudson, “Is Small Print in Online Contracts Enforceable?,” &lt;i&gt;BBC News&lt;/i&gt;, accessed August 26, 2015, http://www.bbc.com/news/technology-22772321.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref50" name="_ftn50"&gt;[50]&lt;/a&gt; Aleecia M. McDonald and Lorrie Faith Cranor, “Cost of Reading Privacy Policies, The,” &lt;i&gt;I/S: A Journal of Law and Policy for the Information Society&lt;/i&gt; 4 (2009 2008): 541&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref51" name="_ftn51"&gt;[51]&lt;/a&gt; See supra note 41, at 4.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref52" name="_ftn52"&gt;[52]&lt;/a&gt; For Canada, see Office of the Privacy Commissioner of Canada, “Fact Sheet: Ten Tips for a Better Online Privacy Policy and Improved Privacy Practice Transparency,” October 23, 2013, &lt;a href="https://www.priv.gc.ca/resource/fs-fi/02_05_d_56_tips2_e.asp"&gt;https://www.priv.gc.ca/resource/fs-fi/02_05_d_56_tips2_e.asp&lt;/a&gt;. And Office of the Privacy Commissioner of Canada, “Privacy Toolkit - A Guide for Businesses and Organisations to Canada’s Personal Information Protection and Electronic Documents Act,” accessed August 26, 2015, https://www.priv.gc.ca/information/pub/guide_org_e.pdf.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;For USA, see Federal Trade Commission, “Internet of Things: Privacy &amp;amp; Security in a Connected World,” Staff Report (Federal Trade Commission, January 2015), https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref53" name="_ftn53"&gt;[53]&lt;/a&gt; See supra note 37, at 1889.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref54" name="_ftn54"&gt;[54]&lt;/a&gt; See supra note 39, at 261.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref55" name="_ftn55"&gt;[55]&lt;/a&gt; Jakki Geiger, “The Surprising Link Between Hurricanes and Strawberry Pop-Tarts: Brought to You by Clean, Consistent and Connected Data,” &lt;i&gt;The Informatica Blog - Perspectives for the Data Ready Enterprise&lt;/i&gt;, October 3, 2014, http://blogs.informatica.com/2014/03/10/the-surprising-link-between-strawberry-pop-tarts-and-hurricanes-brought-to-you-by-clean-consistent-and-connected-data/#fbid=PElJO4Z_kOu.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref56" name="_ftn56"&gt;[56]&lt;/a&gt; Constance L. Hays, “What Wal-Mart Knows About Customers’ Habits,” &lt;i&gt;The New York Times&lt;/i&gt;, November 14, 2004, http://www.nytimes.com/2004/11/14/business/yourmoney/what-walmart-knows-about-customers-habits.html.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref57" name="_ftn57"&gt;[57]&lt;/a&gt; M. J. de Zwart, S. Humphreys, and B. Van Dissel, “Surveillance, Big Data and Democracy: Lessons for Australia from the US and UK,” &lt;i&gt;Http://www.unswlawjournal.unsw.edu.au/issue/volume-37-No-2&lt;/i&gt;, 2014, &lt;a href="https://digital.library.adelaide.edu.au/dspace/handle/2440/90048"&gt;https://digital.library.adelaide.edu.au/dspace/handle/2440/90048&lt;/a&gt;: 722.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref58" name="_ftn58"&gt;[58]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref59" name="_ftn59"&gt;[59]&lt;/a&gt; See supra note 41, at 3.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref60" name="_ftn60"&gt;[60]&lt;/a&gt; Julie E. Cohen, “What Privacy Is For,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, November 5, 2012), http://papers.ssrn.com/abstract=2175406.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref61" name="_ftn61"&gt;[61]&lt;/a&gt; See supra note 37, at 1901.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref62" name="_ftn62"&gt;[62]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref63" name="_ftn63"&gt;[63]&lt;/a&gt; See supra note 37, at 1899.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref64" name="_ftn64"&gt;[64]&lt;/a&gt; Jon Leibowitz, “So Private, So Public: Individuals, The Internet &amp;amp; The paradox of behavioural marketing” November 1, 2007, https://www.ftc.gov/sites/default/files/documents/public_statements/so-private-so-public-individuals-internet-paradox-behavioral-marketing/071031ehavior_0.pdf: 6.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref65" name="_ftn65"&gt;[65]&lt;/a&gt; See supra note 5.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref66" name="_ftn66"&gt;[66]&lt;/a&gt; See supra note 37, at 1898.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref67" name="_ftn67"&gt;[67]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref68" name="_ftn68"&gt;[68]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref69" name="_ftn69"&gt;[69]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref70" name="_ftn70"&gt;[70]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref71" name="_ftn71"&gt;[71]&lt;/a&gt; See supra note 41, at 3.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref72" name="_ftn72"&gt;[72]&lt;/a&gt; See supra note 39, at 261.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref73" name="_ftn73"&gt;[73]&lt;/a&gt; Richard H. Thaler, “Making It Easier to Register as an Organ Donor,” &lt;i&gt;The New York Times&lt;/i&gt;, September 26, 2009, http://www.nytimes.com/2009/09/27/business/economy/27view.html.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref74" name="_ftn74"&gt;[74]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref75" name="_ftn75"&gt;[75]&lt;/a&gt; &lt;i&gt;The Oxford Introductions to U.S. Law: Contracts&lt;/i&gt;, 1 edition (New York: Oxford University Press, 2010): 67.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref76" name="_ftn76"&gt;[76]&lt;/a&gt; Francis M. Buono and Jonathan A. Friedman, “Maximizing the Enforceability of Click-Wrap Agreements,” &lt;i&gt;Journal of Technology Law &amp;amp; Policy&lt;/i&gt; 4, no. 3 (1999), http://jtlp.org/vol4/issue3/friedman.html.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref77" name="_ftn77"&gt;[77]&lt;/a&gt; North Carolina State University, “Clickwraps,” &lt;i&gt;Software @ NC State Information Technology&lt;/i&gt;, accessed August 26, 2015, http://software.ncsu.edu/clickwraps.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref78" name="_ftn78"&gt;[78]&lt;/a&gt; Ed Bayley, “The Clicks That Bind: Ways Users ‘Agree’ to Online Terms of Service,” &lt;i&gt;Electronic Frontier Foundation&lt;/i&gt;, November 16, 2009, https://www.eff.org/wp/clicks-bind-ways-users-agree-online-terms-service.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref79" name="_ftn79"&gt;[79]&lt;/a&gt; Ibid, at 2.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref80" name="_ftn80"&gt;[80]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref81" name="_ftn81"&gt;[81]&lt;/a&gt; See &lt;i&gt;Nguyen v. Barnes &amp;amp; Noble Inc&lt;/i&gt;., (9&lt;sup&gt;th&lt;/sup&gt; Cir. 2014), available &lt;a href="http://cdn.ca9.uscourts.gov/datastore/opinions/2014/08/18/12-56628.pdf"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref82" name="_ftn82"&gt;[82]&lt;/a&gt; See &lt;i&gt;Specht v. Netscape Communications Corp.&lt;/i&gt;,(2d Cir. 2002), available &lt;a href="http://cyber.law.harvard.edu/stjohns/Specht_v_Netscape.pdf"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref83" name="_ftn83"&gt;[83]&lt;/a&gt; See supra note 78, at 2.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref84" name="_ftn84"&gt;[84]&lt;/a&gt; See &lt;i&gt;In Re: Zappos.com, Inc., Customer Data Security Breach Litigation&lt;/i&gt;, No. 3:2012cv00325: pg 8 line 23-26, available &lt;a href="http://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=1152&amp;amp;context=historical"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref85" name="_ftn85"&gt;[85]&lt;/a&gt; See &lt;i&gt;Groff v. America Online&lt;/i&gt;, Inc., 1998, available &lt;a href="http://www.internetlibrary.com/cases/lib_case20.cfm"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref86" name="_ftn86"&gt;[86]&lt;/a&gt; Hotmail Corp. v. Van$ Money Pie, Inc., 1998, available &lt;a href="http://cyber.law.harvard.edu/property00/alternatives/hotmail.html"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref87" name="_ftn87"&gt;[87]&lt;/a&gt; ProCD Inc. v. Zeidenberg, (7th. Cir. 1996), available &lt;a href="https://www.law.cornell.edu/copyright/cases/86_F3d_1447.htm"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref88" name="_ftn88"&gt;[88]&lt;/a&gt; See supra note 78, at 1.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref89" name="_ftn89"&gt;[89]&lt;/a&gt; See supra note 78, at 2.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref90" name="_ftn90"&gt;[90]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref91" name="_ftn91"&gt;[91]&lt;/a&gt; Oliver Herzfeld, “Are Website Terms Of Use Enforceable?,” &lt;i&gt;Forbes&lt;/i&gt;, January 22, 2013, http://www.forbes.com/sites/oliverherzfeld/2013/01/22/are-website-terms-of-use-enforceable/.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref92" name="_ftn92"&gt;[92]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref93" name="_ftn93"&gt;[93]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref94" name="_ftn94"&gt;[94]&lt;/a&gt; See supra note 41, at 3.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref95" name="_ftn95"&gt;[95]&lt;/a&gt; Christopher Kuner et al., “The Challenge of ‘big Data’ for Data Protection,” &lt;i&gt;International Data Privacy Law&lt;/i&gt; 2, no. 2 (May 1, 2012): 47–49, doi:10.1093/idpl/ips003: 49.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref96" name="_ftn96"&gt;[96]&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref97" name="_ftn97"&gt;[97]&lt;/a&gt; See supra note 41, at 5.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref98" name="_ftn98"&gt;[98]&lt;/a&gt; See supra note 57, at 723.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref99" name="_ftn99"&gt;[99]&lt;/a&gt; Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, October 1, 2013), http://papers.ssrn.com/abstract=2325784: 109.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref100" name="_ftn100"&gt;[100]&lt;/a&gt; See supra note 41, at 13.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref101" name="_ftn101"&gt;[101]&lt;/a&gt; See supra note 41, at 5.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref102" name="_ftn102"&gt;[102]&lt;/a&gt; See supra note 52, Privacy Toolkit, at 14.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref103" name="_ftn103"&gt;[103]&lt;/a&gt; See supra note 41, at 6.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref104" name="_ftn104"&gt;[104]&lt;/a&gt; Siani Pearson and Marco Casassa Mont, “Sticky Policies: An Approach for Managing Privacy across Multiple Parties,” &lt;i&gt;Computer&lt;/i&gt;, 2011.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref105" name="_ftn105"&gt;[105]&lt;/a&gt; See supra note 34, at 138.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref106" name="_ftn106"&gt;[106]&lt;/a&gt; See supra note 34, at 118.&lt;/p&gt;
&lt;p&gt;&lt;a href="#_ftnref107" name="_ftn107"&gt;[107]&lt;/a&gt; See supra note 41, at 5.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref108" name="_ftn108"&gt;[108]&lt;/a&gt; See supra note 41, at 4.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/are-we-throwing-our-data-protection-regimes-under-the-bus'&gt;https://cis-india.org/internet-governance/blog/are-we-throwing-our-data-protection-regimes-under-the-bus&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Rohan George</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2015-09-10T14:02:08Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/dna-amber-sinha-march-10-2016-are-we-losing-right-to-privacy-and-freedom-of-speech-on-indian-internet">
    <title>Are we Losing the Right to Privacy and Freedom of Speech on Indian Internet?</title>
    <link>https://cis-india.org/internet-governance/blog/dna-amber-sinha-march-10-2016-are-we-losing-right-to-privacy-and-freedom-of-speech-on-indian-internet</link>
    <description>
        &lt;b&gt;The article was published in DNA on March 10, 2016.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Last month, it was reported that National Security Council Secretariat (NSCS) had proposed the &lt;a href="http://www.dnaindia.com/scitech/report-watch-what-you-post-soon-govt-to-install-media-cell-to-track-counter-negative-content-online-2181460"&gt;&lt;strong&gt;&lt;span style="text-decoration: underline;"&gt;setting up of a National Media Analytics Centre&lt;/span&gt;&lt;/strong&gt;&lt;span style="text-decoration: underline;"&gt; &lt;/span&gt;&lt;/a&gt;(NMAC).  This centre’s mandate would be to monitor blogs, media channels, news  outlets and social media platforms. Sources were quoted as stating that  the centre would rely upon a tracking software built by Ponnurangam  Kumaraguru, an Assistant Professor at the Indraprastha Institute of  Information Technology in Delhi. The NMAC seems to mirror other similar  efforts in countries such as &lt;strong&gt;&lt;a rel="nofollow" href="https://www.govtrack.us/congress/bills/114/hr3654/text" target="_blank"&gt;&lt;span style="text-decoration: underline;"&gt;US&lt;/span&gt;&lt;/a&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;a rel="nofollow" href="https://www.thestar.com/news/canada/2013/11/29/social_media_to_be_monitored_by_federal_government.html" target="_blank"&gt;&lt;span style="text-decoration: underline;"&gt;Canada&lt;/span&gt;&lt;/a&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;a rel="nofollow" href="http://www.smh.com.au/technology/technology-news/data-retention-and-the-end-of-australians-digital-privacy-20150827-gj96kq.html" target="_blank"&gt;&lt;span style="text-decoration: underline;"&gt;Australia&lt;/span&gt;&lt;/a&gt;&lt;a rel="nofollow" href="http://www.smh.com.au/technology/technology-news/data-retention-and-the-end-of-australians-digital-privacy-20150827-gj96kq.html" target="_blank"&gt;&lt;span style="text-decoration: underline;"&gt; &lt;/span&gt;&lt;/a&gt;&lt;/strong&gt;and &lt;strong&gt;&lt;a rel="nofollow" 
href="http://www.independent.co.uk/news/uk/politics/government-awards-contracts-to-monitor-social-media-and-give-whitehall-real-time-updates-on-public-10298255.html" target="_blank"&gt;&lt;strong&gt;&lt;span style="text-decoration: underline;"&gt;UK&lt;/span&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/strong&gt;, to monitor online content for reasons as varied as the prevention of terrorist activities, disaster relief and criminal investigation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The NSCS, the parent body that this centre will fall under, is a part of the National Security Council, India’s highest agency looking to integrate policy-making and intelligence analysis, and advising the Prime Minister’s Office on strategic issues as well as domestic and international threats. The NSCS represents the Joint Intelligence Committee and its duties include the assessment of intelligence from the Intelligence Bureau, Research and Analysis Wing (R&amp;amp;AW) and Directorates of Military, Air and Naval Intelligence, and the coordination of the functioning of intelligence agencies.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;From limited reports available, it appears that the tracking software used by NMAC will generate tags to classify post and comments on social media into negative, positive and neutral categories, paying special attention to “belligerent” comments. The reports say that the software will also try to determine if the comments are factually correct or not. The idea of a government agency systematically tracking social media, blogs and news outlets and categorising content as desirable and undesirable is bound to create a chilling effect on free speech online. The most disturbing part of the report suggested that the past pattern of writers’ posts would be analysed to see how often her posts fell under the negative category, and whether she was attempting to create trouble or disturbance, and appropriate feedback would be sent to security agencies based on it. Viewed alongside the recent events where actors critical of the government and holding divergent views have expressed concerns about attempts to suppress dissenting opinions, this initiative sounds even more dangerous, putting at risk individuals categorised as “negative” or “belligerent”, for exercising their constitutionally protected right to free speech.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/copy2_of_FB.jpg" alt="FB" class="image-inline" title="FB" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;i&gt;Getty Images&lt;/i&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It has been argued that the Internet is a public space, and should be treated as subject to monitoring by the government as any other space. Further, this kind of analysis does not concern itself with private communication between two or more parties but only with publicly available information. Why must we raise eyebrows if the government is accessing and analysing it for the purposes of legitimate state interests? There are two problems with this argument. First, any surveillance of communication must always be limited in scope, specific to individuals, necessary and proportionate, and subject to oversight. There are no laws passed by the Parliament in India which allow for mass surveillance measures. Such activities are being conducted through bodies like NSC which came into existence through an Executive Order and have no clear oversight mechanisms built into its functioning. A quick look at the history of intelligence and surveillance agencies in India will show that none of them have been created through a legislation. A host of surveillance agencies have come up in the last few years including the Central Monitoring System, which was set up to monitor telecommunications, and the absence of legislative pedigree translates into lack of appropriate controls and safeguards, and zero public accountability.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second and the larger issue is that the scale and level of granularity of personal information available now is unprecedented. Earlier, our communications with friends and acquaintances, our movements, our association, political or otherwise, were not observable in the manner it is today. It would be remiss to underestimate the importance of personal information merely because it exists in the public domain. The ability to act without being subject to monitoring and surveillance is key to the right to free speech and expression. While we accept the importance of free speech and the value of an open internet and newer technologies to enable it, we do not give sufficient importance to how these technologies are affecting the right to privacy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/Tweets.jpg" alt="Tweets" class="image-inline" title="Tweets" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Getty Images&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the last few years, the social media scene in India has been characterised by extreme polemic with epithets such as ‘bhakt’, ‘sanghi’, ‘sickular’ and ‘presstitutes’ thrown around liberally, turning political discussions into a mess of ugliness. It remains to be seen whether the NMAC intends to deal with the professional trolls who rely on a barrage of abuse to disrupt public conversations online. However, the appropriate response would not be greater surveillance, let alone a body like NMAC, with a sweeping mandate and little accountability.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Link to the original &lt;a class="external-link" href="http://www.dnaindia.com/scitech/column-are-we-losing-the-right-to-privacy-and-freedom-of-speech-on-indian-internet-2187527"&gt;here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/dna-amber-sinha-march-10-2016-are-we-losing-right-to-privacy-and-freedom-of-speech-on-indian-internet'&gt;https://cis-india.org/internet-governance/blog/dna-amber-sinha-march-10-2016-are-we-losing-right-to-privacy-and-freedom-of-speech-on-indian-internet&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amber Sinha</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Surveillance</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2016-03-16T14:44:19Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/economic-times-february-20-2019-are-rss-fears-about-tik-tok-true">
    <title>Are RSS's fears about Tik Tok true? Here's what you should know</title>
    <link>https://cis-india.org/internet-governance/news/economic-times-february-20-2019-are-rss-fears-about-tik-tok-true</link>
    <description>
        &lt;b&gt;Swadeshi Jagran Manch has flagged security, business and social risks posed by Chinese apps such as TikTok. The RSS fears may not be totally unfounded.&lt;/b&gt;
        &lt;p&gt;The article was &lt;a class="external-link" href="https://economictimes.indiatimes.com/news/politics-and-nation/are-rsss-fears-about-tik-tok-true-heres-what-you-should-know/articleshow/68066972.cms"&gt;published in Economic Times&lt;/a&gt; on February 20, 2019. Shweta Mohandas was quoted. The story was also published by &lt;a class="external-link" href="https://www.moneycontrol.com/news/india/rss-calls-for-ban-on-chinese-social-media-apps-like-tik-tok-like-3562401.html"&gt;Moneycontrol News&lt;/a&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Should India let Chinese social media apps and telecom companies proliferate in India? Swadeshi Jagran Manch (SJM), the economic wing of the Rashtriya Swayamsevak Sangh has written to Prime Minister Narendra Modi for a ban on these Chinese companies.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The statement comes days after the Pulwama attack by terrorists of Jaish-e-Muhammad (JeM). China has repeatedly helped Pakistan by blocking India’s efforts to get Pakistan-based JeM chief Masood Azhar listed by the UN Security Council as a global terrorist.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;SJM has flagged security, business and social risks posed by Chinese apps such as hugely popular TikTok.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The RSS fears may not be totally unfounded.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;TikTok, Kwai and LIKE have been downloaded by millions of smartphone users in small town India who are using them to share personal videos, away from the glare of scrutiny that falls on more mainstream social media platforms.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In November last year, ET reviewed more than 20 Chinese video apps that dominate the mobile entertainment network of tier-2 and tier-3 cities mostly thanks to titillating videos, suggestive notifications, risqué humour and raunchy content.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Chinese apps pose several potential risks, Swetha Mohandas, policy officer at the Center for Internet and Society, an advocacy group, told ET in November last year. “The draft DP (data protection) Bill in the current stage provides greater responsibility on data fiduciaries to maintain the privacy of the individual and the security of the data,” she said. “There are a lot of questions that these apps pose with respect to the Bill, some of them being the security, the data storage provision, the personal data of children, and most importantly that these apps might have recordings that might be sensitive personal data.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Most of these apps including TikTok explicitly state that though they have appropriate technical and organisational measures in place, “they cannot guarantee the security of your information transmitted through the platform”.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;TikTok, the popular lip-sync app, is filled with 15-second clips of meme-friendly content featuring its youthful users miming to their favourite songs. The videos range from the harmless to the explicit, depending upon the users followed. The app has gone viral, having racked up close to 100 million downloads and with 20 million monthly active users in India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While all such apps carry a disclaimer stating that they are not directed at children, their target audience encompasses preteens and adolescents in tier-2 and tier-3 cities, according to experts.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Despite the rapidly growing user base, apps like TikTok don’t have a grievance redressal officer in India. The government is insisting on this for all major social media platforms.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In its letter to the PM, SJM said it was the duty of all Indians to take steps to prevent the economic gains of any nation or individual that directly or tacitly supports terrorists.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Referring to India putting economic pressure on Pakistan, SJM said, “At such a time, we believe it is imperative that the government create similar hurdles for Chinese companies that are using India for their economic gain. As has been said often, data is now considered the new oil. We should not allow Chinese companies to capture Indian user data without any restrictions and monitoring.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Bytedance's response: TikTok and Helo are committed to respecting local laws and regulations as well as maintaining a safe and positive in-app environment for our users in India. There is no basis for the factually incorrect claims raised by certain groups recently. We treat the safety and security of our user data very seriously. Moreover, we have robust measures to protect users against misuse, including easy reporting mechanisms that enable users and law enforcement to report content that violates our terms of use and community guidelines.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/economic-times-february-20-2019-are-rss-fears-about-tik-tok-true'&gt;https://cis-india.org/internet-governance/news/economic-times-february-20-2019-are-rss-fears-about-tik-tok-true&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-02-22T02:13:35Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/deccan-herald-october-10-2018-anila-kurian-are-online-shows-obscene">
    <title>Are online shows obscene?</title>
    <link>https://cis-india.org/internet-governance/news/deccan-herald-october-10-2018-anila-kurian-are-online-shows-obscene</link>
    <description>
        &lt;b&gt;Should content on online platforms such as Netflix be monitored and censored? How can they show nudity when films made for the cinema halls can’t, a petition wants to know.
&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Anila Kurian was &lt;a class="external-link" href="https://www.deccanherald.com/metrolife/are-online-shows-obscene-697197.html"&gt;published in Deccan Herald&lt;/a&gt; on October 10, 2018. Akriti Bopanna was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Last Friday, the Bombay High Court issued a notice to the Ministry of Information and Broadcasting over a public interest case seeking regulation of content online.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The petitioner, Divya Ganeshprasad Gontia, finds content on online platforms such as Netflix “vulgar and obscene.” The PIL argues that broadcasting nude or vulgar scenes in a cognisable offence under the Indian Penal Code, the Cinematograph Act, Indecent Representation of Women (Prohibition) Act of 1986 and the Information Technology Act of 2000.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Advocate Shyam Dewani, the petitioner’s lawyer in Mumbai, spoke extensively to Metrolife about the case.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“There is a falling standard when it comes to web series nowadays. It started with just movies but now, with the form becoming popular, competitors have started making other shows. The more liberal the forum became, the more obscene the content became,” he says. Many shows are now vulgar and hurt religious sentiments. There must be some restriction, he argues.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;No web series can be above the law and creators should follow guidelines, Dewani contends.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“There are extensive advertisements promoting shows on these online portals and even children have access to them. Other countries have regulations like parental control, for example, so why can’t we?” he says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Shows like ‘Gandi Baat’ on ALTBalaji and ‘Sacred Games’ on Netflix feature ‘vulgar content,’ and are offensive to women, the petition alleges. One of the ways in which this could be curbed, the petitioner argues, is for the I&amp;amp;B ministry to set up a pre-screening committee for web shows, films and other content released directly on these platforms.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Even though these shows have been marked ‘18+’ as their certication, there is no authority making the certication, says Dewani.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Who are you to self-certify your own show? If content for movies also followed the pattern, there would be a huge hue and cry. So regulating it is all we are asking for,” he says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Akriti Bopanna, policy officer at the Centre for Internet and Society, Bengaluru, says it is true that sections of the IPC say you can’t broadcast nudity and vulgar content. “Because online portals directly publish on the Internet, there is no one to check them. There’s a sense that since there’s no censor board, they can get away with anything," she says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The audiences are different for online platforms, however.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“While we see films that are bold and good for society, there are others that are the complete opposite. There are no hard and fast rules to say if this is good or bad for freedom of speech. Right now, this could go either way,” she says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Here’s what others in the entertainment industry say.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Danish Sait, Actor: Moral policing isn’t good&lt;/b&gt;&lt;br /&gt;“I’ve grown to realise that we can’t have a blanket rule to anything in our country. The online medium is the one section of society that is desperately trying to be liberal but that’s also under the scanner now. I understand that kids are exposed to and impacted by the digital world, in which case, this doesn’t seem like such a bad idea. But it also feels like everything is looked at from a moral policing point of view. Even I find some shows explicit. But as an adult, I make the choice to move on. I think this can be solved if there is parental control. So, for the rest of the world, live and let live.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Kubbra Sait, Actor (Cuckoo in Sacred Games) : They shouldn't dictate terms&lt;/b&gt;&lt;br /&gt;“Censoring online content is unfair. If the content is not something that causes any communal riot affects the peace and dynamics of our country or society, it should not be censored. We need the authorities to give us guidelines on who can consume it, but they shouldn’t dictate what the content should be.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Children are uploading videos of harming themselves, and that is not regulated. Artistes have reached where they have now because they broke barriers set in the past. If a committee starts dictating things to us, we will go back to regressive content. We used to applaud husbands slapping their wives for infidelity, and understood that two bobbing flowers was a symbol of sex. It’s 2018 and we’ve evolved. So please give us the freedom somewhere. Let me choose the content I want and watch it where I want.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Rasika Dugal, Actor: Don't curb free expression&lt;/b&gt;&lt;br /&gt;“I have never been in favour of censorship. I feel you should consume and make sense of the material according to your own sensibility. But there seems to be an understanding in society that some things need regulation. As an artiste, I am against that. Having said that, if there are already certain checks and balances in place, which aren’t curbing your freedom of expression, then it shouldn’t be a problem. I hope, when this eventually rolls out, it doesn’t become a place from which everything is looked at from a moral high ground and doesn’t take away from your freedom of expression.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Pawan Kumar, Director: Self-censorship works better&lt;/b&gt;&lt;br /&gt;“At the end of the day, censorship is a personal choice. No matter how much the government does whatever it does, people will find a way to watch what they want. By restricting like this, I think you are only affecting creative jobs. There could be good content that we might miss out on. Then again, content creators will always and newer ways to bring out their stories, whether it is adult-oriented or sensitive issue-oriented. The more you barricade it, the more new mediums will come out. It’s an on-going journey. I think what should probably be done is to educate one about self-censorship; you decide what to watch and not to.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Why the furore?&lt;/b&gt;&lt;br /&gt;Many films and series on Netflix feature explicit scenes of lovemaking. The narratives of Lust Stories and Sacred Games are spiced with scenes rarely seen before on the big screen in India. Content for web content is not screened by any authority, and therefore enjoys greater freedom than regular films, which must go through a certification process controlled by the government.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Ball in court&lt;/b&gt;&lt;br /&gt;The High Court has sought responses from the ministries of information technology, law and home affairs. They have to respond by October 31.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/deccan-herald-october-10-2018-anila-kurian-are-online-shows-obscene'&gt;https://cis-india.org/internet-governance/news/deccan-herald-october-10-2018-anila-kurian-are-online-shows-obscene&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-10-16T15:58:40Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/livemint-november-22-2018-abhijit-ahaskar-are-connected-tech-toys-too-smart-for-their-own-good">
    <title>Are connected tech toys too smart for their own good?</title>
    <link>https://cis-india.org/internet-governance/news/livemint-november-22-2018-abhijit-ahaskar-are-connected-tech-toys-too-smart-for-their-own-good</link>
    <description>
        &lt;b&gt;Despite their merits, connected toys raise a few concerns about data privacy and security.&lt;/b&gt;
        &lt;p class="S5l" style="text-align: justify; "&gt;The article by Abhijit Ahaskar was published in &lt;a class="external-link" href="https://www.livemint.com/Technology/X3keKtXFYPKbAIHHqja2xJ/Are-connected-tech-toys-too-smart-for-their-own-good.html"&gt;Livemint&lt;/a&gt; on November 22, 2018. Sunil Abraham was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p class="S5l" style="text-align: justify; "&gt;In  today’s connected world comprising the Internet of Things (IoT), smart  tech toys are here to stay. These toys, for instance, can make learning  fun for children and help parents keep track of their  whereabouts.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;CogniToys’  Dino, a case in point, uses Wi-Fi to stay connected and IBM Watson’s  natural language processing (NLP) technology to tailor its responses to  suit a child’s age group and skill level. The little connected toy can  teach children how to spell words and even admonish them if they use  expletives.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, it’s this very prowess that can raise  privacy risks too. A 2017 security audit cautioned that Dino transmitted  information without using encryption, leaving a child’s information  vulnerable. When Mozilla reached out to the company in 2018, the company  claimed, “Dino uses encryption for all audio traffic and in fact, each  one uses unique keys, which are also cycled per session per device.”  However, experts at Mozilla could not determine if the toy actually uses  encryption of any kind.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another smart toy called i-Que  Intelligent Robot by Genesis Toys, uses Bluetooth to connect to a phone  via its app, but doesn’t encrypt the pairing process, allowing anyone in  the same Bluetooth range to download the app on another smartphone,  connect to the toy and start chatting with the child.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Flying  drones are another fad with children these days. India’s new drone  policy allows users to fly anything under 250g below 50m without  requiring registration or license. Even if children are using something  like the DJI Spark for fun and taking selfies, the privacy risks can’t  be ignored. Not only have DJI Spark drones been reportedly hacked in the  past, they also lack parental controls, do not encrypt user data and  have been found to share information with third parties, according to &lt;a href="https://mzl.la/2zOK4II"&gt;Mozilla’s &lt;/a&gt;&lt;a href="https://mzl.la/2zOK4II"&gt;Privacy Not Included&lt;/a&gt;&lt;a href="https://mzl.la/2zOK4II"&gt; report&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Nevertheless,  the market for smart toys is growing. According to a study by US-based  Transparency Market Research, the smart toys market is largely  fragmented but is expected to reach $69.9 billion by 2026.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In  India, the market for smart toys is still small compared to generic  plastic toys but the demand is increasing, particularly in cities such  as New Delhi, Bangalore, Mumbai and Hyderabad, according to Vivek Goyal,  co-founder of PlayShifu—a tech start-up known for its augmented  reality-based smart toys such as the Orboot globe.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Experts point  out that since these toys use microphones, cameras, Bluetooth, Wi-Fi and  data collated from users is stored on a remote server, it makes them as  vulnerable as any other  connected device.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“As an individual  user, when you buy such toys you are giving them the right to utilise  that data. However, if there are laws and frameworks which can mandate  toy companies to have stringent privacy policies, misuse of the data can  be curtailed,” says Rohan Vaidya, regional director of sales, India, at  CyberArk, an IT security firm.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;To be sure, smart toys  are  “currently regulated under 43A of IT Act and when the new data  protection laws are enacted, the new set of rules will apply to them”,  says Sunil Abraham, executive director, Centre for Internet and Society,  a Bengaluru-based research organisation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Goyal, on his part,  acknowledges the concerns that security experts and parents may have  about such devices. He believes toy makers need to be more upfront about  what data they are collecting through the app or the toy, which would  make smart toys more acceptable to parents.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/livemint-november-22-2018-abhijit-ahaskar-are-connected-tech-toys-too-smart-for-their-own-good'&gt;https://cis-india.org/internet-governance/news/livemint-november-22-2018-abhijit-ahaskar-are-connected-tech-toys-too-smart-for-their-own-good&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-12-06T02:47:03Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
