<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">
<channel rdf:about="https://cis-india.org/internet-governance/blog/online-anonymity/search_rss">
  <title>We are anonymous, we are legion</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 471 to 485.</description>
  
  
  
  
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/symposium-on-data-privacy-and-citizens-rights"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/about/newsletters/august-2018-newsletter"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/medianama-rana-september-9-2018-namaprivacy-data-protection-authoritys-regulatory-and-enforcement-challenges"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-governance-sector-in-india"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/hindu-businessline-swaraj-paul-barooah-september-7-2018-indias-post-truth-society"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/deccan-herald-september-5-2018-surupasree-sarmmah-can-this-curb-your-addiction"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/ai-in-india-a-policy-agenda"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/huffington-post-august-25-2018-paul-bluementhal-and-gopal-sathe-indias-biometric-database-is-creating-a-perfect-surveillance-state"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/jobs/cis-policy-officer-internet-governance"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/events/symposium-on-india2019s-cyber-strategy"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/world-library-and-information-congress-2018"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/celebrating-one-year-of-the-justice-k-s-puttaswamy-v-union-of-india-judgment"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/hindustan-times-rachel-lopez-august-26-2018-20-years-of-google-privacy-fake-news-and-future"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/unescap-google-ai-meeting"/>
        
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/news/symposium-on-data-privacy-and-citizens-rights">
    <title>Symposium on Data Privacy and Citizen's Rights</title>
    <link>https://cis-india.org/internet-governance/news/symposium-on-data-privacy-and-citizens-rights</link>
    <description>
        &lt;b&gt;Shweta Mohandas was a panelist at the Symposium on Data Privacy and Citizen's Rights on September 9, 2018. The Symposium was organised by the Tech Law Forum of NALSAR University of Law, Hyderabad. &lt;/b&gt;
        &lt;h3 style="text-align: justify; "&gt;Concept Note&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The National Academy of Legal Studies and Research (NALSAR) University of Law, Hyderabad is organising a Symposium on DATA PRIVACY AND CITIZEN’S RIGHTS to provide multiple stakeholders one platform to discuss and deliberate on the BN Srikrishna Committee Report and Draft Bill. &lt;br /&gt; &lt;br /&gt;The Committee, headed by Retd. Justice BN Srikrishna, released its Report and Draft Bill on July 27, 2018. It comes at a time of increasing discussion about individual privacy and surveillance by both private organisations and state authorities. Especially in light of the 9-judge Puttaswamy judgment affirming the Fundamental Right to Privacy, there was a need to concretise the right in the form of a statute. The Bill proposes an elaborate data protection framework utilising concepts such as anonymisation, pseudonymisation, data localisation, and guardian data fiduciary, among others. While the Bill has been lauded for providing a data protection framework largely similar to the one proposed by civil society, there are several areas of concern, such as the amendments suggested to the RTI Act, the impact of the Bill on free speech, and the lack of substantial provisions regarding surveillance. There has been further criticism that the discussions regarding these issues have been conducted in silos, with little to no dialogue taking place between the various stakeholders and experts in the field. &lt;br /&gt; &lt;br /&gt;We believe that there is a need to provide a common forum for these stakeholders to interact with each other in providing suggestions that are representative in nature and nuanced in their expression.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Themes&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Privacy and Free Speech&lt;/b&gt; - This interaction aims to examine the juxtaposition of the constitutional right to free speech and the now constitutionally affirmed right to privacy. Will a new data protection law impact the publication of leaked documents or sting operations like the Radia tapes or Tehelka’s ‘Operation Westend’? If so, how can journalists mitigate the risk of being sued for breach of privacy? While the jurisprudence concerning the right to privacy is at its most nascent state, it is important for us to explore its contours in light of already established constitutional guarantees. &lt;br /&gt; &lt;br /&gt;&lt;b&gt;Right to Information and Right to Privacy&lt;/b&gt; - How does the right to privacy impact the right to information? These two rights arise from diametrically opposite ideologies: privacy aims to shield from the public domain information and data concerning individuals and institutions, while the right to information aims to promote transparency and disclosure of information held by the state. However, the question remains: is the existence of these two rights necessarily mutually exclusive? Will a new data protection law make it difficult to promote transparency under the Right to Information Act? Is there a possibility of a clash between the Information Commissions and the proposed Data Protection Authority? This panel would analyze the co-existence and competitive nature of these two rights in the context of the Indian legal space. &lt;br /&gt; &lt;br /&gt;&lt;b&gt;Surveillance&lt;/b&gt; - As we move towards a form of governance that is increasingly capable of surveilling individual movements and actions, it becomes extremely necessary for us to understand the nature of surveillance. Can data privacy be compromised for surveillance that may be necessary for increased safety in our physical and virtual living spaces? Are there any provisions that protect data in cases where it becomes exploitable? What is the interaction between international instruments (like the ICCPR) and the latest Indian statute in terms of recognising the necessity of surveillance as against the necessity of data protection?&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/symposium-on-data-privacy-and-citizens-rights'&gt;https://cis-india.org/internet-governance/news/symposium-on-data-privacy-and-citizens-rights&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-09-18T15:18:37Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/about/newsletters/august-2018-newsletter">
    <title>August 2018 Newsletter</title>
    <link>https://cis-india.org/about/newsletters/august-2018-newsletter</link>
    <description>
        &lt;b&gt;CIS newsletter for the month of August 2018.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;&lt;span&gt;Dear readers,&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Previous issues of the newsletters can be &lt;a class="external-link" href="http://cis-india.org/about/newsletters"&gt;accessed here&lt;/a&gt;.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Highlights&lt;/h2&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;India houses the second-largest population in the world, at approximately 1.35 billion individuals. In such a diverse and dense context, law enforcement can be a challenging job. Elonnai Hickok and Vipul Kharbanda &lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/an-analysis-of-the-cloud-act-and-implications-for-india"&gt;throw light on the CLOUD Act and its implications for India in a blog post&lt;/a&gt;. &lt;/li&gt;
&lt;li style="text-align: justify; "&gt;On August 9, 2018, the DNA Technology (Use and Application) Regulation Bill, 2018 was introduced in the Lok Sabha. CIS had commented on some key aspects of the bill in many forums earlier. Elonnai Hickok and Murali Neelakantan in an article &lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/bloomberg-quint-elonnai-hickok-and-murali-neelakantan-august-20-2018-dna-evidence-only-opinion-not-science-and-definitely-not-proof-of-crime"&gt;published by Bloomberg Quint&lt;/a&gt; have voiced their opinion on the bill.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;Murali Neelakantan, Swaraj Barooah, Swagam Dasgupta and Torsha Sarkar in an &lt;a class="external-link" href="http://cis-india.org/internet-governance/blog/bloomberg-quint-murali-neelakantan-swaraj-barooah-swagam-dasgupta-torsha-sarkar-august-14-2018-national-health-stack-data-for-datas-sake-a-manmade-health-hazard"&gt;Op-ed in Bloomberg Quint&lt;/a&gt; have examined the National Health Stack, an ambitious attempt by the government to build a digital infrastructure with a “deep understanding of the incentive structures prevalent in the Indian healthcare ecosystem”. The authors have argued that collection of health data, without sensitisation and accountability, has the potential to deny healthcare to the vulnerable.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;An article titled &lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/nlud-student-law-journal-sunil-abraham-mukta-batra-geetha-hariharan-swaraj-barooah-and-akriti-bopanna-indias-contribution-to-internet-governance-debates"&gt;India's Contribution to Internet Governance Debates&lt;/a&gt;, co-authored by Sunil Abraham, Mukta Batra, Geetha Hariharan, Swaraj Barooah and Akriti Bopanna, was published in the NLUD Student Law Journal, an annual peer-reviewed journal published by the National Law University, Delhi. &lt;/li&gt;
&lt;li style="text-align: justify; "&gt;The ‘Workshop on the IT/IT-eS Sector and the Future of Work in India’ was organized at Omidyar Networks’ office in Bangalore on June 29, 2018. Torsha Sarkar, Ambika Tandon and Aayush Rath &lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/future-of-work-report-of-the-workshop-on-the-it-it-es-sector-and-the-future-of-work-in-india"&gt;co-authored a report of the workshop&lt;/a&gt;.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;Swaraj Barooah and Gurshabad Grover &lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/livemint-july-24-2018-swaraj-barooah-and-gurshabad-grover-anti-trafficking-bill-may-lead-to-censorship"&gt;co-authored an article in Livemint&lt;/a&gt; that examines a few problematic provisions in the proposed Anti-trafficking Bill. The authors say that it may lead to censorship.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;The Researchers at Work programme of CIS &lt;a class="external-link" href="https://cis-india.org/raw/call-for-essays-offline"&gt;had invited abstracts for essays&lt;/a&gt; that explore dimensions of offline lives. Selected authors are expected to submit the first draft of the essay (2000-4000 words) by Friday, October 5, 2018.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Articles&lt;/h2&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/livemint-july-24-2018-swaraj-barooah-and-gurshabad-grover-anti-trafficking-bill-may-lead-to-censorship"&gt;Anti-trafficking Bill may lead to censorship&lt;/a&gt; (Swaraj Barooah and Gurshabad Grover; Livemint; July 24, 2018). &lt;i&gt;The article was mirrored on CIS website in the month of August 2018&lt;/i&gt;.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/bloomberg-quint-august-6-2018-murali-neelakantan-swaraj-barooah-swagam-dasgupta-torsha-sarkar-national-health-stack-an-expensive-temporary-placebo"&gt;The National Health Stack: An Expensive, Temporary Placebo&lt;/a&gt; (Murali Neelakantan, Swaraj Barooah, Swagam Dasgupta, and Torsha Sarkar; Bloomberg Quint; August 6, 2018).&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/raw/indian-express-august-12-2018-nishant-shah-digital-native-double-speak"&gt;Digital Native: Double Speak&lt;/a&gt; (Nishant Shah; Indian Express; August 12, 2018).&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/nlud-student-law-journal-sunil-abraham-mukta-batra-geetha-hariharan-swaraj-barooah-and-akriti-bopanna-indias-contribution-to-internet-governance-debates"&gt;India's Contribution to Internet Governance Debates&lt;/a&gt; (Sunil Abraham, Mukta Batra, Geetha Hariharan, Swaraj Barooah and Akriti Bopanna; NLUD Student Law Journal; August 16, 2018).&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/blog/bloomberg-quint-murali-neelakantan-swaraj-barooah-swagam-dasgupta-torsha-sarkar-august-14-2018-national-health-stack-data-for-datas-sake-a-manmade-health-hazard"&gt;National Health Stack: Data for Data's Sake, A Manmade Health Hazard &lt;/a&gt;(Murali Neelakantan, Swaraj Barooah, Swagam Dasgupta and Torsha Sarkar; Bloomberg Quint; August 17, 2018).&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/bloomberg-quint-elonnai-hickok-and-murali-neelakantan-august-20-2018-dna-evidence-only-opinion-not-science-and-definitely-not-proof-of-crime"&gt;DNA ‘Evidence’: Only Opinion, Not Science, And Definitely Not Proof Of Crime!&lt;/a&gt; (Elonnai Hickok and Murali Neelakantan; Bloomberg Quint; August 22, 2018).&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/raw/indian-express-august-26-2018-nishant-shah-digital-native-playing-god"&gt;Digital Native: Playing God&lt;/a&gt; (Nishant Shah; Indian Express; August 26, 2018).&lt;/li&gt;
&lt;/ul&gt;
&lt;ul&gt;
&lt;/ul&gt;
&lt;h2&gt;CIS in the News&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/undp-august-1-2018-undp-joins-tech-giants-in-partnership-on-ai"&gt;UNDP joins Tech Giants in Partnership on AI&lt;/a&gt; (UNDP; August 1, 2018). CIS is one of the partners.&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/livemint-august-3-2018-uidai-says-asked-nobody-to-add-the-helpline-number-to-contacts"&gt;UIDAI says asked nobody to add the helpline number to contacts&lt;/a&gt; (Komal Gupta; Livemint; August 3, 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/economic-times-august-10-2018-mugdha-variyar"&gt;How Chinese apps are making inroads in Indian small towns&lt;/a&gt; (Mugdha Variyar; Economic Times; August 10, 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/factor-daily-anand-murali-august-13-2018-the-big-eye"&gt;The Big Eye: The tech is all ready for mass surveillance in India&lt;/a&gt; (Anand Murali; Factor Daily; August 13, 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/hindustan-times-august-21-2018-centre-draws-red-lines-for-whatsapp-over-fake-news-says-must-comply-with-indian-laws"&gt;Centre draws red lines for Whatsapp over fake news, says must comply with Indian laws&lt;/a&gt; (Nakul Sridhar; Hindustan Times; August 21, 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/the-straits-times-august-24-2018-debarshi-dasgupta-india-steps-up-vigilance-against-whatsapp-abuse"&gt;India steps up vigilance against WhatsApp abuse&lt;/a&gt; (Debarshi Dasgupta; Straits Times; August 24, 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/news/huffington-post-august-25-2018-paul-bluementhal-and-gopal-sathe-indias-biometric-database-is-creating-a-perfect-surveillance-state"&gt;India’s Biometric Database Is Creating A Perfect Surveillance State — And U.S. Tech Companies Are On Board&lt;/a&gt; (Paul Blumenthal and Gopal Sathe; Huffington Post; August 25, 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/hindustan-times-rachel-lopez-august-26-2018-20-years-of-google-privacy-fake-news-and-future"&gt;20 years of Google: Privacy, fake news and the future&lt;/a&gt; (Rachel Lopez; Hindustan Times; August 26, 2018).&lt;br /&gt;&lt;br /&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;a href="http://cis-india.org/a2k"&gt;Access to Knowledge&lt;/a&gt;&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Our Access to Knowledge programme currently consists of two projects.  The Pervasive Technologies project, conducted under a grant from the  International Development Research Centre (IDRC), aims to conduct  research on the complex interplay between low-cost pervasive  technologies and intellectual property, in order to encourage the  proliferation and development of such technologies as a social good. The  Wikipedia project, which is under a grant from the Wikimedia  Foundation, is for the growth of Indic language communities and projects  by designing community collaborations and partnerships that recruit and  cultivate new editors and explore innovative approaches to building  projects.&lt;/p&gt;
&lt;h3&gt;Wikipedia&lt;/h3&gt;
&lt;p&gt;&lt;b&gt;Blog Entry&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/blogs/c2ec3fc38c3fc2ec3f-c2ac24c4dc30c3fc15-c17c4dc30c02c25c3ec32c2fc02c32c4b-c24c46c32c41c17c41-c35c3fc15c40c2ac40c21c3fc2fc28c4dc32-c15c3ec30c4dc2fc15c4dc30c2ec02"&gt;Telugu Wikipedians’ programme at the Misimi magazine library&lt;/a&gt; (Pavan Santhosh; August 22, 2018).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;b&gt;Events Organized&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/blogs/partnership-activity-in-annamayya-library-guntur"&gt;Wikipedia:Meetup/Guntur/Annamayya Library - Partnership activity, July 2018&lt;/a&gt; (Organized by CIS-A2K; Annamaya Library; Guntur; July 10, 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/blogs/workshop-of-publishers-and-writers-on-unicode-open-source-and-wikimedia-projects"&gt;Workshop of Publishers and Writers on Unicode, Open Source and Wikimedia Projects&lt;/a&gt; (Organized by CIS-A2K; Pune; July 25, 2018). &lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/blogs/workshop-of-river-activists-for-building-jal-bodh-knowledge-resource-on-water"&gt;Workshop of River activists for building Jal Bodh - Knowledge resource on Water&lt;/a&gt; (Organized by CIS-A2K; Pune; July 25, 2018). &lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/blogs/tumakur%20university-workshop"&gt;Wikipedia:Edit-a-thons/Tumakur University Edit-a-thon 2018&lt;/a&gt; (Organized by CIS-A2K; Tumakur University; July 25, 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://meta.wikimedia.org/wiki/Intensive_Personalised_Wiki_Training_Session_at_Pune"&gt;Intensive Personalised Wiki Training Session at Pune&lt;/a&gt; (Organized by CIS-A2K; August 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://meta.wikimedia.org/wiki/Wikisource_and_Wiki_technical_session_at_MKCL,_Pune"&gt;Wikisource and Wiki technical session at MKCL&lt;/a&gt; (Organized by CIS-A2K; Pune; August 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://meta.wikimedia.org/wiki/Wiki_technical_orientation_session_with_PyLadies_group_at_Cummins_College_of_Engineering,_Pune"&gt;Wiki technical orientation session with PyLadies group&lt;/a&gt; (Organized by CIS-A2K; Cummins College of Engineering, Pune; August 7, 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://meta.wikimedia.org/wiki/Indian_Independence_Struggle_Edit-a-thon_on_Marathi_Wikipedia"&gt;Indian Independence Struggle Edit-a-thon on Marathi Wikipedia&lt;/a&gt; (Organized by CIS-A2K; August 10 - 20, 2018).&lt;/li&gt;
&lt;/ul&gt;
&lt;div&gt;&lt;b&gt;Event Participation&lt;/b&gt;&lt;/div&gt;
&lt;div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/blogs/partnership-discussions-with-misimi-telugu-monthly-magazine"&gt;Wikipedia:Meetup/Hyderabad/Misimi magazine partnership meeting, July 2018&lt;/a&gt; (July 24, 2018). CIS-A2K held partnership discussions with Misimi Telugu monthly magazine. &lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div&gt;&lt;/div&gt;
&lt;div&gt;Note: &lt;i&gt;Event reports for all these were published in the month of August 2018&lt;/i&gt;.&lt;/div&gt;
&lt;div&gt;&lt;/div&gt;
&lt;h2&gt;&lt;a href="http://cis-india.org/internet-governance"&gt;Internet Governance&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;As part of its research on privacy and free speech, CIS is engaged with  two different projects. The first one (under a grant from Privacy  International and IDRC) is on surveillance and freedom of expression  (SAFEGUARDS). The second one (under a grant from MacArthur Foundation)  is on restrictions that the Indian government has placed on freedom of  expression online.&lt;/p&gt;
&lt;h3&gt;Privacy&lt;/h3&gt;
&lt;p&gt;&lt;b&gt;Blog Entries&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/use-of-visuals-and-nudges-in-privacy-notices"&gt;Use of Visuals and Nudges in Privacy Notices&lt;/a&gt; (Saumyaa Naidu; edited by Elonnai Hickok and Amber Sinha; August 18, 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/an-analysis-of-the-cloud-act-and-implications-for-india"&gt;An Analysis of the CLOUD Act and Implications for India&lt;/a&gt; (Elonnai Hickok and Vipul Kharbanda; August 22, 2018).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/consumer-care-society-silver-jubilee-year-celebrations"&gt;Consumer Care Society: Silver Jubilee Year Celebrations&lt;/a&gt; (Arindrajit Basu; August 27, 2018).&lt;/li&gt;
&lt;/ul&gt;
&lt;div&gt;&lt;b&gt;Event Participation&lt;/b&gt;&lt;/div&gt;
&lt;div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/celebrating-one-year-of-the-justice-k-s-puttaswamy-v-union-of-india-judgment"&gt;Celebrating One Year of the Justice K.S. Puttaswamy v. Union of India Judgment&lt;/a&gt; (Organized by Indian Council for Research on International Economic Relations and Centre for Communication Governance at National Law University - Delhi; India International Centre; New Delhi; August 24, 2018). Shweta Mohandas was a panelist at the event.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;/ul&gt;
&lt;h3&gt;Free Speech &amp;amp; Expression&lt;/h3&gt;
&lt;p&gt;&lt;b&gt;Blog Entry&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/icann-response-to-didp-31-on-diversity"&gt;ICANN response to DIDP #31 on diversity&lt;/a&gt; (Akriti Bopanna and Akash Sriram; August 21, 2018).&lt;/li&gt;
&lt;/ul&gt;
&lt;div&gt;&lt;b&gt;Event Participation&lt;/b&gt;&lt;/div&gt;
&lt;div&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/feminist-information-infrastructure-workshop-with-blank-noise-and-sangama"&gt;Feminist Information Infrastructure Workshop with Blank Noise and Sangama&lt;/a&gt; (Organized by Sangama and Blank Noise; CIS, Bangalore; August 8, 2018). Akriti Bopanna, Swaraj Paul Barooah and Ambika Tandon conducted the workshop.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/summer-school-on-disinformation"&gt;Summer School on Disinformation&lt;/a&gt; (Organized by Digital Asia Hub, Hans-Bredow-Institut, University of Hamburg, Institute for Technology &amp;amp; Society of Rio de Janeiro - ITS Rio and Berkman Klein Center for Internet and Society at Harvard University; Azure Room, Pullman, Jakarta; August 22 - 24, 2018). Sunil Abraham made a presentation on Disinformation and Online Recruitment.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/world-library-and-information-congress-2018"&gt;World Library and Information Congress 2018&lt;/a&gt; (Organized by International Federation of Library Associations and Institutions; Kuala Lumpur; August 26 - 27, 2018). Swaraj Paul Barooah was a speaker at two panels. Swaraj's first panel, titled "Intellectual Freedom in a Polarised World", was selected as one of 9 sessions, out of 249 in total, to be live-streamed and recorded. The recording can be accessed on &lt;a class="external-link" href="https://www.youtube.com/watch?v=0HujFHQn1zY"&gt;YouTube&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;/ul&gt;
&lt;h3&gt;Information Technology&lt;/h3&gt;
&lt;p&gt;&lt;b&gt;Blog Entry&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/future-of-work-report-of-the-workshop-on-the-it-it-es-sector-and-the-future-of-work-in-india"&gt;Future of Work: Report of the ‘Workshop on the IT/IT-eS Sector and the Future of Work in India’&lt;/a&gt; (Torsha Sarkar, Ambika Tandon and Aayush Rath; edited by Elonnai Hickok, Akash Sriram and Divya Kushwaha; August 16, 2018).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;span style="text-align: justify; "&gt;&lt;span style="text-align: justify; "&gt;&lt;a href="http://cis-india.org/raw"&gt;Researchers at Work&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span style="text-align: justify; "&gt;&lt;span style="text-align: justify; "&gt;The Researchers at Work (RAW) programme is an interdisciplinary research initiative driven by an emerging need to understand the reconfigurations of social practices and structures through the Internet and digital media technologies, and vice versa. It aims to produce local and contextual accounts of interactions, negotiations, and resolutions between the Internet, and socio-material and geo-political processes:&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span style="text-align: justify; "&gt;&lt;span style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/raw/call-for-essays-offline"&gt;Call for Essays: Offline&lt;/a&gt; (P.P. Sneha; August 6, 2018). Selected authors are expected to submit the first draft of the essay (2000-4000 words) by Friday, October 5, 2018.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;&lt;span style="text-align: justify; "&gt;&lt;span style="text-align: justify; "&gt;&lt;span style="text-align: justify; "&gt;&lt;a href="http://cis-india.org/"&gt;About CIS&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;The Centre for Internet and  Society (CIS) is a non-profit organisation that undertakes  interdisciplinary research on internet and digital technologies from  policy and academic perspectives. The areas of focus include digital  accessibility for persons with disabilities, access to knowledge,  intellectual property rights, openness (including open data, free and  open source software, open standards, open access, open educational  resources, and open video), internet governance, telecommunication  reform, digital privacy, and cyber-security. The academic research at  CIS seeks to understand the reconfigurations of social and cultural  processes and structures as mediated through the internet and digital  media technologies.&lt;/p&gt;
&lt;p&gt;► Follow us elsewhere&lt;/p&gt;
&lt;div&gt;
&lt;ul&gt;
&lt;li&gt;Twitter:&lt;a href="http://twitter.com/cis_india"&gt; http://twitter.com/cis_india&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Twitter - Access to Knowledge: &lt;a href="https://twitter.com/CISA2K"&gt;https://twitter.com/CISA2K&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Twitter - Information Policy: &lt;a href="https://twitter.com/CIS_InfoPolicy"&gt;https://twitter.com/CIS_InfoPolicy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Facebook - Access to Knowledge:&lt;a href="https://www.facebook.com/cisa2k"&gt; https://www.facebook.com/cisa2k&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;E-Mail - Access to Knowledge: &lt;a&gt;a2k@cis-india.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;E-Mail - Researchers at Work: &lt;a&gt;raw@cis-india.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;List - Researchers at Work: &lt;a href="https://lists.ghserv.net/mailman/listinfo/researchers"&gt;https://lists.ghserv.net/mailman/listinfo/researchers&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;p&gt;► Support Us&lt;/p&gt;
&lt;div&gt;Please help us defend consumer and citizen rights on the Internet! Write a cheque in favour of 'The Centre for Internet and Society' and mail it to us at No. 194, 2nd 'C' Cross, Domlur, 2nd Stage, Bengaluru - 560071.&lt;/div&gt;
&lt;p&gt;► Request for Collaboration&lt;/p&gt;
&lt;div&gt;
&lt;p style="text-align: justify; "&gt;We invite researchers, practitioners, artists, and theoreticians, both organisationally and as individuals, to engage with us on topics related to the internet and society, and improve our collective understanding of this field. To discuss such possibilities, please write to Sunil Abraham, Executive Director, at sunil@cis-india.org (for policy research), or Sumandro Chattapadhyay, Research Director, at sumandro@cis-india.org (for academic research), with an indication of the form and the content of the collaboration you might be interested in. To discuss collaborations on Indic language Wikipedia projects, write to Tanveer Hasan, Programme Officer, at &lt;a&gt;tanveer@cis-india.org&lt;/a&gt;.&lt;/p&gt;
&lt;div style="text-align: justify; "&gt;&lt;i&gt;CIS is grateful to its primary donor, the Kusuma Trust, founded by Anurag Dikshit and Soma Pujari, philanthropists of Indian origin, for its core funding and support for most of its projects. CIS is also grateful to its other donors, the Wikimedia Foundation, Ford Foundation, Privacy International, UK, Hans Foundation, MacArthur Foundation, and IDRC, for funding its various projects&lt;/i&gt;.&lt;/div&gt;
&lt;/div&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/about/newsletters/august-2018-newsletter'&gt;https://cis-india.org/about/newsletters/august-2018-newsletter&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-09-16T05:08:39Z</dc:date>
   <dc:type>Page</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/medianama-rana-september-9-2018-namaprivacy-data-protection-authoritys-regulatory-and-enforcement-challenges">
    <title>#NAMAprivacy: Data Protection Authority's regulatory and enforcement challenges</title>
    <link>https://cis-india.org/internet-governance/news/medianama-rana-september-9-2018-namaprivacy-data-protection-authoritys-regulatory-and-enforcement-challenges</link>
    <description>
        &lt;b&gt;This is the second post in our series covering our events in Delhi and Bangalore on India’s Data Protection Law.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The blog post by Rana was published in &lt;a class="external-link" href="https://www.medianama.com/2018/09/223-namaprivacy-data-protection-authoritys-regulatory-and-enforcement-challenges/"&gt;Medianama&lt;/a&gt; on September 9, 2018. Amber Sinha was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;“The Data Protection Authority of India, as it stands, performs legislative, executive and judicial functions. It’s not a bad thing,” said Alok Prasanna Kumar, Senior Resident Fellow at the Vidhi Centre for Legal Policy at the #NAMAprivacy discussion on the data protection bill in Bangalore last week. “But unlike other regulators, the DPA’s ambit is vast. It could potentially deal with every kind of company. So, there’s no way one entity could do this in regards to efficacy and no single entity should do it either.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;That was one of the many challenges that our panelists have suggested  that the regulator may face when it is established. Panelists, however,  were largely unsure on how the proposed regulator will impact consumers  or businesses, given that the most regulations are yet to be defined in  the Personal Data Protection Bill, 2018. To this extent, Renuka Sane,  Associate Professor at the National Institute of Public Finance and  Policy (NIPFP) said, “On most questions about this law, I would have one  answer, that it is too early to say anything. We will have to wait and  see how it will evolve.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;To reiterate, the draft bill, 2018 proposed establishing a regulatory  body that will implement and oversee the data protection law in the  country; the Data Protection Authority of India (DPA). The regulatory  body will be empowered to impose penalties on data fiduciaries, accept  complaints from data principals, prevent misuse of personal data,  determine if the data protection law has been violated, and promote  awareness of data protection. The authority will consist of six  whole-time members and a chairperson, to be appointed by the central  government, based on the recommendations of a selection committee that  includes the Chief Justice of India (CJI), the Cabinet secretary and one  CJI nominated expert.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The following are some of the key points made in both, Delhi and  Bengaluru. Please note that these points are not necessarily listed in  the order they were made and are not verbatim excerpts of the speakers’  remarks. We’ve edited them for brevity.&lt;/p&gt;
&lt;h3&gt;Regulation and Enforcement by the DPA&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;b&gt;Tasks to be undertaken&lt;/b&gt;: There are four main functions that the DPA has to undertake at some point in time: &lt;ol&gt;
&lt;li&gt;The DPA will have to issue licenses to some players,&lt;/li&gt;
&lt;li&gt;It will have to come up with regulations, as there are several places in the Act (Bill) that will be determined by regulations,&lt;/li&gt;
&lt;li&gt;It will have to come up with some sort of monitoring mechanism to gauge whether you are abiding by the regulations or not, and&lt;/li&gt;
&lt;li&gt;It will have to determine violations and undertake enforcement actions. (Renuka Sane)&lt;/li&gt;
&lt;/ol&gt; &lt;/li&gt;
&lt;/ul&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;&lt;b&gt;To increase transparency and credibility&lt;/b&gt;:  Regulators have to demonstrate what is the problem that they are trying  to solve before passing a regulation. Is the solution they are opting  for, the most appropriate way of solving the problem? Have they  considered all the available alternative solutions? They need to hold  public consultations on all these issues in a transparent manner. Unless  all these things are embedded in the law, we are not going to make much  progress on the DPA. (Renuka Sane)&lt;/li&gt;
&lt;/ul&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;&lt;b&gt;Regulatory balance&lt;/b&gt;: The regulators in India need to  merge the two sides of responsive theory – compliance theory, where we  put a lot of faith in businesses to self-regulate and comply with  processes, with dissonance theory, where we have punishments, fines and  criminal enforcement for noncompliance. (Amber Sinha, Senior Programme  Manager at Centre for Internet and Society (CIS))If a DPA were to come  in today and regulate everybody who is dealing with personal data at a  significant level, there are more than 600 million entities that they  have to regulate. (Beni Chugh, Dvara)&lt;/li&gt;
&lt;/ul&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;&lt;b&gt;Accountability:&lt;/b&gt; When you create an extremely  powerful agency like the DPA, you will have to put in place a system of  regulatory governance, where the DPA is held accountable for its actions  or else you will exhaustipate the asymmetry of power between the  regulator and the regulated. (Renuka Sane)&lt;b&gt;&lt;br /&gt; &lt;/b&gt;&lt;br /&gt; One big feature, which has become a standard practice across regulators,  that is missing in the DRA is a reporting board structure, where you  are internally accountable to the management board and externally, you  are accountable through self-reporting mechanisms. The functioning of  the Chairperson is not defined well enough for us to see if there is  enough internal accountability at the organisation. The internal  governance of the regulatory body is what can improve the outcomes of  the regulations. (Beni Chugh).&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Penalties for violation of privacy laws&lt;/h3&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;&lt;b&gt;Criminal penalties: &lt;/b&gt;According to me, the threshold  for a criminal offence is low in this bill. If the law were to be  implemented today, a vast majority of the businesses would be criminally  charged. There are three provision in the bill that deal with criminal  penalties, they essentially deal with data processors breaching  individual rights in a reckless or in a grossly negligent fashion. There  are legal standards on how to construed ‘reckless’ behavior,  particularly from the domain of tort law. However, what will trigger an  enforcement action is still kind of open to speculation because the  language of the bill open to interpretation.  (Amber Sinha, CIS)&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;b&gt;The bill enables the Data Protection Authorities  to impose penalties of up to Rs 15 crores or 4% of the annual global  turnover, whichever is higher, for violating privacy laws.&lt;/b&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;&lt;b&gt;Penalties for govt authorities: &lt;/b&gt;Even if you levy a  heavy fine on a government authority for breaching any laws, it’s you  and I who will be paying for their fault, because its ultimately going  from the Budget. I think that’s where the criminal offense part of it  becomes important. You can hold people personally liable. (Beni Chugh)An  individual liability on a government official or secretary may be the  way to go and I find that the bill has that provision In (Bill) 96 (3).  (a member of the audience)I think that there are several exceptions  given to the state and perhaps that will make it more difficult to  define whether there has been a violation by the state. (Renuka Sane)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Impact on consumers and businesses&lt;/h3&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;&lt;b&gt;Onerous task for consumers&lt;/b&gt;: The problem with the  bill is that it assumes a lot of active understanding of the law. For a  consumer to file a grievance, she has to say that there was a violation  and it is likely to (or) has caused her harm. But, since harm is not  well defined, how are you going to file a grievance? (Beni Chugh)&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Uncertainty over regulations&lt;/b&gt;: I’m uncertain about  the impact the bill would have on businesses because many of the  obligations that one needs to abide by, are not well defined. (Beni  Chugh)&lt;/li&gt;
&lt;li&gt;Bill will become less ambiguous once the DRA creates regulations. (Renuka Sane)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Other notes&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;One thing that the bill does fairly well is defining the obligations of a data processor. (Amber Sinha)&lt;/li&gt;
&lt;li&gt;There are certain discrepancies which exist between the approach  that the report seems to espouse and what is actually reflected in the  Bill. (Amber Sinha)&lt;/li&gt;
&lt;li&gt;On most questions about this law, I would have one answer that it is  too early to say and we will see how it will evolve. (Renuka Sane)&lt;/li&gt;
&lt;li&gt;There are various metrics based on which you can define whether the DPA is an independent organisation. Based on a few of them, it could be independent, but based on others, it could not. (Renuka Sane)&lt;/li&gt;
&lt;li&gt;I think that there are several exceptions given to the state and  perhaps that will make it more difficult to define whether there has  been a violation by the state. (Renuka Sane)&lt;/li&gt;
&lt;li&gt;I predict that the DPA will treat NPCI as any other fiduciary, even  if the data it processes will be marked as critical. (Manasa  Venkataraman, Associate Fellow, The Takshashila Institution)&lt;/li&gt;
&lt;li&gt;In the EU, they have had the luxury of spending 10 years (on the GDPR) because they already had a data protection law. But we never had one; this is our first. So, in that sense, it is definitely much more urgent for us. We have to get it right and we can’t rush it, but there is much greater urgency in our jurisdiction. (Amber Sinha)&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/medianama-rana-september-9-2018-namaprivacy-data-protection-authoritys-regulatory-and-enforcement-challenges'&gt;https://cis-india.org/internet-governance/news/medianama-rana-september-9-2018-namaprivacy-data-protection-authoritys-regulatory-and-enforcement-challenges&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-09-14T12:26:16Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-governance-sector-in-india">
    <title>Artificial Intelligence in the Governance Sector in India</title>
    <link>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-governance-sector-in-india</link>
    <description>
&lt;b&gt;The use of Artificial Intelligence has the potential to ameliorate several existing structural inefficiencies in the discharge of governmental functions. Our research indicates that the deployment of this technology across sub-sectors is still on the horizon.&lt;/b&gt;
&lt;p&gt;Ecosystem Mapping: Shweta Mohandas and Anamika Kundu&lt;br /&gt;Edited by: Amber Sinha, Pranav MB and Vishnu Ramachandran&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Much of the technological capacity and funding for AI in governance in India is coming from the private sector - a trend we expect will continue as the government engages in an increasing number of partnerships with both start-ups and large corporations alike. While there is considerable enthusiasm and desire by the government to develop AI-driven solutions in governance, including the release of two reports identifying the broad contours of India’s AI strategy, this enthusiasm is yet to be underscored by adequate financial, infrastructural, and technological capacity. This gap provides India with a unique opportunity to understand some the of the ethical, legal and technological hurdles faced by the West both during and after the implementation of similar technology and avoid these challenges when devising its own AI strategy and regulatory policy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The case study identified five sub-sectors including law enforcement, education, defense, discharge of governmental functions and also considered the implications of AI in judicial decision-making processes that have been used in the United States. After mapping the uses of AI in various sub-sectors, this report identifies several challenges to the deployment of this technology. This includes factors such as infrastructural and technological capacity, particularly among key actors at the grassroots level, lack of trust in AI driven solutions and adequate funding. We also identified several ethical and legal concerns that policy-makers must grapple with. These include over-dependence on AI systems, privacy and security, assignment of liability, bias and discrimination both in process and outcome, transparency and due process. Subsequently, this report can be considered as a roadmap for the future of AI in India by tracking corresponding and emerging developments in other parts of the world. In the final section of the report, we propose several recommendations for policy-makers and developers that might address some of the challenges and ethical concerns identified. Some of these include benchmarks for the use of AI in the public sector, development of standards of explanation, a standard framework for engagement with the private sector, leveraging AI as a field to further India’s international strategy, developing adequate standards of data curation, ensuring that the benefits of the technology reaches the lowest common denominator, adopting interdisciplinary approaches to the study of Artificial Intelligence and    developing fairness,transparency and due process through the contextual application of a rules-based system.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It is crucial that policy-makers do not adopt a ‘one-size-fits-all’ approach to AI regulation but consider all options within a regulatory spectrum that considers the specific impacts of the deployment of this technology for each sub-sector within governance - with the distinction of public sector use. Given that the governance sector has potential implications for the fundamental rights of all citizens, it is also imperative that the government does not shy away from its obligation to ensure the fair and ethical deployment of this technology while also ensuring the existence of robust redress mechanisms. To do so, it must chart out a standard rules-based system that creates guidelines and standards for private sector development of AI solutions for the public sector. As with other emerging technology, the success of Artificial intelligence depends on whether it is deployed with the intention of placing greater regulatory scrutiny on the daily lives of individuals or for harnessing individual potential that augment rather than counter the core tenets of constitutionalism and human dignity.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Read the full report &lt;a href="https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf"&gt;here&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-governance-sector-in-india'&gt;https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-governance-sector-in-india&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Arindrajit Basu and Elonnai Hickok</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-09-14T11:37:58Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/hindu-businessline-swaraj-paul-barooah-september-7-2018-indias-post-truth-society">
    <title>India’s post-truth society</title>
    <link>https://cis-india.org/internet-governance/blog/hindu-businessline-swaraj-paul-barooah-september-7-2018-indias-post-truth-society</link>
    <description>
        &lt;b&gt;The proliferation of lies and manipulative content supplies an ever-willing state a pretext to step up surveillance.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The op-ed was published in &lt;a class="external-link" href="https://www.thehindubusinessline.com/opinion/deconstructing-the-20-society/article24895705.ece"&gt;Hindu Businessline&lt;/a&gt; on September 7, 2018.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;After a set of rumours spread over WhatsApp triggered a series of  lynchings across the country, the government recently took the  interesting step of placing the responsibility for this violence on  WhatsApp. This is especially noteworthy because the party in power, as  well as many other political parties, have taken to campaigning over  social media, including using WhatsApp groups in a major way to spread  their agenda and propaganda.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;After all, a simple tweet or message  could be shared thousands of times and make its way across the country  several times, before the next day’s newspaper is out. Nonetheless,  while the use of social media has led to a lot of misinformation and  deliberately polarising ‘news’, it has also helped contribute to  remarkable acts of altruism and community, as seen during the recent  Kerala floods.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While the government has taken a seemingly  techno-determinist view by placing responsibility on WhatsApp, the  duality of very visible uses of social media has led to others viewing  WhatsApp and other internet platforms more as a tool, at the mercy of  the user. However, as historian Melvin Kranzberg noted, “technology is  neither good nor bad; nor is it neutral”. And while the role of  political and private parties in spreading polarising views should be  rigorously investigated, it is also true that these internet platforms  are creating new and sometimes damaging structural changes to how our  society functions. A few prominent issues are listed below:&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Fragmentation of public sphere&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Jurgen  Habermas, noted sociologist, conceptualised the Public Sphere as being  “a network for communicating information and points of view, where the  streams of communication are, in the process, filtered and synthesised  in such a way that they coalesce into bundles of topically specified  public opinions”.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;To a large extent, the traditional gatekeepers  of information flow, such as radio, TV and mainstream newspapers,  performed functions enabling a public sphere. For example, if a  truth-claim about an issue of national relevance was to be made, it  would need to get an editor’s approval.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In case there was a  counter claim, that too would have to pass an editorial check. Today  however, nearly anybody can become a publisher of information online,  and if it catches the right ‘influencer’s attention, it could spread far  wider and far quicker than it would’ve in traditional media. While this  does have the huge positive of giving space to more diverse viewpoints,  it also comes with two significant downsides.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;First, that it  gives a sense of ‘personal space’ to public speech. An ordinary person  would think a few times, do some research, and perhaps practice a speech  before giving it before 10,000 people. An ordinary person would also  think for perhaps five seconds before putting out a tweet on the very  same topic, despite now having a potentially global audience.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Second,  by having messages sent directly to your hand-held device, rather than  open for anyone to fact-check and counter, there is less transparency  and accountability for those who send polarising material and  misinformation. How can a mistaken and polarising view be countered, if  one doesn’t even know it is being made? And if it can’t be countered,  how can its spread by contained?&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;The attention market&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Not  only is that earlier conception of public sphere being fragmented, these  new networked public spheres are also owned by giant corporations. This  means that these public spheres where critical discourse is being  shaped and spread, are actually governed by advertisement-financed  global conglomerates. In a world of information overflow, and privately  owned, ad-financed public spheres, the new unit of currency is  attention.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It is in the direct interest of the Facebooks and  Googles of the world, to capture user attention as long as possible,  regardless of what type of activity that encourages. It goes without  saying that neither the ‘mundane and ordinary’, nor the ‘nuanced and  detailed’ capture people’s attention nearly as well as the sensational  and exciting.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Nearly as addicting, studies show, are the  headlines and viewpoints which confirm people’s biases. Fed by  algorithms that understand the human desire to ‘fit in’, people are  lowered into echo chambers where like-minded people find each other and  continually validate each other. When people with extremist views are  guided to each other by these algorithms, they not only gather  validation, but also now use these platforms to confidently air their  views — thus normalising what was earlier considered extreme. Needless  to say, internet platforms are becoming richer in the process.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Censorship by obfuscation&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Censorship  in the attention economy, no longer requires blocking of views or  interrupting the transmission of information. Rather, it is sufficient  to drown out relevant information in an ocean of other information. Fact  checking news sites face this problem. Regardless of how often they  fact-check speeches by politicians, only a minuscule percentage of the  original audience comes to know about, much less care about the  corrections.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Additionally, repeated attacks (when baseless) on  credibility of news sources causes confusion about which sources are  trustworthy. In her extremely insightful book “Twitter and Tear Gas”,  Prof Zeynep Tufekci rightly points out that rather than traditional  censorship, powerful entities today, (often States) focus on  overwhelming people with information, producing distractions, and  deliberately causing confusion, fear and doubt. Facts, often don’t  matter since the goal is not to be right, but to cause enough confusion  and doubt to displace narratives that are problematic to these powers.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Viewpoints  from members of groups that have been historically oppressed, are  especially harangued. And those who are oppressed tend to have less  time, energy and emotional resources to continuously deal with online  harassment, especially when their identities are known and this  harassment can very easily spill over to the physical world.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Conclusion&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Habermas  saw the ideal public sphere as one that is free of lies, distortions,  manipulations and misinformation. Needless to say, this is a far cry  from our reality today, with all of the above available in unhealthy  doses. It will take tremendous effort to fix these issues, and it is  certainly no longer sufficient for internet platforms to claim they are  neutral messengers. Further, whether the systemic changes are understood  or not, if they are not addressed, they will continue to create and  expand fissures in society, giving the state valid cause for intervening  through backdoors, surveillance, and censorship, all actions that  states have historically been happy to do!&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/hindu-businessline-swaraj-paul-barooah-september-7-2018-indias-post-truth-society'&gt;https://cis-india.org/internet-governance/blog/hindu-businessline-swaraj-paul-barooah-september-7-2018-indias-post-truth-society&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>swaraj</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Censorship</dc:subject>
    

   <dc:date>2018-09-12T12:16:31Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/deccan-herald-september-5-2018-surupasree-sarmmah-can-this-curb-your-addiction">
    <title>Can this curb your addiction?</title>
    <link>https://cis-india.org/internet-governance/news/deccan-herald-september-5-2018-surupasree-sarmmah-can-this-curb-your-addiction</link>
    <description>
&lt;b&gt;Facebook and Instagram are also set to roll out tools to tell you how hooked you are on social media browsing.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Surupasree Sarmmah was published in &lt;a class="external-link" href="https://www.deccanherald.com/metrolife/can-curb-your-addiction-691237.html"&gt;Deccan Herald&lt;/a&gt; on September 5, 2018. Swaraj Paul Barooah was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;YouTube has rolled out a new feature that helps you manage the time  you spend on it. The feature, it says, is an attempt to allow users to  take charge of their digital life.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;You can now go to your YouTube  profile and get details of time spent on the app today, yesterday and  the past week. You also get a daily average under the tab ‘Time  Watched’.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;YouTube now offers tools that remind you to ‘take a break’ from notifications. You can also disable sound for a specific period.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Kala  Balasubramanian, counselling psychologist and psychotherapist at Inner  Dawn Counselling and Training services LLP, categorises social media  users into three types: people who use it extensively with awareness,  people who use it extensively without awareness, and people addicted to  it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“If you fall into the second category, this feature will make  no or very less impact on your social media usage. However, if you know  you are spending too much time and want to get out of it, keeping a tab  on the usage can be beneficial,” says Kala.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The addicts need help,  she advises. Such well-being tools being introduced by social media  giants like Google can be seen as a recognition of the extent of damage  excessive social media usage can cause, she says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Social media  should not be used to the point of damage to ourselves and our  relationships, but users must be ready to help themselves, she urges.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“This  message necessarily needs to be learned at a social level in our  families, schools and colleges and workplaces too. The tool can only  attempt to help us, we need to help ourselves,” she says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Not just  YouTube, but Facebook and Instagram have also announced they would soon  add controls to help people measure how much time they are spending on  these sites.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Swaraj Barooah, policy director, Centre for Internet and Society, Bengaluru, says the feature is a “weak step.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“I  have tried out the feature and it is more like the snooze button of an  alarm: one can dismiss it immediately or change the settings. Even to  get to the settings takes an active effort. People by default will go  with default settings,” he says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He sees the feature as an  ‘eyewash’: social media giants can now claim they have done something to  curb addiction without actually doing anything effective.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“The algorithm feeds on people’s vulnerability. It would be better to see these platforms offering more transparency in how  algorithms are viewing people so that people can choose what they want  to see and not what the algorithms are determining for them,” he says.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/deccan-herald-september-5-2018-surupasree-sarmmah-can-this-curb-your-addiction'&gt;https://cis-india.org/internet-governance/news/deccan-herald-september-5-2018-surupasree-sarmmah-can-this-curb-your-addiction&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-10-03T14:15:33Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/ai-in-india-a-policy-agenda">
    <title>AI in India: A Policy Agenda</title>
    <link>https://cis-india.org/internet-governance/blog/ai-in-india-a-policy-agenda</link>
    <description>
        &lt;p style="text-align: justify; "&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/ai-in-india-a-policy-agenda"&gt;Click&lt;/a&gt; to download the file&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;h1 style="text-align: justify; "&gt;Background&lt;/h1&gt;
&lt;p style="text-align: justify; "&gt;Over the last few months, the Centre for Internet and Society has been engaged in the mapping of use and impact of artificial intelligence in health, banking, manufacturing, and governance sectors in India through the development of a case study compendium.&lt;a href="#_ftn1" name="_ftnref1"&gt;&lt;sup&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Alongside this research, we are examining the impact of Industry 4.0 on jobs and employment and questions related to the future of work in India. We have also been a part of several global conversations on artificial intelligence and autonomous systems. The Centre for Internet and Society is part of the Partnership on Artificial Intelligence, a consortium which has representation from some of most important companies and civil society organisations involved in developments and research on artificial intelligence. We have contributed to the The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and are also a part of a Big Data for Development Global Network, where we are undertaking research towards evolving ethical principles for use of computational techniques. The following are a set of recommendations we have arrived out of our research into artificial intelligence, particularly the sectoral case studies focussed on the development and use of artificial intelligence in India.&lt;/p&gt;
&lt;h1 style="text-align: justify; "&gt;National AI Strategies: A Brief Global Overview&lt;/h1&gt;
&lt;p style="text-align: justify; "&gt;Artificial Intelligence is emerging as  a central policy issue  in several countries. In October 2016, the Obama White House released a report titled, “Preparing for the Future of Artificial Intelligence”&lt;a href="#_ftn2" name="_ftnref2"&gt;&lt;sup&gt;&lt;sup&gt;[2]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; delving into a range of issues including application for public goods, regulation, economic impact, global security and fairness issues. The White House also released a companion document called the “National Artificial Intelligence Research and Development Strategic Plan”&lt;a href="#_ftn3" name="_ftnref3"&gt;&lt;sup&gt;&lt;sup&gt;[3]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; which laid out a strategic plan for Federally-funded research and development in AI. These were the first of a series of policy documents released by the US towards the role of AI. The United Kingdom announced its 2020 national development strategy and issued a government report to accelerate the application of AI by government agencies while in 2018 the Department for Business, Energy, and Industrial Strategy released the Policy Paper - AI Sector Deal.&lt;a href="#_ftn4" name="_ftnref4"&gt;&lt;sup&gt;&lt;sup&gt;[4]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; The Japanese government released it paper on Artificial Intelligence Technology Strategy in 2017.&lt;a href="#_ftn5" name="_ftnref5"&gt;&lt;sup&gt;&lt;sup&gt;[5]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; The European Union launched "SPARC," the world’s largest civilian robotics R&amp;amp;D program, back in 2014.&lt;a href="#_ftn6" name="_ftnref6"&gt;&lt;sup&gt;&lt;sup&gt;[6]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Over the last year and a half, Canada,&lt;a href="#_ftn7" name="_ftnref7"&gt;&lt;sup&gt;&lt;sup&gt;[7]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; China,&lt;a href="#_ftn8" name="_ftnref8"&gt;&lt;sup&gt;&lt;sup&gt;[8]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; the UAE,&lt;a href="#_ftn9" name="_ftnref9"&gt;&lt;sup&gt;&lt;sup&gt;[9]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Singapore,&lt;a href="#_ftn10" name="_ftnref10"&gt;&lt;sup&gt;&lt;sup&gt;[10]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; South Korea&lt;a href="#_ftn11" name="_ftnref11"&gt;&lt;sup&gt;&lt;sup&gt;[11]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;, and France&lt;a href="#_ftn12" name="_ftnref12"&gt;&lt;sup&gt;&lt;sup&gt;[12]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; have announced national AI strategy documents while 24 member States in the EU have committed to develop national AI policies that reflect a “European” approach to AI &lt;a href="#_ftn13" name="_ftnref13"&gt;&lt;sup&gt;&lt;sup&gt;[13]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;. Other countries such as Mexico and Malaysia are in the process of evolving their national AI strategies. What this suggests is that AI is quickly emerging as central to national plans around the development of science and technology as well as economic and national security and development. There is also a focus on investments enabling AI innovation in critical national domains as a means of addressing key challenges facing nations. 
India has followed this trend and in 2018 the government published two AI roadmaps - the Report of Task Force on Artificial Intelligence by the AI Task Force constituted by the Ministry of Commerce and Industry&lt;a href="#_ftn14" name="_ftnref14"&gt;&lt;sup&gt;&lt;sup&gt;[14]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and the National Strategy for Artificial Intelligence by Niti Aayog.&lt;a href="#_ftn15" name="_ftnref15"&gt;&lt;sup&gt;&lt;sup&gt;[15]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Some of the key themes running across the National AI strategies globally are spelt out below.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Economic Impact of AI&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;A common thread that runs across the different national approaches to AI is the belief in the significant economic impact of AI, that it will likely increase productivity and create wealth. The British government estimated that AI could add $814 billion to the UK economy by 2035. The UAE report states that by 2031, AI will help boost the country’s GDP by 35 per cent, reduce government costs by 50 per cent. Similarly, China estimates that the core AI market will be worth 150 billion RMB ($25bn) by 2020, 400 billion RMB ($65bn) and one trillion RMB ($160bn) by 2030. The impact of adoption of AI and automation of labour and employment is also a key theme touched upon across the strategies. For instance, the White House Report of October 2016 states the US workforce is unprepared – and that a serious education programme, through online courses and in-house schemes, will be required.&lt;a href="#_ftn16" name="_ftnref16"&gt;&lt;sup&gt;&lt;sup&gt;[16]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;State Funding&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Another key trend exhibited in all national strategies towards AI has been a commitment by the respective governments towards supporting research and development in AI. The French government has stated that it intends to invest €1.5 billion ($1.85 billion) in AI research in the period through to 2022. The British government’s recommendations, in late 2017, were followed swiftly by a promise in the autumn budget of new funds, including at least £75 million for AI. Similarly, the the Canadian government put together a $125-million ‘pan-Canadian AI strategy’ last year.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;AI for Public Good&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;The use of AI for Public Good is a significant focus of most AI policies. The biggest justification for AI innovation as a legitimate objective of public policy is its promised impact towards improvement of  people’s lives by helping to solve some of the world’s greatest challenges and inefficiencies, and emerge as a transformative technology, much like mobile computing. These public good uses of AI are emerging across sectors such as transportation, migration, law enforcement and justice system, education, and agriculture..&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;National Institutions leading AI research&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Another important trend which was  key to the implementation of national AI strategies is the creation or development of well-funded centres of excellence which would serve as drivers of research and development and leverage synergies with the private sector. The French Institute for Research in Computer Science and Automation (INRIA) plans to create a national AI research program with five industrial partners. In UK, The Alan Turing Institute is likely to emerge as the national institute for data science, and an AI Council would be set up to manage inter-sector initiatives and training. In Canada, Canadian Institute for Advanced Research (CIFAR) has been tasked with implementing their AI strategy. Countries like Japan has a less centralised structure with the creation of strategic council for AI technology’ to promote research and development in the field, and manage a number of key academic institutions, including NEDO and its national ICT (NICT) and science and tech (JST) agencies. These institutions are key to successful implementation of national agendas and policies around AI.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;AI, Ethics and Regulation&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Across the AI strategies — ethical dimensions and regulation of AI were highlighted as concerns that needed to be addressed. Algorithmic transparency and explainability, clarity on liability, accountability and oversight, bias and discrimination, and privacy are ethical  and regulatory questions that have been raised. Employment and the future of work is another area of focus that has been identified by countries.  For example, the US 2016 Report reflected on if existing regulation is adequate to address risk or if adaption is needed by examining the use of AI in automated vehicles. In the policy paper - AI Sector Deal - the UK proposes four grand challenges: AI and Data Economy, Future Mobility, Clean Growth, and Ageing Society. The Pan Canadian Artificial Intelligence Strategy focuses on developing global thought leadership on the economic, ethical, policy, and legal implications of advances in artificial intelligence.&lt;a href="#_ftn17" name="_ftnref17"&gt;&lt;sup&gt;&lt;sup&gt;[17]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The above are important factors and trends to take into account and to different extents have been reflected in the two national roadmaps for AI. Without adequate institutional planning, there is a risk of national strategies being too monolithic in nature.  Without sufficient supporting mechanisms in the form of national institutions which would drive the AI research and innovation, capacity building and re-skilling of workforce to adapt to changing technological trends, building regulatory capacity to address new and emerging issues which may disrupt traditional forms of regulation and finally, creation of an environment of monetary support both from the public and private sector it becomes difficult to implement a national strategy and actualize the potentials of AI . As stated above, there is also a need for identification of key national policy problems which can be addressed by the use of AI, and the creation of a framework with institutional actors to articulate the appropriate plan of action to address the problems using AI. There are several ongoing global initiatives which are in the process of trying to articulate key principles for ethical AI. These discussions also feature in some of the national strategy documents.&lt;/p&gt;
&lt;h1 style="text-align: justify; "&gt;Key considerations for AI policymaking in India&lt;/h1&gt;
&lt;p style="text-align: justify; "&gt;As mentioned above, India has published two national AI strategies. We have responded to both of these here&lt;a href="#_ftn18" name="_ftnref18"&gt;&lt;sup&gt;&lt;sup&gt;[18]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and here.&lt;a href="#_ftn19" name="_ftnref19"&gt;&lt;sup&gt;&lt;sup&gt;[19]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Beyond these two roadmaps, this policy brief reflects on a number of factors that need to come together for India to leverage and adopt AI across sectors, communities, and technologies successfully.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Resources, Infrastructure, Markets, and Funding&lt;/h2&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Ensure adequate government funding and investment in R&amp;amp;D&lt;/b&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;As mentioned above, a survey of all major national strategies on AI reveals a significant financial commitment from governments towards research and development surrounding AI. Most strategy documents speak of the need to safeguard national ambitions in the race for AI development. In order to do so it is imperative to have a national strategy for AI research and development, identification of nodal agencies to enable the process, and creation of institutional capacity to carry out cutting edge research.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Most jurisdictions such as Japan, UK and China have discussed collaborations between the industry and government to ensure greater investment into AI research and development. The European Union has spoken using the existing public-private partnerships, particularly in robotics and big data to boost investment by over one and half times.&lt;a href="#_ftn20" name="_ftnref20"&gt;&lt;sup&gt;&lt;sup&gt;[20]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; To some extent, this  step has been initiated by the Niti Aayog strategy paper. The paper lists out enabling factors for the widespread adoption of AI and maps out specific government agencies and ministries that could promote such growth. In February 2018, the Ministry of Electronics and IT also set up four committees to prepare a roadmap for a national AI programme. The four committees are presently studying AI in context of citizen centric services; data platforms; skilling, reskilling and R&amp;amp;D; and legal, regulatory and cybersecurity perspectives.&lt;a href="#_ftn21" name="_ftnref21"&gt;&lt;sup&gt;&lt;sup&gt;[21]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Democratize AI technologies and data&lt;/b&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Clean, accurate, and appropriately curated data is essential for training algorithms. Importantly, large quantities of data alone does not translate into better results. Accuracy and curation of data should be prerequisites to quantity of data. Frameworks to generate and access larger quantity of data should not hinge on models of centralized data stores. The government and the private sector are generally gatekeepers to vast amounts of data and technologies. Ryan Calo has called this an issue of data parity,&lt;a href="#_ftn22" name="_ftnref22"&gt;&lt;sup&gt;&lt;sup&gt;[22]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; where only a few well established leaders in the field have the ability to acquire data and build datasets. Gaining access to data comes with its own questions of ownership, privacy, security, accuracy, and completeness. There are a number of different approaches and techniques that can be adopted to enable access to data.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Open Government Data &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Robust open data sets is one way in which access can be enabled. Open data is particularly important for small start-ups as they build prototypes. Even though India is a data dense country and has in place a National Data and Accessibility Policy India does not yet have robust and comprehensive open data sets across sectors and fields.  Our research found that this is standing as an obstacle to innovation in the Indian context as startups often turn to open datasets in the US and Europe for developing prototypes. Yet, this is problematic because the demography represented in the data set is significantly different resulting in the development of solutions that are trained to a specific demographic, and thus need to be re-trained on Indian data. Although AI is technology agnostic, in the cases of different use cases of data analysis, demographically different training data is not ideal. This is particularly true for certain categories such as health, employment, and financial data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The government can play a key role in providing access to datasets that will help the functioning and performance of AI technologies. The Indian government has already made a move towards accessible datasets through the Open Government Data Platform which provides access to a range of data collected by various ministries. Telangana has developed its own Open Data Policy which has stood out for its transparency and the quality of data collected and helps build AI based solutions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In order to encourage and facilitate innovation, the central and state governments need to actively pursue and implement the National Data and Accessibility Policy.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Access to Private Sector Data &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The private sector is the gatekeeper to large amounts of data. There is a need to explore different models of enabling access to private sector data while ensuring and protecting users rights and company IP. This data is often considered as a company asset and not shared with other stakeholders. Yet, this data is essential in enabling innovation in AI.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Amanda Levendowski states that ML practitioners have essentially three options in securing sufficient data— build the databases themselves, buy the data, or use data in the public domain. The first two alternatives are largely available to big firms or institutions. Smaller firms often end resorting to the third option but it carries greater risks of bias.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A solution could be federated access, with companies allowing access to researchers and developers to encrypted data without sharing the actual data.  Another solution that has been proposed is ‘watermarking’ data sets.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Data sandboxes have been promoted as tools for enabling innovation while protecting privacy, security etc. Data sandboxes allow companies access to large anonymized data sets under controlled circumstances. A regulatory sandbox is a controlled environment with relaxed regulations that allow the product to be tested thoroughly before it is launched to the public. By providing certification and safe spaces for testing, the government will encourage innovation in this sphere. This system has already been adopted in Japan where there are AI specific regulatory sandboxes to drive society 5.0.160 data sandboxes are tools that can be considered within specific sectors to enable innovation. A sector wide data sandbox was also contemplated by TRAI.&lt;a href="#_ftn23" name="_ftnref23"&gt;&lt;sup&gt;&lt;sup&gt;[23]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; A sector specific governance structure can establish a system of ethical reviews of underlying data used to feed the AI technology along with data collected in order to ensure that this data is complete, accurate and has integrity. A similar system has been developed by Statistics Norway and the Norwegian Centre for Research Data.&lt;a href="#_ftn24" name="_ftnref24"&gt;&lt;sup&gt;&lt;sup&gt;[24]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;AI Marketplaces&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The National Roadmap for Artificial Intelligence by NITI Aayog proposes the creation of a National AI marketplace that is comprised of a data marketplace, data annotation marketplace, and deployable model marketplace/solutions marketplace.&lt;a href="#_ftn25" name="_ftnref25"&gt;&lt;sup&gt;&lt;sup&gt;[25]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; In particular, it is envisioned that the data marketplace would be based on blockchain technology and have the features of: traceability, access controls, compliance with local and international regulations, and robust price discovery mechanism for data. Other questions that will need to be answered center around pricing and ensuring equal access. It will also be interesting how the government incentivises the provision of data by private sector companies. Most data marketplaces that are emerging are initiated by the private sector.&lt;a href="#_ftn26" name="_ftnref26"&gt;&lt;sup&gt;&lt;sup&gt;[26]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; A government initiated marketplace has the potential to bring parity to some of the questions raised above, but it should be strictly limited to private sector data in order to not replace open government data.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Open Source Technology &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;A number of companies are now offering open source AI technologies. For example, TensorFlow, Keras, Scikit-learn, Microsoft Cognitive Toolkit, Theano, Caffe, Torch, and Accord.NET.&lt;a href="#_ftn27" name="_ftnref27"&gt;&lt;sup&gt;&lt;sup&gt;[27]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; The government should incentivise and promote open source AI technologies towards harnessing and accelerating research in AI.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Re-thinking Intellectual Property Regimes &lt;/b&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Going forward it will be important for the government to develop an intellectual property framework that encourages innovation. AI systems are trained by reading, viewing, and listening to copies of human-created works. These resources such as books, articles, photographs, films, videos, and audio recordings are all key subjects of copyright protection. Copyright law grants exclusive rights to copyright owners, including the right to reproduce their works in copies, and one who violates one of those exclusive rights “is an infringer of copyright.&lt;a href="#_ftn28" name="_ftnref28"&gt;&lt;sup&gt;&lt;sup&gt;[28]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The enterprise of AI is, to this extent, designed to conflict with tenets of copyright law, and after the attempted ‘democratization’ of copyrighted content by the advent of the Internet, AI poses the latest challenge to copyright law. At the centre of this challenge is the fact that it remains an open question whether a copy made to train AI is a “copy” under copyright law, and consequently whether such a copy is an infringement.&lt;a href="#_ftn29" name="_ftnref29"&gt;&lt;sup&gt;&lt;sup&gt;[29]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; The fractured jurisprudence on copyright law is likely to pose interesting legal questions with newer use cases of AI. For instance, Google has developed a technique called federated learning, popularly referred to as on-device ML, in which training data is localised to the originating mobile device rather than copying data to a centralized server.&lt;a href="#_ftn30" name="_ftnref30"&gt;&lt;sup&gt;&lt;sup&gt;[30]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; The key copyright questions here is whether decentralized training data stored in random access memory (RAM) would be considered as “copies”.&lt;a href="#_ftn31" name="_ftnref31"&gt;&lt;sup&gt;&lt;sup&gt;[31]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; There are also suggestions that copies made for the purpose of training of machine learning systems may be so trivial or de minimis that they may not qualify as infringement.&lt;a href="#_ftn32" name="_ftnref32"&gt;&lt;sup&gt;&lt;sup&gt;[32]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; For any industry to flourish, there needs to be legal and regulatory clarity and it is imperative that these copyright questions emerging out of use of AI be addressed soon.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;As noted in our response to the Niti Aayog national AI strategy  “&lt;i&gt;The report also blames the current Indian  Intellectual Property regime for being “unattractive” and averse to incentivising research and adoption of AI. Section 3(k) of Patents Act exempts algorithms from being patented, and the Computer Related Inventions (CRI) Guidelines have faced much controversy over the patentability of mere software without a novel hardware component. The paper provides no concrete answers to the question of whether it should be permissible to patent algorithms, and if yes, to  to what extent. Furthermore, there needs to be a standard either in the CRI Guidelines or the Patent Act, that distinguishes between AI algorithms and non-AI algorithms. Additionally, given that there is no historical precedence on the requirement of patent rights to incentivise creation of AI,  innovative investment protection mechanisms that have lesser negative externalities, such as compensatory liability regimes would be more desirable.  The report further failed to look at the issue holistically and recognize that facilitating rampant patenting can form a barrier to smaller companies from using or developing  AI. This is important to be cognizant of given the central role of startups to the AI ecosystem in India and because it can work against the larger goal of inclusion articulated by the report.”&lt;a href="#_ftn33" name="_ftnref33"&gt;&lt;sup&gt;&lt;b&gt;&lt;sup&gt;[33]&lt;/sup&gt;&lt;/b&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/i&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;National infrastructure to support domestic development &lt;/b&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Building a robust national Artificial Intelligence solution requires establishing adequate indigenous  infrastructural capacity for data storage and processing.  While this should not necessarily extend to mandating data localisation as the draft privacy bill has done, capacity should be developed to store data sets generated by indigenous nodal points.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;AI Data Storage &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Capacity needs to increase as the volume of data that needs to be processed in India increases. This includes ensuring effective storage capacity, IOPS (Input/Output per second) and ability to process massive amounts of data.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;AI Networking Infrastructure&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Organizations will need to upgrade their networks in a bid to upgrade and optimize efficiencies of scale. Scalability must be undertaken on a high priority which will require a high-bandwidth, low latency and creative architecture, which requires appropriate last mile data curation enforcement.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Conceptualization and Implementation&lt;/h2&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Awareness, Education, and Reskilling &lt;/b&gt;&lt;/h3&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Encouraging AI research&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;This can be achieved by collaborations between the government and large companies to promote accessibility and encourage innovation through greater R&amp;amp;D spending. The Government of Karnataka, for instance, is collaborating with NASSCOM to set up a Centre of Excellence for Data Science and Artificial Intelligence (CoE-DS&amp;amp;AI) on a public-private partnership model to “accelerate the ecosystem in Karnataka by providing the impetus for the development of data science and artificial intelligence across the country.” Similar centres could be incubated in hospitals and medical colleges in India.  Principles of public funded research such as FOSS, open standards, and open data should be core to government initiatives to encourage research.  The Niti Aaayog report proposes a two tier integrated approach towards accelerating research, but is currently silent on these principles.&lt;a href="#_ftn34" name="_ftnref34"&gt;&lt;sup&gt;&lt;sup&gt;[34]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Therefore,as suggested by the NITI AAYOG Report, the government needs to set up ‘centres of excellence’. Building upon the stakeholders identified in the NITI AAYOG Report, the centers of excellence should  involve a wide range of experts including lawyers, political philosophers, software developers, sociologists and gender studies from diverse organizations including government, civil society,the private sector and research institutions  to ensure the fair and efficient roll out of the technology.&lt;a href="#_ftn35" name="_ftnref35"&gt;&lt;sup&gt;&lt;sup&gt;[35]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; An example is the Leverhulme Centre for the Future of Intelligence set up by the Leverhulme Foundation at the University of Cambridge&lt;a href="#_ftn36" name="_ftnref36"&gt;&lt;sup&gt;&lt;sup&gt;[36]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and the AI Now Institute at New York University (NYU)&lt;a href="#_ftn37" name="_ftnref37"&gt;&lt;sup&gt;&lt;sup&gt;[37]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; These research centres bring together a wide range of experts from all over the globe.&lt;a href="#_ftn38" name="_ftnref38"&gt;&lt;sup&gt;&lt;sup&gt;[38]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Skill sets to successfully adopt AI&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Educational institutions should provide opportunities for students to skill themselves to adapt to adoption of AI, and also push for academic programmes around AI. It is also important to introduce computing technologies such as AI in medical schools in order to equip doctors to adopt the technical skill sets and ethics required to use integrate AI in their practices. Similarly, IT institutes could include courses on ethics, privacy, accountability etc. to equip engineers and developers with an understanding of the questions surrounding the technology and services they are developing.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Societal Awareness Building&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Much of the discussion around skilling for AI is in the context of the workplace, but there is a need for awareness to be developed across society for a broader adaptation to AI. The Niti Aayog report takes the first steps towards this - noting the importance of highlighting the benefits of AI to the public. The conversation needs to go beyond this towards enabling individuals to recognize and adapt to changes that might be brought about - directly and indirectly - by AI - inside and outside of the workplace. This could include catalyzing a shift in mindset to life long learning and discussion around potential implications of human-machine interactions.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Early Childhood Awareness and Education &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;It is important that awareness around AI begins in early childhood. This is  in part because children already interact with AI and increasingly will do so and thus awareness is needed in how AI works and can be safely and ethically used. It is also important to start building the skills that will be necessary in an AI driven society from a young age.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Focus on marginalised groups &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Awareness, skills, and education should be targeted at national minorities including rural communities, the disabled, and women. Further, there should be a concerted  focus on communities that are under-represented in the tech sector-such as women and sexual minorities-to ensure that the algorithms themselves and the community working on AI driven solutions are holistic and cohesive. For example, Iridescent focuses on girls, children, and families to enable them to adapt to changes like artificial intelligence through promoting curiosity, creativity, and perseverance to become lifelong learners.&lt;a href="#_ftn39" name="_ftnref39"&gt;&lt;sup&gt;&lt;sup&gt;[39]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; This will be important towards ensuring that AI does not deepen societal  and global inequalities including digital divides. Widespread use of AI will undoubtedly require re-skilling various stakeholders in order to make them aware of the prospects of AI.&lt;a href="#_ftn40" name="_ftnref40"&gt;&lt;sup&gt;&lt;sup&gt;[40]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Artificial Intelligence itself can be used as a resource in the re-skilling process itself-as it would be used in the education sector to gauge people’s comfort with the technology and plug necessary gaps.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Improved access to and awareness of Internet of Things&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The development of smart content or Intelligent Tutoring Systems in the education can only be done on a large scale if both the teacher and the student has access to and feel comfortable with using basic IoT devices . A U.K. government report has suggested that any skilled workforce  using AI should be a mix of those with a basic understanding responsible for implementation at the grassroots level , more informed users and specialists with advanced development and implementation skills.&lt;a href="#_ftn41" name="_ftnref41"&gt;&lt;sup&gt;&lt;sup&gt;[41]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;The same logic applies to the agriculture sector, where the government is looking to develop smart weather-pattern tracking applications. A potential short-term solution may lie in ensuring that key actors have access to an  IoT device so that he/she may access digital and then impart the benefits of access to proximate individuals. In the education sector, this would involve ensuring that all teachers have access to and are competent in using an IoT device. In the agricultural sector, this may involve equipping each village with a set of IoT devices so that the information can be shared among concerned individuals. Such an approach recognizes that AI is not the only technology catalyzing change - for example industry 4.0 is understood as  comprising of a suite of technologies including but not limited to AI.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Public Discourse&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;AI solutions bring together and process vast amounts of granular data from a variety of public and private sources: obtained from third parties or generated by the AI through its interaction with its environment. This means that very granular and non-traditional data points now feed into decision-making processes. Public discussion is needed to understand social and cultural norms and standards, and how these might translate into acceptable-use norms for data in various sectors.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Coordination and collaboration across stakeholders &lt;/b&gt;&lt;/h3&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Development of Contextually Nuanced and Appropriate AI Solutions &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Towards ensuring effectiveness and accuracy, it is important that solutions used in India are developed to account for cultural nuances and diversity. Our research suggests this could be done in a number of ways: training AI solutions used in health on data from Indian patients to account for differences in demographics,&lt;a href="#_ftn42" name="_ftnref42"&gt;&lt;sup&gt;&lt;sup&gt;[42]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; focusing on natural language voice recognition to account for the diversity of languages and digital skills in the Indian context,&lt;a href="#_ftn43" name="_ftnref43"&gt;&lt;sup&gt;&lt;sup&gt;[43]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and developing and applying AI to reflect societal norms and understandings.&lt;a href="#_ftn44" name="_ftnref44"&gt;&lt;sup&gt;&lt;sup&gt;[44]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Continuing, deepening, and expanding  partnerships for innovation&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Continued innovation, while holistically accounting for the challenges that AI poses, will be key for actors in the different sectors to remain competitive. As noted across the case study reports, partnerships are key to facilitating this innovation and filling capacity gaps. These partnerships can be across sectors, institutions, domains, geographies, and stakeholder groups: for example, finance/telecom, public/private, national/international, ethics/software development/law, and academia/civil society/industry/government. We would emphasize collaboration between actors across different domains and stakeholder groups, as developing holistic AI solutions demands multiple understandings and perspectives.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Coordinated Implementation&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Key sectors in India need to begin taking steps towards sector-wide coordination in implementing AI. Potential stress and system-wide vulnerabilities would need to be considered when undertaking this. Sectoral regulators such as the RBI, TRAI, and the Medical Council of India are ideally placed to lead this coordination.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Develop contextual standard benchmarks to assess quality of algorithms&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;In part because of the nascency of the development and implementation of AI, standard benchmarks can help in assessing the quality and appropriateness of algorithms, enabling effective assessments of their impact and informing selection by institutions adopting solutions. It may be most effective to define such benchmarks at a sectoral level (finance etc.) or by technology and solution (facial recognition etc.). Ideally, these efforts would be led by the government in collaboration with multiple stakeholders.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Developing a framework for working with the private sector for use-cases by the government&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;There are various potential use cases through which the government could use AI as a tool for augmenting public service delivery in India. However, the lack of capacity, both human-resource and technological, means that entering into partnerships with the private sector may enable more fruitful harnessing of AI, as has been seen with existing MOUs in the agricultural&lt;a href="#_ftn45" name="_ftnref45"&gt;&lt;sup&gt;&lt;sup&gt;[45]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and healthcare sectors.&lt;a href="#_ftn46" name="_ftnref46"&gt;&lt;sup&gt;&lt;sup&gt;[46]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; However, such partnerships must be used as a means to build capacity within the various nodes in the set-up rather than relying only on the private sector partner to continue delivering sustainable solutions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In particular, where AI is used for governance, there is a need to evolve clear parameters for impact assessment prior to the deployment of the technology, mapping its estimated impact against clearly defined objectives, which must also include due process, procedural fairness, and human rights considerations. As per Article 12 of the Indian Constitution, whenever the government exercises a public function, it is bound by the entire gamut of fundamental rights articulated in Part III of the Constitution. This is a crucial consideration the government will have to bear in mind whenever it uses AI, regardless of the sector. In all cases of public service delivery, primary accountability for the use of AI should lie with the government itself, which means that a cohesive and uniform framework regulating these partnerships must be conceptualised. This framework should incorporate: (a) uniformity in the wording and content of the contracts that the government signs, (b) imposition of obligations of transparency and accountability on the developer to ensure that the solutions developed conform to constitutional standards, and (c) continuous evaluation of private sector developers by the government and experts to ensure that they are complying with their obligations.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Defining Safety Critical AI&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The implications of AI differ according to use. Some jurisdictions, such as the EU, are beginning to define sectors where AI should play the role of augmenting jobs as opposed to functioning autonomously. The Global Partnership on AI has termed sectors where AI tools supplement or replace human decision-making in areas such as health and transportation ‘safety critical AI’ and is researching best practices for the application of AI in these areas. India will need to think through whether a threshold needs to be set and more stringent regulation applied. In addition to uses in health and transportation, certain uses in defense and law enforcement would also require more stringent regulation.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Appropriate certification mechanisms&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Appropriate certification mechanisms will be important in ensuring the quality of AI solutions. A significant barrier to the adoption of AI in some sectors in India is the acceptability of results, which include direct results arrived at using AI technologies as well as opinions provided by practitioners that are influenced or aided by AI technologies. For instance, start-ups in the healthcare sector often find that they are asked to show proof of a clinical trial when presenting their products to doctors and hospitals, yet clinical trials are expensive, time-consuming, and inappropriate forms of certification for medical devices and digital health platforms. Start-ups also face difficulty in conducting clinical trials, as there is no clear regulation to adhere to. They believe that while clinical trials are a necessity with respect to drugs, in the context of AI the process often results in obsolescence of the technology by the time it is approved. Yet medical practitioners are less trusting of start-ups that do not have approval from a national or international authority. A possible, partial solution suggested by these start-ups is to enable doctors to partner with them to conduct clinical trials together. However, such partnerships cannot come at the expense of rigour, and adequate protections need to be built into the enabling regulation.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Serving as a voice for emerging economies in the global debate on AI&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;While India should utilise artificial intelligence in the economy as a means of occupying a driving role in the global debate around AI, it must be cautious before allowing the use of Indian territory and infrastructure as a test bed for other emerging economies without considering the ramifications that the utilisation of AI may have for Indian citizens. The NITI Aayog report envisions India leveraging AI as a ‘garage’ for emerging economies.&lt;a href="#_ftn47" name="_ftnref47"&gt;&lt;sup&gt;&lt;sup&gt;[47]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; There are certain positive connotations to this suggestion, insofar as it propels India to occupy a leadership position, both technically and normatively, in determining future use cases for AI. However, to ensure that Indian citizens are not used as test subjects in this process, guiding principles could be developed, such as requiring that projects have clear benefits for India.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Frameworks for Regulation&lt;/h2&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;National legislation&lt;/b&gt;&lt;/h3&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Data Protection Law&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;India is a data-dense country, and the lack of a robust privacy regime allows the public and private sector easier access to large amounts of data than might be found in other contexts with stringent privacy laws. India also lacks a formal regulatory regime around anonymization. In our research we found that this gap does not always translate into a gap in practice, as some start-up companies have adopted self-regulatory practices towards protecting privacy, such as anonymising data they receive before using it further, but it does result in unclear and unharmonized practice.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In order to secure rights and address the emerging challenges to them posed by artificial intelligence, India needs to enact comprehensive privacy legislation applicable to the private and public sector to regulate the use of data, including use in artificial intelligence. A privacy legislation will also have to address more complicated questions, such as the use of publicly available data for training algorithms, how traditional data categories (PI vs. SPDI, metadata vs. content data, etc.) need to be revisited in light of AI, and how privacy legislation can be applied to autonomous decision-making. Similarly, surveillance laws may need to be revisited in light of AI-driven technologies such as facial recognition, UAS, and self-driving cars, as they provide new means of surveillance to the state and have potential implications for other rights such as the right to freedom of expression and the right to assembly. Sectoral protections can complement and build upon the baseline protections articulated in a national privacy legislation.&lt;a href="#_ftn48" name="_ftnref48"&gt;&lt;sup&gt;&lt;sup&gt;[48]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; In August 2018 the Srikrishna Committee released a draft data protection bill for India. We have reflected on how the Bill addresses AI. 
Though the Bill brings within its scope companies deploying emerging technologies and subjects them to the principles of privacy by design and data impact assessments, it is silent on key rights and responsibilities, namely the responsibility of the data controller to explain the logic and impact of automated decision-making, including profiling, to data subjects, and the right to opt out of automated decision-making in defined circumstances.&lt;a href="#_ftn49" name="_ftnref49"&gt;&lt;sup&gt;&lt;sup&gt;[49]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Further, the development of technological solutions to the dilemma between AI's need for access to larger quantities of data for multiple purposes and privacy should be emphasized.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Discrimination Law&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;A growing area of research globally is the social consequences of AI with a particular focus on its tendency to replicate or amplify existing and structural inequalities. Problems such as data invisibility of certain excluded groups,&lt;a href="#_ftn50" name="_ftnref50"&gt;&lt;sup&gt;&lt;sup&gt;[50]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; the myth of data objectivity and neutrality,&lt;a href="#_ftn51" name="_ftnref51"&gt;&lt;sup&gt;&lt;sup&gt;[51]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and data monopolization&lt;a href="#_ftn52" name="_ftnref52"&gt;&lt;sup&gt;&lt;sup&gt;[52]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; contribute to the disparate impacts of big data and AI. So far much of the research on this subject has not moved beyond the exploratory phase as is reflected in the reports released by the White House&lt;a href="#_ftn53" name="_ftnref53"&gt;&lt;sup&gt;&lt;sup&gt;[53]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and Federal Trade Commission&lt;a href="#_ftn54" name="_ftnref54"&gt;&lt;sup&gt;&lt;sup&gt;[54]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; in the United States. The biggest challenge in addressing discriminatory and disparate impacts of AI is ascertaining “where value-added personalization and segmentation ends and where harmful discrimination begins.”&lt;a href="#_ftn55" name="_ftnref55"&gt;&lt;sup&gt;&lt;sup&gt;[55]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Prominent examples of potentially discriminatory impact include the denial of loans based on attributes, such as neighbourhood of residence, that act as proxies and can be used to circumvent anti-discrimination laws preventing adverse determinations on the grounds of race, religion, caste, or gender; and adverse findings by predictive policing against persons who are unfavorably represented in the structurally biased datasets used by law enforcement agencies. There is a dire need for disparate impact regulation in sectors seeing the emerging use of AI.&lt;/p&gt;
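&lt;p style="text-align: justify; "&gt;One common, if rough, ex-post test for such disparate impact is the "four-fifths rule": comparing selection rates across groups. The sketch below is a minimal illustration of the arithmetic on invented loan-approval records; the field names, group labels, and the 0.8 threshold are illustrative assumptions drawn from US practice, not from any Indian regulation.&lt;/p&gt;

```python
# Sketch: the "four-fifths rule" disparate impact ratio over logged loan
# decisions. All data and field names here are invented for illustration.

def selection_rate(decisions, group):
    """Fraction of applicants in `group` whose loan was approved."""
    members = [d for d in decisions if d["group"] == group]
    if not members:
        return 0.0
    return sum(d["approved"] for d in members) / len(members)

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below ~0.8 are a common red flag for disparate impact."""
    ref_rate = selection_rate(decisions, reference)
    if ref_rate == 0:
        return float("inf")
    return selection_rate(decisions, protected) / ref_rate

decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
ratio = disparate_impact_ratio(decisions, protected="B", reference="A")
print(round(ratio, 2))  # 0.25 / 0.75 -> 0.33, well below the 0.8 threshold
```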
&lt;p style="text-align: justify; "&gt;Relatedly, developments in AI and its utilisation, especially in credit rating or risk assessment processes, could create complex problems that cannot be solved by principle-based regulation alone. Regulation intended specifically to avoid outcomes that regulators deem clearly harmful to consumers could be an additional tool that increases the fairness and effectiveness of the system.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Competition Law&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The conversation around using competition or antitrust laws to govern AI is still at an early stage. However, the emergence of numerous data-driven mergers and acquisitions, such as Yahoo-Verizon, Microsoft-LinkedIn, and Facebook-WhatsApp, has made it difficult to ignore the potential role of competition law in the governance of data collection and processing practices. It is important to note that the impact of Big Data goes far beyond digital markets: the mergers of companies such as Bayer, Climate Corp, and Monsanto show that data-driven business models can lead to the convergence of companies from completely different sectors as well. So far, courts in Europe have looked at questions such as the impact of combining databases on competition&lt;a href="#_ftn56" name="_ftnref56"&gt;&lt;sup&gt;&lt;sup&gt;[56]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and have held that, in the context of merger control, data can be a relevant consideration if an undertaking achieves a dominant position through a merger, making it capable of gaining further market power through increased amounts of customer data. The market advantages of specific datasets have been evaluated in the past, and relevant factors have included whether the dataset could be replicated under reasonable conditions by competitors and whether its use was likely to result in a significant competitive advantage.&lt;a href="#_ftn57" name="_ftnref57"&gt;&lt;sup&gt;&lt;sup&gt;[57]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; However, there are limited circumstances in which big data meets the four traditional criteria for being a barrier to entry or a source of sustainable competitive advantage: inimitability, rarity, value, and non-substitutability.&lt;a href="#_ftn58" name="_ftnref58"&gt;&lt;sup&gt;&lt;sup&gt;[58]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Any use of competition law to curb data-exclusionary or data-exploitative practices will first have to meet the threshold of establishing a firm's capacity to derive market power from its ability to sustain datasets unavailable to its competitors. In this context, network effects, multi-homing practices, and the dynamism of digital markets are all relevant factors which could have both positive and negative impacts on competition. There is a need for greater discussion on data as a source of market power in both digital and non-digital markets, and on how this legal position can be used to curb data monopolies, especially in light of government-backed monopolies for identity verification and payments in India.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Consumer Protection Law&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The Consumer Protection Bill, 2015, tabled in Parliament towards the end of the monsoon session, introduced an expansive definition of the term “unfair trade practices.” The definition as per the Bill includes disclosing “to any other person any personal information given in confidence by the consumer.” The clause excludes from the scope of unfair trade practices disclosures made under provisions of any law in force or in the public interest. This provision could have a significant impact on personal data protection law in India. Alongside, there is also a need to ensure that principles such as safeguarding consumers' personal information, so that it is not used to their detriment, are included within the definition of unfair trade practices. This would provide consumers an efficient and relatively speedy forum to contest the adverse impacts of data-driven decision-making.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Sectoral Regulation &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Our research into sectoral case studies revealed a number of existing sectoral laws and policies applicable to aspects of AI. For example, in the health sector there are the Medical Council Professional Conduct, Etiquette, and Ethics Regulations 2002, the Electronic Health Records Standards 2016, the draft Medical Devices Rules 2017, and the draft Digital Information Security in Healthcare Act. In the finance sector there are the Credit Information Companies (Regulation) Act 2005 and 2006, the Securities and Exchange Board of India (Investment Advisers) Regulations, 2013, the Payment and Settlement Systems Act, 2007, the Banking Regulations Act 1949, the SEBI guidelines on robo-advisors, etc. Before new regulations or guidelines are developed, a comprehensive exercise needs to be undertaken at a sectoral level to understand (1) whether sectoral policy adequately addresses the changes being brought about by AI, and (2) if it does not, whether an amendment is possible, and if not, what form of policy would fill the gap.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Principled approach&lt;/b&gt;&lt;/h3&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Transparency&lt;/b&gt;&lt;/h4&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Audits&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;Internal and external audits can be mechanisms for creating transparency about the processes and results of AI solutions as they are implemented in a specific context. Audits can take place while a solution is still in ‘pilot’ mode and on a regular basis during implementation. For example, in the Payment Card Industry (PCI) tool, transparency is achieved through frequent audits, the results of which are simultaneously and instantly transmitted to the regulator and the developer. Ideally, parts of the results of the audit are also made available to the public, even if the entire results are not shared.&lt;/p&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Tiered Levels of Transparency&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;There are different levels and forms of transparency, as well as different ways of achieving them. The type and form of transparency can be tiered, depending on factors such as the criticality of the function, the potential direct and indirect harm, the sensitivity of the data involved, and the actor using the solution. The audience can also be tiered, ranging from an individual user to senior-level positions to oversight bodies.&lt;/p&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Human Facing Transparency&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;It will be important for India to define standards around human-machine interaction, including the level of transparency that will be required. Will chatbots need to disclose that they are chatbots? Will a notice need to be posted when facial recognition technology is used with a CCTV camera? Will a company need to disclose in its terms of service and privacy policies that data is processed via an AI-driven solution? Will there be a distinction between cases where the AI takes a decision autonomously and cases where it plays an augmenting role? Presently, the NITI Aayog paper is silent on these questions.&lt;/p&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Explainability&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;An explanation is not equivalent to complete transparency. The obligation to provide an explanation does not mean that the developer must necessarily know the flow of bits through the AI system. Instead, the legal requirement of providing an explanation requires an ability to explain how certain parameters may be utilised to arrive at an outcome in a certain situation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Doshi-Velez and Kortz have highlighted two technical ideas that may enhance a developer's ability to explain the functioning of AI systems:&lt;a href="#_ftn59" name="_ftnref59"&gt;&lt;sup&gt;&lt;sup&gt;[59]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;1) Differentiation and processing: AI systems are designed to have their inputs differentiated and processed through various forms of computation, in a reproducible and robust manner. Therefore, developers should be able to explain a particular decision by examining the inputs to determine which of them had the greatest impact on the outcome.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;2) Counterfactual faithfulness: the property of counterfactual faithfulness enables the developer to consider which factors caused a difference in outcomes. Both these techniques can be deployed without necessarily knowing the contents of the black box. As Pasquale puts it, ‘Explainability matters because the process of reason-giving is intrinsic to juridical determinations – not simply one modular characteristic jettisoned as anachronistic once automated prediction is sufficiently advanced.’&lt;a href="#_ftn60" name="_ftnref60"&gt;&lt;sup&gt;&lt;sup&gt;[60]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
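&lt;p style="text-align: justify; "&gt;As a minimal sketch of these two ideas, consider a toy scoring model (the weights, features, and threshold below are invented for illustration, not any real credit model): perturbing each input shows which of them drives the outcome, and searching for the smallest change that flips the decision yields a counterfactual explanation, without opening the model's internals.&lt;/p&gt;

```python
# Toy model and two black-box explanation probes: per-input impact via
# perturbation, and a counterfactual search. All parameters are invented.

def approve(features, threshold=0.5):
    # Toy model: weighted sum of normalised inputs.
    weights = {"income": 0.6, "debt": -0.3, "tenure": 0.2}
    score = sum(weights[k] * features[k] for k in weights)
    return score >= threshold

def input_impacts(features, delta=0.1):
    """Per-feature impact: does nudging this input by `delta` flip the decision?"""
    base = approve(features)
    return {k: approve(dict(features, **{k: features[k] + delta})) != base
            for k in features}

def counterfactual(features, key, step=0.05, limit=1.0):
    """Smallest increase to `key` that turns a rejection into an approval."""
    value = features[key]
    while value <= features[key] + limit:
        if approve(dict(features, **{key: value})):
            return value
        value += step
    return None  # no flip found within the search limit

applicant = {"income": 0.5, "debt": 0.4, "tenure": 0.3}
print(approve(applicant))                    # False: score 0.24 < 0.5
print(input_impacts(applicant, delta=0.5))   # only income flips the outcome
print(counterfactual(applicant, "income"))   # income level that would approve
```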
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Rules based system applied contextually&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;Oswald et al. have suggested two proposals that might mitigate algorithmic opacity by designing a broad rules-based system, whose implementation is applied in a context-specific manner that thoroughly evaluates the key enablers and challenges in each specific use case:&lt;a href="#_ftn61" name="_ftnref61"&gt;&lt;sup&gt;&lt;sup&gt;[61]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;Experimental proportionality is designed to enable the courts to make proportionality determinations about an algorithm at the experimental stage, before its impacts are fully realised, in a manner that ensures appropriate metrics for performance evaluation and cohesive principles of design have been adopted. In such cases they recommend that the courts give the benefit of the doubt to the public sector body, subject to another hearing within a stipulated period of time once data on the impacts of the algorithm becomes more readily available.&lt;/li&gt;
&lt;li&gt;‘ALGO-CARE' calls for the design of a rules-based system which ensures that the algorithms&lt;a href="#_ftn62" name="_ftnref62"&gt;&lt;sup&gt;&lt;sup&gt;[62]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; are:&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;(1) Advisory: Algorithms must retain an advisory capacity that augments existing human capability rather than replacing human discretion outright;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(2) Lawful: the algorithm's proposed function, application, individual effect, and use of datasets should be considered in symbiosis with the principles of necessity, proportionality, and data minimisation;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(3) Granularity: data analysis issues, such as the meaning of data, challenges stemming from disparate tracts of data, omitted data, and inferences, should be key points in the implementation process;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(4) Ownership: due regard should be given to intellectual property ownership, but in the case of algorithms used for governance it may be better to have open-source algorithms as the default. Regardless of the sector, the developer must ensure that the algorithm works in a manner that enables a third party to investigate its workings in an adversarial judicial context;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(5) Challengeable: the results of algorithmic analysis should be applied with regard to professional codes and regulations and be challengeable. In a report evaluating the NITI Aayog discussion paper, CIS has argued that AI used for governance must be made auditable in the public domain, if not released as Free and Open Source Software (FOSS), particularly in the case of AI that has implications for fundamental rights;&lt;a href="#_ftn63" name="_ftnref63"&gt;&lt;sup&gt;&lt;sup&gt;[63]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(6) Accuracy: The design of the algorithm should check for accuracy;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(7) Responsible: algorithms should consider a wider set of ethical and moral principles and the foundations of human rights as a guarantor of human dignity at all levels; and&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(8) Explainable: Machine Learning should be interpretable and accountable.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A rules-based system like ALGO-CARE can enable predictability in use frameworks for AI. Predictability complements and strengthens transparency.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Accountability&lt;/b&gt;&lt;/h4&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Conduct Impact Assessment&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;There is a need to evolve Algorithmic Impact Assessment frameworks for the different sectors in India, which should address issues of bias, unfairness, and other harmful impacts of the use of automated decision-making. AI is a nascent field, and the impact of the technology on the economy, society, etc. is yet to be fully understood. Impact assessment standards will be important in identifying and addressing potential or existing harms, and could be especially important in sectors or uses where there is direct human interaction with AI or a power dimension, such as in healthcare or use by the government. A 2018 report by the AI Now Institute lists methods the government should adopt for conducting this holistic assessment.&lt;a href="#_ftn64" name="_ftnref64"&gt;&lt;sup&gt;&lt;sup&gt;[64]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; These include: (1) self-assessment by the government department in charge of implementing the technology, (2) development of meaningful interdisciplinary external researcher review mechanisms, (3) notice to the public regarding self-assessment and external review, (4) soliciting of public comments for clarification or concerns, and (5) special regard for vulnerable communities who may not be able to exercise their voice in public proceedings. An adequate review mechanism that holistically evaluates the impact of AI would ideally include all five of these components in conjunction with each other.&lt;/p&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Regulation of Algorithms&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;Experts have voiced concerns about AI mimicking human prejudices due to biases present in machine learning algorithms. Researchers have shown that machine learning algorithms can imbibe gender and racial prejudices ingrained in language patterns or data collection processes. Since AI and machine learning algorithms are data-driven, they arrive at results and solutions based on available and historical data. When this data itself is biased, the solutions presented by the AI will also be biased. While this is inherently discriminatory, researchers have proposed ways to rectify these biases, which can occur at various stages, by introducing a counter-bias at another stage. It has also been suggested that data samples should be shaped so as to minimise the chances of algorithmic bias. Ideally, regulation of algorithms could be tailored around explainability, traceability, and scrutability. We recommend that the national strategy on AI take these factors into account, with a combination of a central agency driving the agenda and sectoral actors framing regulations around specific problematic uses of AI where implementation oversight is required.&lt;/p&gt;
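&lt;p style="text-align: justify; "&gt;One counter-bias technique of the kind referred to above is reweighting: assigning training samples weights so that each combination of group and label contributes equally, offsetting skew in historical data. The sketch below is a minimal illustration with invented field names and data, not a prescription for any particular sector.&lt;/p&gt;

```python
# Sketch: reweight training samples so each (group, label) pair carries equal
# total weight, countering skew in historical data. Data here is invented.
from collections import Counter

def reweight(samples):
    """Weight each sample inversely to the frequency of its (group, label)
    pair, so under-represented combinations are not drowned out in training."""
    counts = Counter((s["group"], s["label"]) for s in samples)
    n_pairs = len(counts)
    total = len(samples)
    # Target: every (group, label) pair sums to total / n_pairs weight.
    return [total / (n_pairs * counts[(s["group"], s["label"])]) for s in samples]

samples = [
    {"group": "men", "label": 1}, {"group": "men", "label": 1},
    {"group": "men", "label": 1}, {"group": "men", "label": 0},
    {"group": "women", "label": 1}, {"group": "women", "label": 0},
]
weights = reweight(samples)
# Each of the 4 (group, label) pairs now sums to 6 / 4 = 1.5 total weight.
print(weights)  # [0.5, 0.5, 0.5, 1.5, 1.5, 1.5]
```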
&lt;p style="text-align: justify; "&gt;As the government begins to adopt AI in governance, the extent to which, and the circumstances in which, autonomous decision-making capabilities can be delegated to AI need to be questioned. Questions about whether AI should be autonomous, should always have a human in the loop, and should have a ‘kill switch’ when used in such contexts also need to be answered. A framework or high-level principles can help guide these determinations. For example:&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;Modeling human behaviour: an AI solution trying to model human behaviour, as in judicial decision-making or predictive policing, may need to be more regulated, adhere to stricter standards, and receive more oversight than an algorithm trying to predict ‘natural’ phenomena such as traffic congestion or weather patterns.&lt;/li&gt;
&lt;li&gt;Human Impact: An AI solution that could cause greater harm if applied erroneously, such as a robot soldier that mistakenly targets a civilian, requires a different level and framework of regulation than an AI solution designed to create a learning path for a student in the education sector that errs in making an appropriate assessment.&lt;/li&gt;
&lt;li&gt;Primary User: AI solutions whose primary users are state agents discharging duties in the public interest, such as police officers, should be approached with more caution than those used by private individuals, such as farmers receiving weather alerts.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Fairness&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;It is possible to incorporate broad definitions of fairness into a wide range of data analysis and classification systems.&lt;a href="#_ftn65" name="_ftnref65"&gt;&lt;sup&gt;&lt;sup&gt;[65]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; While there can be no bright-line rules that will necessarily enable the operator or designer of a Machine Learning System to arrive at an ex ante determination of fairness, from a public policy perspective, there must be a set of rules or best practices that explain how notions of fairness should be utilised in the real world applications of AI-driven solutions.&lt;a href="#_ftn66" name="_ftnref66"&gt;&lt;sup&gt;&lt;sup&gt;[66]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; While broad parameters should be encoded by the developer to ensure compliance with constitutional standards, it is also crucial that the functioning of the algorithm allows for an ex-post determination of fairness by an independent oversight body if the impact of the AI driven solution is challenged.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Further, while there is no precedent on this anywhere in the world, India could consider establishing a Committee entrusted with the specific task of continuously evaluating the operation of AI-driven algorithms. Questions that the government would need to answer with regard to this body include:&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;What should the composition of the body be?&lt;/li&gt;
&lt;li&gt;What should be the procedural mechanisms that govern the operation of the body?&lt;/li&gt;
&lt;li&gt;When should the review committee step in? This is crucial because excessive review may re-entrench the bureaucracy that the AI driven solution was looking to eliminate.&lt;/li&gt;
&lt;li&gt;What information will be necessary for the review committee to carry out its determination? Will there be conflicts with IP, and if so how will these be resolved?&lt;/li&gt;
&lt;li&gt;To what degree will the findings of the committee be made public?&lt;/li&gt;
&lt;li&gt;What powers will the committee have? Beyond making determinations, how will these be enforced?&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Market incentives&lt;/b&gt;&lt;/h3&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Standards as a means to address data issues&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;With digitisation of legacy records and the ability to capture more granular data digitally, one of the biggest challenges facing Big Data is a lack of standardised data and interoperability frameworks. This is particularly true in the healthcare and medicine sector where medical records do not follow a clear standard, which poses a challenge to their datafication and analysis. The presence of developed standards in data management and exchange,  interoperable Distributed Application Platform and Services, Semantic related standards for markup, structure, query, semantics, Information access and exchange have been spoken of as essential to address the issues of lack of standards in Big Data.&lt;a href="#_ftn67" name="_ftnref67"&gt;&lt;sup&gt;&lt;sup&gt;[67]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Towards enabling usability of data, it is important that clear data standards are established. This has been recognized by Niti Aayog in its National Strategy for AI. On one hand, there can operational issues with allowing each organisation to choose their own specific standards to operate under, while on the other hand, non-uniform digitisation of data will also cause several practical problems, most primarily to do with interoperability of the individual services, as well as their usability. For instance, in the healthcare sector, though India has adopted an EHR policy, implementation of this policy is not yet harmonized - leading to different interpretations of ‘digitizing records (i.e taking snapshots of doctor notes), retention methods and periods, and comprehensive implementation across all hospital data. Similarly, while independent banks and other financial organisations are already following, or in the process of developing internal practices,there exist no uniform standards for digitisation of financial data. As AI development, and application becomes more mainstream in the financial sector, the lack of a fixed standard could create significant problems.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Better Design Principles in Data Collection&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;An enduring criticism of the existing notice and consent framework has been that long, verbose and unintelligible privacy notices are not efficient in informing individuals and helping them make rational choices. While this problem predates Big Data, it has only become more pronounced in recent times, given the ubiquity of data collection and implicit ways in which data is being collected and harvested. Further, constrained interfaces on mobile devices, wearables, and smart home devices connected in an Internet of Things amplify the usability issues of the privacy notices. Some of the issues with privacy notices include Notice complexity, lack of real choices, notices decoupled from the system collecting data etc. An industry standard for a design approach to privacy notices which includes looking at factors such as the timing of the notice, the channels used for communicating the notices, the modality (written, audio, machine readable, visual) of the notice and whether the notice only provides information or also include choices within its framework, would be of great help.  Further, use of privacy by design principles can be done not just at the level of privacy notices but at each step of the information flow, and the architecture of the system can be geared towards more privacy enhanced choices.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref1" name="_ftn1"&gt;&lt;sup&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/blog/artificial-intelligence-in-india-a-compendium&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref2" name="_ftn2"&gt;&lt;sup&gt;&lt;sup&gt;[2]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf"&gt;https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref3" name="_ftn3"&gt;&lt;sup&gt;&lt;sup&gt;[3]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf"&gt;https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref4" name="_ftn4"&gt;&lt;sup&gt;&lt;sup&gt;[4]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref5" name="_ftn5"&gt;&lt;sup&gt;&lt;sup&gt;[5]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://www.nedo.go.jp/content/100865202.pdf"&gt;http://www.nedo.go.jp/content/100865202.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref6" name="_ftn6"&gt;&lt;sup&gt;&lt;sup&gt;[6]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.eu-robotics.net/sparc/10-success-stories/european-robotics-creating-new-markets.html?changelang=2&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref7" name="_ftn7"&gt;&lt;sup&gt;&lt;sup&gt;[7]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy"&gt;https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref8" name="_ftn8"&gt;&lt;sup&gt;&lt;sup&gt;[8]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://www.newamerica.org/cybersecurity-initiative/blog/chinas-plan-lead-ai-purpose-prospects-and-problems/"&gt;https://www.newamerica.org/cybersecurity-initiative/blog/chinas-plan-lead-ai-purpose-prospects-and-problems/&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref9" name="_ftn9"&gt;&lt;sup&gt;&lt;sup&gt;[9]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://www.uaeai.ae/en/"&gt;http://www.uaeai.ae/en/&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref10" name="_ftn10"&gt;&lt;sup&gt;&lt;sup&gt;[10]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://www.aisingapore.org/"&gt;https://www.aisingapore.org/&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref11" name="_ftn11"&gt;&lt;sup&gt;&lt;sup&gt;[11]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://news.joins.com/article/22625271"&gt;https://news.joins.com/article/22625271&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref12" name="_ftn12"&gt;&lt;sup&gt;&lt;sup&gt;[12]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf"&gt;https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref13" name="_ftn13"&gt;&lt;sup&gt;&lt;sup&gt;[13]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe"&gt;https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe&lt;/a&gt; &lt;a href="https://www.euractiv.com/section/digital/news/twenty-four-eu-countries-sign-artificial-intelligence-pact-in-bid-to-compete-with-us-china/"&gt;https://www.euractiv.com/section/digital/news/twenty-four-eu-countries-sign-artificial-intelligence-pact-in-bid-to-compete-with-us-china/&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref14" name="_ftn14"&gt;&lt;sup&gt;&lt;sup&gt;[14]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.aitf.org.in/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref15" name="_ftn15"&gt;&lt;sup&gt;&lt;sup&gt;[15]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; http://www.niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref16" name="_ftn16"&gt;&lt;sup&gt;&lt;sup&gt;[16]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref17" name="_ftn17"&gt;&lt;sup&gt;&lt;sup&gt;[17]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref18" name="_ftn18"&gt;&lt;sup&gt;&lt;sup&gt;[18]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref19" name="_ftn19"&gt;&lt;sup&gt;&lt;sup&gt;[19]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref20" name="_ftn20"&gt;&lt;sup&gt;&lt;sup&gt;[20]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe"&gt;https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref21" name="_ftn21"&gt;&lt;sup&gt;&lt;sup&gt;[21]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; http://pib.nic.in/newsite/PrintRelease.aspx?relid=181007&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref22" name="_ftn22"&gt;&lt;sup&gt;&lt;sup&gt;[22]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Ryan Calo, 2017 Artificial Intelligence Policy: A Primer and Roadmap. U.C. Davis L. Review,&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Vol. 51, pp. 398 - 435.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt; &lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref23" name="_ftn23"&gt;&lt;sup&gt;&lt;sup&gt;[23]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://trai.gov.in/sites/default/files/CIS_07_11_2017.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref24" name="_ftn24"&gt;&lt;sup&gt;&lt;sup&gt;[24]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref25" name="_ftn25"&gt;&lt;sup&gt;&lt;sup&gt;[25]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; http://www.niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref26" name="_ftn26"&gt;&lt;sup&gt;&lt;sup&gt;[26]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://martechtoday.com/bottos-launches-a-marketplace-for-data-to-train-ai-models-214265&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref27" name="_ftn27"&gt;&lt;sup&gt;&lt;sup&gt;[27]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://opensource.com/article/18/5/top-8-open-source-ai-technologies-machine-learning&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref28" name="_ftn28"&gt;&lt;sup&gt;&lt;sup&gt;[28]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Implicit Bias Problem, 93 WASH. L. REV. (forthcoming 2018) (manuscript at 23, 27-32),&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3024938"&gt;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3024938&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref29" name="_ftn29"&gt;&lt;sup&gt;&lt;sup&gt;[29]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Id&lt;/i&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref30" name="_ftn30"&gt;&lt;sup&gt;&lt;sup&gt;[30]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; H. Brendan McMahan, et al., Communication-Efficient Learning of Deep Networks&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;from Decentralized Data, arXiv:1602.05629 (Feb. 17, 2016), &lt;a href="https://arxiv.org/abs/1602.05629"&gt;https://arxiv.org/abs/1602.05629&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref31" name="_ftn31"&gt;&lt;sup&gt;&lt;sup&gt;[31]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Id&lt;/i&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref32" name="_ftn32"&gt;&lt;sup&gt;&lt;sup&gt;[32]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Pierre N. Leval, Nimmer Lecture: Fair Use Rescued, 44 UCLA L. REV. 1449, 1457 (1997).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref33" name="_ftn33"&gt;&lt;sup&gt;&lt;sup&gt;[33]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref34" name="_ftn34"&gt;&lt;sup&gt;&lt;sup&gt;[34]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref35" name="_ftn35"&gt;&lt;sup&gt;&lt;sup&gt;[35]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Discussion Paper on National Strategy for Artificial Intelligence | NITI Aayog | National Institution for Transforming India. (n.d.) p. 54. Retrieved from http://niti.gov.in/content/national-strategy-ai-discussion-paper.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref36" name="_ftn36"&gt;&lt;sup&gt;&lt;sup&gt;[36]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Leverhulme Centre for the Future of Intelligence, http://lcfi.ac.uk/.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref37" name="_ftn37"&gt;&lt;sup&gt;&lt;sup&gt;[37]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; AI Now, https://ainowinstitute.org/.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref38" name="_ftn38"&gt;&lt;sup&gt;&lt;sup&gt;[38]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref39" name="_ftn39"&gt;&lt;sup&gt;&lt;sup&gt;[39]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; http://iridescentlearning.org/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref40" name="_ftn40"&gt;&lt;sup&gt;&lt;sup&gt;[40]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref41" name="_ftn41"&gt;&lt;sup&gt;&lt;sup&gt;[41]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Points, L., &amp;amp; Potton, E. (2017). Artificial intelligence and automation in the UK.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref42" name="_ftn42"&gt;&lt;sup&gt;&lt;sup&gt;[42]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Paul, Y., Hickok, E., Sinha, A. and Tiwari, U., Artificial Intelligence in the Healthcare Industry in India, Centre for Internet and Society. Available at &lt;a href="https://cis-india.org/internet-governance/files/ai-and-healtchare-report"&gt;https://cis-india.org/internet-governance/files/ai-and-healtchare-report&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref43" name="_ftn43"&gt;&lt;sup&gt;&lt;sup&gt;[43]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Goudarzi, S., Hickok, E., and Sinha, A., AI in the Banking and Finance Industry in India,  Centre for Internet and Society. Available at &lt;a href="https://cis-india.org/internet-governance/blog/ai-in-banking-and-finance"&gt;https://cis-india.org/internet-governance/blog/ai-in-banking-and-finance&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref44" name="_ftn44"&gt;&lt;sup&gt;&lt;sup&gt;[44]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Paul, Y., Hickok, E., Sinha, A. and Tiwari, U., Artificial Intelligence in the Healthcare Industry in India, Centre for Internet and Society. Available at &lt;a href="https://cis-india.org/internet-governance/files/ai-and-healtchare-report"&gt;https://cis-india.org/internet-governance/files/ai-and-healtchare-report&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref45" name="_ftn45"&gt;&lt;sup&gt;&lt;sup&gt;[45]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://news.microsoft.com/en-in/government-karnataka-inks-mou-microsoft-use-ai-digital-agriculture/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref46" name="_ftn46"&gt;&lt;sup&gt;&lt;sup&gt;[46]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://news.microsoft.com/en-in/government-telangana-adopts-microsoft-cloud-becomes-first-state-use-artificial-intelligence-eye-care-screening-children/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref47" name="_ftn47"&gt;&lt;sup&gt;&lt;sup&gt;[47]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; NITI Aayog. (2018). Discussion Paper on National Strategy for Artificial Intelligence. Retrieved from http://niti.gov.in/content/national-strategy-ai-discussion-paper. 18&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref48" name="_ftn48"&gt;&lt;sup&gt;&lt;sup&gt;[48]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://edps.europa.eu/sites/edp/files/publication/16-10-19_marrakesh_ai_paper_en.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref49" name="_ftn49"&gt;&lt;sup&gt;&lt;sup&gt;[49]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref50" name="_ftn50"&gt;&lt;sup&gt;&lt;sup&gt;[50]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; J. Schradie, The Digital Production Gap: The Digital Divide and Web 2.0 Collide. Elsevier Poetics, 39 (1).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref51" name="_ftn51"&gt;&lt;sup&gt;&lt;sup&gt;[51]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; D Lazer, et al., The Parable of Google Flu: Traps in Big Data Analysis. Science. 343 (1).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref52" name="_ftn52"&gt;&lt;sup&gt;&lt;sup&gt;[52]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Danah Boyd and Kate Crawford,  Critical Questions for Big Data. Information, Communication &amp;amp; Society. 15 (5).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref53" name="_ftn53"&gt;&lt;sup&gt;&lt;sup&gt;[53]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; John Podesta, (2014) Big Data: Seizing Opportunities, Preserving Values, available at&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="http://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf"&gt;http://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref54" name="_ftn54"&gt;&lt;sup&gt;&lt;sup&gt;[54]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; E. Ramirez, (2014) FTC to Examine Effects of Big Data on Low Income and Underserved Consumers at September Workshop, available at &lt;a href="http://www.ftc.gov/news-events/press-releases/2014/04/ftc-examine-effects-big-data-lowincome-underserved-consumers"&gt;http://www.ftc.gov/news-events/press-releases/2014/04/ftc-examine-effects-big-data-lowincome-underserved-consumers&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref55" name="_ftn55"&gt;&lt;sup&gt;&lt;sup&gt;[55]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; M. Schrage, Big Data’s Dangerous New Era of Discrimination, available at &lt;a href="http://blogs.hbr.org/2014/01/bigdatas-dangerous-new-era-of-discrimination/"&gt;http://blogs.hbr.org/2014/01/bigdatas-dangerous-new-era-of-discrimination/&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref56" name="_ftn56"&gt;&lt;sup&gt;&lt;sup&gt;[56]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Google/DoubleClick Merger case&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref57" name="_ftn57"&gt;&lt;sup&gt;&lt;sup&gt;[57]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; French Competition Authority, Opinion n°10-A-13 of 1406.2010,&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;http://www.autoritedelaconcurrence.fr/pdf/avis/10a13.pdf. That opinion of the Authority aimed at&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;giving general guidance on that subject. It did not focus on any particular market or industry&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;although it described a possible application of its analysis to the telecom industry.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref58" name="_ftn58"&gt;&lt;sup&gt;&lt;sup&gt;[58]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://www.analysisgroup.com/is-big-data-a-true-source-of-market-power/#sthash.5ZHmrD1m.dpuf"&gt;http://www.analysisgroup.com/is-big-data-a-true-source-of-market-power/#sthash.5ZHmrD1m.dpuf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref59" name="_ftn59"&gt;&lt;sup&gt;&lt;sup&gt;[59]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O'Brien, D., ... &amp;amp; Wood, A. (2017). Accountability of AI under the law: The role of explanation. arXiv preprint arXiv:1711.01134.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref60" name="_ftn60"&gt;&lt;sup&gt;&lt;sup&gt;[60]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Frank A. Pasquale ‘Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society’ (July 14, 2017). Ohio State Law Journal, Vol. 78, 2017; U of Maryland Legal Studies Research Paper No. 2017-21, 7.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref61" name="_ftn61"&gt;&lt;sup&gt;&lt;sup&gt;[61]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Oswald, M., Grace, J., Urwin, S., &amp;amp; Barnes, G. C. (2018). Algorithmic risk assessment policing models: lessons from the Durham HART model and ‘Experimental’ proportionality. Information &amp;amp; Communications Technology Law, 27(2), 223-250.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref62" name="_ftn62"&gt;&lt;sup&gt;&lt;sup&gt;[62]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref63" name="_ftn63"&gt;&lt;sup&gt;&lt;sup&gt;[63]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Abraham S., Hickok E., Sinha A., Barooah S., Mohandas S., Bidare P. M., Dasgupta S., Ramachandran V., and Kumar S., NITI Aayog Discussion Paper: An aspirational step towards India’s AI policy. Retrieved from https://cis-india.org/internet-governance/files/niti-aayog-discussion-paper.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref64" name="_ftn64"&gt;&lt;sup&gt;&lt;sup&gt;[64]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Reisman D., Schultz J., Crawford K., Whittaker M., (2018, April) Algorithmic Impact Assessments: A Practical Framework For Public Agency Accountability. Retrieved from https://ainowinstitute.org/aiareport2018.pdf.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref65" name="_ftn65"&gt;&lt;sup&gt;&lt;sup&gt;[65]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Sample I., (2017, November 5) Computer says no: why making AIs fair, accountable and transparent is crucial. Retrieved from &lt;a href="https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial"&gt;https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref66" name="_ftn66"&gt;&lt;sup&gt;&lt;sup&gt;[66]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., &amp;amp; Yu, H. (2016). Accountable algorithms. U. Pa. L. Rev., 165, 633.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref67" name="_ftn67"&gt;&lt;sup&gt;&lt;sup&gt;[67]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://www.iso.org/iso/big_data_report-jtc1.pdf"&gt;http://www.iso.org/iso/big_data_report-jtc1.pdf&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/ai-in-india-a-policy-agenda'&gt;https://cis-india.org/internet-governance/blog/ai-in-india-a-policy-agenda&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amber Sinha, Elonnai Hickok and Arindrajit Basu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-09-05T15:39:59Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/huffington-post-august-25-2018-paul-bluementhal-and-gopal-sathe-indias-biometric-database-is-creating-a-perfect-surveillance-state">
    <title>India’s Biometric Database Is Creating A Perfect Surveillance State — And U.S. Tech Companies Are On Board</title>
    <link>https://cis-india.org/internet-governance/news/huffington-post-august-25-2018-paul-bluementhal-and-gopal-sathe-indias-biometric-database-is-creating-a-perfect-surveillance-state</link>
    <description>
        &lt;b&gt;The Aadhaar program offers a glimpse of the tech world's latest quest to control our lives, where dystopias are created in the name of helping the impoverished.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Paul Bluementhol and Gopal Sathe was published in &lt;a class="external-link" href="https://www.huffingtonpost.in/entry/india-aadhuar-tech-companies_us_5b7ebc53e4b0729515109fd0"&gt;Huffington Post&lt;/a&gt; on August 25, 2018. Sunil Abraham was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Big U.S. technology  companies are involved in the construction of one of the most intrusive  citizen surveillance programs in history.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;For the past nine years, India has  been building the world’s biggest biometric database by collecting the  fingerprints, iris scans and photos of nearly 1.3 billion people. For  U.S. tech companies like Microsoft, Amazon and Facebook, the project,  called Aadhaar (which means “proof” or “basis” in Hindi), could be a  gold mine.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The CEO of Microsoft has repeatedly praised the project, and local media have carried frequent reports on &lt;a href="https://m.economictimes.com/tech/hardware/uidai-wants-to-make-mobile-phones-aadhaar-enabled-holds-discussion-with-smartphone-makers/amp_articleshow/53441186.cms?__twitter_impression=true" rel="noopener noreferrer" target="_blank"&gt;consultations between the Indian government and senior executives&lt;/a&gt; from companies like Apple and Google (in addition to South Korean-based  Samsung) on how to make tech products Aadhaar-enabled. But when  reporters of HuffPost and HuffPost India asked these companies in the  past weeks to confirm they were integrating Aadhaar into their products,  only one company ― Google ― gave a definitive response.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;That’s because Aadhaar has become  deeply controversial, and the subject of a major Supreme Court of India  case that will decide the future of the program as early as this month.  Launched nine years ago as a simple and revolutionary way to streamline  access to welfare programs for India’s poor, the database has become  Indians’ gateway to nearly any type of service ― from food stamps to a  passport or a cell phone connection. Practical errors in the system have caused &lt;a href="https://stateofaadhaar.in/report_pages/state-of-aadhaar-report-2017-18/" rel="noopener noreferrer" target="_blank"&gt;millions&lt;/a&gt; of poor Indians to lose out on aid. And the exponential growth of the  project has sparked concerns among security researchers and academics  that India is the first step toward setting up a surveillance society to  rival China.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;A Scheme Born In The U.S.&lt;/b&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Tapping into Aadhaar would help big  tech companies access the data and transactions of millions of users in  the second most populous country on earth, explained &lt;a href="https://www.huffingtonpost.in/2018/06/06/after-beta-testing-on-a-billion-indians-the-tech-behind-aadhaar-is-going-global_a_23452248/" rel="noopener noreferrer" target="_blank"&gt;Usha Ramanathan&lt;/a&gt;, a Delhi-based lawyer, legal researcher and one of Aadhaar’s most vocal critics.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The idea for India’s national biometric identification system wasn’t unprecedented, and in fact, it has strong parallels with a system proposed for the United States. Following the Sept. 11, 2001, attacks, the CEO of Oracle, Larry Ellison, offered to build the&lt;a href="https://www.computerworld.com/article/2583197/data-privacy/ellison-offers-free-software-for-national-id.html" rel="noopener noreferrer" target="_blank"&gt; U.S. government software&lt;/a&gt; for a national identification system that would include a centralized computer database of all U.S. citizens. The program never got off the ground amid objections from privacy and civil liberties advocates, but India’s own Ellison figure, Nandan Nilekani, had a similar idea. The billionaire founder of IT consulting giant Infosys, Nilekani conceptualized Aadhaar as a way to eliminate waste and corruption in India’s social welfare programs. He lobbied the government to bring in Aadhaar, and went on to run the project under the administration of Manmohan Singh. Nilekani gained even more influence under current Prime Minister Narendra Modi, who moved to make Aadhaar necessary for almost any kind of business in India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The first 12-digit Aadhaar ID was issued in 2010. Today, over a billion people (around 89 percent of India’s population) have been included in the system ― from India’s unimaginably wealthy billionaires to the homeless, from residents of the country’s sprawling cities to remote, inaccessible villages. While initially a voluntary program, the database is now linked to just about all government programs. You need an Aadhaar ID to get a &lt;a href="https://www.businesstoday.in/current/economy-politics/uidai-aadhaar-tatkal-passports-deadline-extension-order/story/272576.html" rel="noopener noreferrer" target="_blank"&gt;passport issued or renewed&lt;/a&gt;. Aadhaar was made mandatory for operating a bank account, using a cell phone or investing in mutual funds, only for those requirements to be rolled back pending the Supreme Court verdict on the constitutionality of the project.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;As Aadhaar identification has become integrated into other systems like banking, cell phones and government programs, tech companies can use the program to cross-reference their datasets against other&lt;a href="https://www.hindustantimes.com/india-news/why-state-data-hubs-pose-a-risk-to-aadhaar-security/story-Klyl3yT5MkFk6Szg2yGg9N.html" rel="noopener noreferrer" target="_blank"&gt; databases&lt;/a&gt; and assemble a far more detailed and intrusive picture of Indians’ lives. That would allow them, for example, to better target products or advertising to the vast Indian population. “You can take a unique identifying number and use it to find data in different sectors,” explained &lt;a href="https://www.huffingtonpost.in/2018/04/25/aadhaar-seeding-fiasco-how-to-geo-locate-every-minority-family-in-ap-with-one-click_a_23419643/" rel="noopener noreferrer" target="_blank"&gt;Pam Dixon&lt;/a&gt;, executive director of the World Privacy Forum, an American public interest research group. “That number can be cross-walked across all the different parts of their life.”&lt;/p&gt;
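The cross-walking Dixon describes can be illustrated with a minimal Python sketch; every database, field name and ID below is hypothetical, invented purely for illustration:

```python
# Hypothetical illustration of "cross-walking" one unique ID across
# sector databases: once every dataset is keyed on the same number,
# a trivial lookup merges them into a single intrusive profile.
bank = {"1234-5678-9012": {"account_balance": 52000}}
telecom = {"1234-5678-9012": {"phone": "+91-9800000000"}}
pharmacy = {"1234-5678-9012": {"last_purchase": "HIV medication"}}

def cross_walk(uid, *databases):
    """Merge every record filed under the same ID into one profile."""
    profile = {}
    for db in databases:
        profile.update(db.get(uid, {}))
    return profile

profile = cross_walk("1234-5678-9012", bank, telecom, pharmacy)
# profile now spans finance, communications and health in one lookup.
```

The point of the sketch is that no sophisticated technique is needed: the shared identifier alone makes the join trivial.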
&lt;p style="text-align: justify; "&gt;Microsoft, which uses Aadhaar in a new version of Skype to verify users, declined to talk about its work integrating products with the Aadhaar database. But Bill Gates, Microsoft’s founder, &lt;a href="https://timesofindia.indiatimes.com/business/india-business/aadhaar-doesnt-pose-any-privacy-issue-gates/articleshow/64012833.cms" rel="noopener noreferrer" target="_blank"&gt;has publicly endorsed Aadhaar&lt;/a&gt; and his foundation is funding a World Bank program to bring Aadhaar-like ID programs to other countries. Gates has also argued that ID verification schemes like Aadhaar do not in themselves pose privacy issues. Microsoft CEO Satya Nadella has repeatedly praised Aadhaar, both in his recent book and during a &lt;a href="https://gadgets.ndtv.com/internet/features/satya-nadella-and-nandan-nilekani-talk-aadhaar-india-stack-ai-and-ar-1661798" rel="noopener noreferrer" target="_blank"&gt;tour across India&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Amazon did not respond to a request for comment, but according to a &lt;a href="https://www.buzzfeednews.com/article/pranavdixit/amazon-is-asking-indians-to-hand-over-their-aadhaar-indias" rel="noopener noreferrer" target="_blank"&gt;BuzzFeed report&lt;/a&gt;, the company told Indian customers that not uploading a copy of their Aadhaar “might result in a delay in the resolution or no resolution” of cases where packages were missing.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Facebook, too, failed to respond to repeated requests for comment, though the platform’s prompts to log in with the same name as one’s Aadhaar card sparked suspicions among &lt;a href="https://gadgets.ndtv.com/social-networking/news/facebook-aadhaar-real-name-new-user-sign-up-onboarding-process-test-1792648" rel="noopener noreferrer" target="_blank"&gt;users&lt;/a&gt; that it wanted everyone to use their Aadhaar-verified names and spellings so it could later build in Aadhaar functionality with minimal problems.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A spokesman for Google, which has its own payments platform in India called Tez, told HuffPost that the company has not integrated any of its products with Aadhaar. But there was outrage earlier in August when the Aadhaar helpline number was added &lt;a href="https://www.indiatoday.in/technology/news/story/aadhaar-number-in-phones-uidai-google-clarification-1306344-2018-08-06" rel="noopener noreferrer" target="_blank"&gt;to Android phones without informing users&lt;/a&gt;. Google claimed in a statement to the &lt;a href="https://economictimes.indiatimes.com/news/politics-and-nation/uidai-row-google-says-it-inadvertently-coded-the-number/articleshow/65264353.cms" rel="noopener noreferrer" target="_blank"&gt;&lt;i&gt;Economic Times&lt;/i&gt;&lt;/a&gt; that this happened “inadvertently.”&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Privacy Jeopardized For Millions&lt;/b&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;But the same features that could make tech companies millions are also the ones that threaten the privacy and security of millions of Indians.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“As long as [the data] is being  shared with so many people and services and companies, without knowing  who has what data, it will always be an issue,” said Srinivas Kodali, an  independent security researcher. “They can’t protect it until they  encrypt it and stop sharing data.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One government website allowed users to search and geolocate homes on the basis of &lt;a href="https://www.huffingtonpost.in/2018/04/25/aadhaar-seeding-fiasco-how-to-geo-locate-every-minority-family-in-ap-with-one-click_a_23419643/" rel="noopener noreferrer" target="_blank"&gt;caste and religion&lt;/a&gt; ― sparking fears of ethnic and religious violence in a country where lynchings, beatings and mob violence are commonplace. Another website broadcast the names, phone numbers and medical purchases — like generic Viagra and HIV medication — of &lt;a href="https://www.huffingtonpost.in/2018/06/17/andhra-pradesh-tracked-you-as-you-bought-viagra-then-put-your-name-and-phone-number-on-the-internet-for-the-world-to-see_a_23459943/" rel="noopener noreferrer" target="_blank"&gt;anyone who bought medicines&lt;/a&gt; from government stores. &lt;a href="https://www.huffingtonpost.in/2018/07/11/indias-latest-data-leak-is-so-basic-that-peoples-aadhaar-number-bank-account-and-fathers-name-are-just-one-google-search-away_a_23479694/" rel="noopener noreferrer" target="_blank"&gt;In another leak&lt;/a&gt;, a Google search for the phone numbers of farmers in Andhra Pradesh would reveal their Aadhaar numbers, addresses, fathers’ names and bank account numbers.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The leaks are aggravated by “a Star Trek-type obsession” with data dashboards, said Sunil Abraham, executive director of the Center for Internet and Society. Government departments have each created an online data dashboard with detailed personal records on individuals, he explained. This massive centralization of personal data, he said, &lt;a href="https://www.huffingtonpost.in/2018/07/23/how-andhra-pradesh-built-indias-first-police-state-using-aadhaar-and-a-census_a_23487838/" rel="noopener noreferrer" target="_blank"&gt;created a huge security risk&lt;/a&gt;, as the dashboards were accessible to any government official and, in many cases, were even left open to the public.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Authentication failures have led to deaths among the poorest sections of Indian society &lt;a href="https://timesofindia.indiatimes.com/city/ranchi/7-hunger-deaths-related-to-aadhaar/articleshow/64695700.cms" rel="noopener noreferrer" target="_blank"&gt;when people were denied government food rations&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;And much like the tech companies,  some local governments are using the system to connect data sets and  build expansive surveillance. In the state of Andhra Pradesh in India,  there’s a &lt;a href="https://www.huffingtonpost.in/2018/07/23/how-andhra-pradesh-built-indias-first-police-state-using-aadhaar-and-a-census_a_23487838/" rel="noopener noreferrer" target="_blank"&gt;war room next to the state chief minister’s office&lt;/a&gt;,  where a wall of screens shows details from databases that collect  information from every department. There are security cameras and  dashboards that track every mention of the chief minister on the news.  There’s a separate team watching what’s being said about him on social  media and there are also dashboards that collect information from IoT  [Internet of Things] sensors across the state.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Court Ruling Could Halt Rollout&lt;/b&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Those issues around privacy are why the dreams of government bureaucrats and large tech companies to build a perfect surveillance apparatus around Aadhaar may ultimately fall apart. The Supreme Court of India is set to rule on a case that could determine the future of the program.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The court is set to review 27 petitions, including on whether requiring an Aadhaar for government subsidies and benefits makes access to these programs conditional, even though the state is constitutionally bound to deliver them. The petitioners include lawyers, academics and a 92-year-old retired judge whose petition also secured the right to privacy as a fundamental right in August 2017. Petitioners further argue that Aadhaar’s capacity to be used to track and profile people is unconstitutional.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In its judgment, due any day now, the court will rule on all 27  petitions together. It will decide not only the fate of the Aadhaar Act  of 2016, but likely the future involvement of some of tech’s biggest  companies in one of the world’s most ambitious and divisive IT projects.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/huffington-post-august-25-2018-paul-bluementhal-and-gopal-sathe-indias-biometric-database-is-creating-a-perfect-surveillance-state'&gt;https://cis-india.org/internet-governance/news/huffington-post-august-25-2018-paul-bluementhal-and-gopal-sathe-indias-biometric-database-is-creating-a-perfect-surveillance-state&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-09-04T14:40:51Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india">
    <title>The Srikrishna Committee Data Protection Bill and Artificial Intelligence in India</title>
    <link>https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india</link>
    <description>
        &lt;b&gt;Artificial Intelligence in many ways is in direct conflict with traditional data protection principles and requirements including consent, purpose limitation, data minimization, retention and deletion, accountability, and transparency.&lt;/b&gt;
        &lt;h3 style="text-align: justify; "&gt;Privacy Considerations in AI&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Other related privacy concerns in the context of AI center around re-identification and de-anonymisation, discrimination, unfairness, inaccuracies, bias, opacity, profiling, misuse of data, and embedded power dynamics.&lt;a href="#_ftn1" name="_ftnref1"&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The need for large amounts of data to improve accuracy, the ability to process vast amounts of granular data, and the present tension between the explainability and the results of AI systems&lt;a href="#_ftn2" name="_ftnref2"&gt;&lt;sup&gt;[2]&lt;/sup&gt;&lt;/a&gt; have raised many concerns on both sides of the fence. On one hand, there is concern that heavy-handed or inappropriate regulation will stifle innovation: if developers can only use data for pre-defined purposes, the prospects of AI are limited. On the other hand, individuals are concerned that privacy will be significantly undermined by AI systems that collect and process data in real time and at a personal level not previously possible. Chatbots, home assistants, wearable devices, robot caregivers, facial recognition technology and the like can collect data from a person at an intimate level. At the same time, some have argued that AI can work towards protecting privacy by limiting the access that humans working at the respective companies have to personal data.&lt;a href="#_ftn3" name="_ftnref3"&gt;&lt;sup&gt;[3]&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;India is embracing AI. Two national roadmaps for AI were released in 2018 respectively by the Ministry of Commerce and Industry and Niti Aayog. Both roadmaps emphasized the importance of addressing privacy concerns in the context of AI and ensuring that a robust privacy legislation is enacted. In August 2018, the Srikrishna Committee released a draft Personal Data Protection Bill 2018 and the associated report that outlines and justifies a framework for privacy in India. As the development and use of AI in India continues to grow, it is important that India simultaneously moves forward with a privacy framework that addresses the privacy dimensions of AI.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In this article we attempt to analyse if and how the Srikrishna Committee draft Bill and report have addressed AI, contrast this with developments in the EU and the passing of the GDPR, and identify solutions that are being explored towards developing AI while upholding and safeguarding privacy.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;The GDPR and Artificial Intelligence&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The General Data Protection Regulation became enforceable in May 2018 and establishes a framework for the processing of personal data of individuals within the European Union. The GDPR has been described by the IAPP as taking a ‘risk based’ approach to data protection that pushes data controllers to engage in risk analysis and adopt ‘risk measured responses’.&lt;a href="#_ftn4" name="_ftnref4"&gt;&lt;sup&gt;[4]&lt;/sup&gt;&lt;/a&gt; Though the GDPR does not explicitly address artificial intelligence, it does have a number of provisions that address automated decision making and profiling, and a number of provisions that will impact companies using artificial intelligence in their business activities. These are outlined below:&lt;/p&gt;
&lt;ol style="text-align: justify; "&gt;
&lt;li&gt;&lt;b&gt;Data rights: &lt;/b&gt;The GDPR grants individuals a number of data rights: the right to be informed, right of access, right to rectification, right to erasure, right to restrict processing, right to data portability, right to object, and rights related to automated decision making including profiling. The last of these seeks to address concerns arising out of automated decision making by giving the individual the right to request not to be subject to a decision based solely on automated decision making, including profiling, if the decision would produce legal effects or similarly significantly affect them. There are three exceptions to this right: if the automated decision making is (a) necessary for the performance of a contract, (b) authorised by Union or Member State law, or (c) based on the individual’s explicit consent.&lt;a href="#_ftn5" name="_ftnref5"&gt;&lt;sup&gt;[5]&lt;/sup&gt;&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;b&gt;Transparency:&lt;/b&gt; Under Article 14, data controllers must notify individuals of the existence of automated decision making, including profiling, and provide meaningful information about the logic involved as well as the potential consequences of such processing, enabling individuals to exercise their right to opt out.&lt;a href="#_ftn6" name="_ftnref6"&gt;&lt;sup&gt;[6]&lt;/sup&gt;&lt;/a&gt; Importantly, this requirement has the potential to ensure that companies do not operate completely ‘black box’ algorithms within their business processes.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Fairness: &lt;/b&gt;The principle of fairness found under Article 5(1) will also apply to the processing of personal data by AI. The principle requires that personal data be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Recital 71 further clarifies that this includes implementing appropriate mathematical and statistical measures for profiling, ensuring that inaccuracies are corrected, and ensuring that processing does not result in discriminatory effects.&lt;a href="#_ftn7" name="_ftnref7"&gt;&lt;sup&gt;[7]&lt;/sup&gt;&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;b&gt;Purpose Limitation:&lt;/b&gt; The principle of purpose limitation (Article 5(1)(b)) requires that personal data be collected for specified, explicit, and legitimate purposes and not be further processed in a manner incompatible with those purposes. Processing for archiving purposes in the public interest, scientific or historical research purposes, or statistical purposes is not considered incompatible with the initial purposes. It has been noted that it is unclear whether research carried out through artificial intelligence would fall under this exception, as the GDPR does not define ‘scientific purposes’.&lt;a href="#_ftn8" name="_ftnref8"&gt;&lt;sup&gt;[8]&lt;/sup&gt;&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;b&gt;Privacy by Design and Default:&lt;/b&gt; Article 25 requires all data controllers to implement technical and organizational measures to meet the requirements of the regulation. This could include techniques like pseudonymisation. Data controllers are also required to implement appropriate technical and organizational measures to ensure that, by default, only personal data necessary for a specific purpose are processed.&lt;a href="#_ftn9" name="_ftnref9"&gt;&lt;sup&gt;[9]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Data Protection Impact Assessments:&lt;/b&gt; Article 35 requires data controllers to undertake impact assessments if they are undertaking processing that is likely to result in a high risk to individuals. This includes data controllers that undertake systematic and extensive profiling, process special categories of data or criminal offence data on a large scale, or systematically monitor publicly accessible places on a large scale. In implementation, some jurisdictions like the UK require impact assessments under additional conditions, including if the data controller: uses new technologies; uses profiling or special category data to decide on access to services; profiles individuals on a large scale; processes biometric data; processes genetic data; matches data or combines datasets from different sources; collects personal data from a source other than the individual without providing them with a privacy notice; tracks individuals’ location or behaviour; profiles children or targets marketing or online services at them; or processes data that might endanger the individual’s physical health or safety in the event of a security breach.&lt;a href="#_ftn10" name="_ftnref10"&gt;&lt;sup&gt;[10]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Security:&lt;/b&gt; Article 32 requires data controllers to ensure a level of security appropriate to the risk, including by employing methods like encryption and pseudonymisation. &lt;/li&gt;
&lt;/ol&gt;
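Pseudonymisation, the technique named above under privacy by design, can be sketched in a few lines: replace the direct identifier with a keyed hash, so records can still be linked consistently but cannot be traced back to the person without the controller's key. The key and record below are hypothetical, for illustration only:

```python
import hashlib
import hmac

# Hypothetical secret held only by the data controller; without it,
# pseudonyms cannot be regenerated from the original identifiers.
SECRET_KEY = b"controller-only-secret"

def pseudonymise(identifier):
    """Replace a direct identifier with a keyed SHA-256 pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"id": "1234-5678-9012", "purchase": "antibiotics"}
safe_record = {"id": pseudonymise(record["id"]), "purchase": record["purchase"]}
# The same input always yields the same pseudonym, so records remain
# linkable to each other without exposing the underlying identifier.
```

Note that pseudonymised data is still personal data under the GDPR, since the controller holding the key can re-identify individuals; the technique reduces risk rather than anonymising outright.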
&lt;h3 style="text-align: justify; "&gt;Srikrishna Committee Bill and AI&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The draft Data Protection Bill and associated report by the Srikrishna Committee were published in August 2018 and recommend a privacy framework for India. The Bill contains a number of provisions that will directly impact data fiduciaries using AI and that try to account for the unintended consequences of emerging technologies like AI. These include:&lt;/p&gt;
&lt;ol style="text-align: justify; "&gt;
&lt;li&gt;&lt;b&gt;Definition of Harm:&lt;/b&gt; The Bill defines harm as including bodily or mental injury; loss, distortion or theft of identity; financial loss or loss of property; loss of reputation or humiliation; loss of employment; any discriminatory treatment; any subjection to blackmail or extortion; any denial or withdrawal of a service, benefit or good resulting from an evaluative decision about the data principal; any restriction placed or suffered directly or indirectly on speech, movement or any other action arising out of a fear of being observed or surveilled; and any observation or surveillance that is not reasonably expected by the data principal. The Bill also allows for categories of significant harm to be further defined by the data protection authority.&lt;/li&gt;
&lt;/ol&gt;
&lt;p style="text-align: justify; "&gt;Many of the above are harms that have been associated with artificial intelligence ― specifically loss of employment, discriminatory treatment, and denial of service. Enabling the data protection authority to further define categories of significant harm could allow unexpected harms arising from the use of AI to come under the ambit of the Bill.&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;&lt;b&gt;Data Rights:&lt;/b&gt; Like the GDPR, the Bill creates a set of data rights for the individual, including the right to confirmation and access, correction, data portability, and the right to be forgotten. At the same time, the Bill is intentionally silent on the rights and obligations in the GDPR that address automated decision making: the right to object to processing,&lt;a href="#_ftn11" name="_ftnref11"&gt;&lt;sup&gt;[11]&lt;/sup&gt;&lt;/a&gt; the right to opt out of automated decision making,&lt;a href="#_ftn12" name="_ftnref12"&gt;&lt;sup&gt;[12]&lt;/sup&gt;&lt;/a&gt; and the obligation on the data controller to inform the individual about the use of automated decision making and provide basic information regarding its logic and impact.&lt;a href="#_ftn13" name="_ftnref13"&gt;&lt;sup&gt;[13]&lt;/sup&gt;&lt;/a&gt; As justification, the Committee noted in its report that the right to restrict processing may be unnecessary in India, as it provides only interim remedies around issues such as inaccuracy of data, and the same result can be achieved by a data principal approaching the DPA or the courts for a stay on processing, or by simply withdrawing consent. The objective of protecting against discrimination, bias, and opaque decisions, which the right to object to automated processing and to receive information about the processing of data seeks to fulfill, would in the Indian context be better achieved through an accountability framework requiring data fiduciaries that make evaluative decisions through automated means to set up processes that ‘weed out’ discrimination. If discrimination has nevertheless taken place, individuals can seek remedy through the courts.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;By taking this approach, the Bill creates a framework to address harms arising out of AI, but does not empower the individual to decide how their data is processed and remains silent on the issue of ‘black box’ algorithms.&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;&lt;b&gt;Data Quality&lt;/b&gt;: Requires data fiduciaries to ensure that personal data being processed is complete, accurate, not misleading and updated with respect to the purposes for which it is processed. When taking steps to comply with this, data fiduciaries must consider whether the personal data is likely to be used to make a decision about the data principal, whether it is likely to be disclosed to other individuals, and whether the personal data is kept in a form that distinguishes personal data based on facts from personal data based on opinions or personal assessments.&lt;a href="#_ftn14" name="_ftnref14"&gt;&lt;sup&gt;[14]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;This principle, while not mandating that data fiduciaries take into account considerations such as biases in datasets, could potentially be interpreted by the data protection authority to include within its scope means of ensuring that data does not contain or result in bias.&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;&lt;b&gt;Principle of Privacy by Design:&lt;/b&gt; Requires significant data fiduciaries to have in place a number of policies and measures around several aspects of privacy. These include: (a) measures to ensure managerial, organisational, business practices and technical systems are designed in a manner to anticipate, identify, and avoid harm to the data principal; (b) the obligations mentioned in Chapter II are embedded in organisational and business practices; (c) technology used in the processing of personal data is in accordance with commercially accepted or certified standards; (d) legitimate interests of business, including any innovation, are achieved without compromising privacy interests; (e) privacy is protected throughout processing, from the point of collection to deletion of personal data; (f) processing of personal data is carried out in a transparent manner; and (g) the interest of the data principal is accounted for at every stage of processing of personal data.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;A number of these (a, d, e, and g) require that the interest of the data principal is accounted for throughout the processing of personal data. This will be significant for systems driven by artificial intelligence, as a number of the harms that have arisen from the use of AI (discrimination, denial of service, loss of employment) have been brought under the definition of harm within the Bill. Placing the interest of the data principal first is also important in protecting against unintended consequences or harms that may arise from AI.&lt;a href="#_ftn15" name="_ftnref15"&gt;&lt;sup&gt;[15]&lt;/sup&gt;&lt;/a&gt; If the Bill is enacted, it will be important to see what policies and measures emerge in the context of AI to comply with this principle, and what commercially accepted or certified standards companies rely on to comply with (c).&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;&lt;b&gt;Data Protection Impact Assessment:&lt;/b&gt; Requires data fiduciaries to undertake a data protection impact assessment when implementing new technologies, large scale profiling, or use of sensitive personal data. Such assessments need to include a detailed description of the proposed processing operation, the purpose of the processing and the nature of personal data being processed, an assessment of the potential harm that may be caused to the data principals whose personal data is proposed to be processed, and measures for managing, minimising, mitigating or removing such risk of harm. If the Authority finds that the processing is likely to cause harm to the data principals, it may direct the data fiduciary to cease the processing or to carry it out only under certain conditions. This requirement applies to all significant data fiduciaries, and to other data fiduciaries as required by the DPA.&lt;a href="#_ftn16" name="_ftnref16"&gt;&lt;sup&gt;[16]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;This principle will apply to companies implementing AI systems. For AI systems, it will be important to see how much information the DPA will require when data fiduciaries provide detailed descriptions of the proposed processing operation and the purpose of processing.&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;&lt;b&gt;Classification of data fiduciaries as significant data fiduciaries&lt;/b&gt;: The Authority has the ability to notify certain categories of data fiduciaries as significant data fiduciaries based on (1) the volume of personal data processed, (2) the sensitivity of the personal data processed, (3) the turnover of the data fiduciary, (4) the risk of harm resulting from any processing being undertaken by the fiduciary, (5) the use of new technologies for processing, and (6) any other factor relevant to causing harm to any data principal. If a data fiduciary falls under any of these conditions, it is required to register with the Authority. All significant data fiduciaries must undertake data protection impact assessments, maintain records as per the Bill, undergo data audits, and have in place a data protection officer.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;Under this provision, companies deploying artificial intelligence would come under the definition of a significant data fiduciary and be subject to the principles of privacy by design, etc., articulated in the chapter. The exception will be data fiduciaries that come under the definition of ‘small entity’ found in section 48.&lt;a href="#_ftn17" name="_ftnref17"&gt;&lt;sup&gt;[17]&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;&lt;b&gt;Restrictions on cross border transfer of personal data: &lt;/b&gt;Requires all data fiduciaries to store a copy of personal data on a server or data centre located in India, and notified categories of critical personal data to be processed only in servers located in India.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;It is interesting to note that in the context of cross border sharing of data,  the Bill is creating a new category of data that can be further defined beyond personal and sensitive personal data. For companies implementing artificial intelligence, this provision may prove cumbersome to comply with as many utilize cloud storage and facilities located outside of India for the processing of larger amounts of data.&lt;a href="#_ftn18" name="_ftnref18"&gt;&lt;sup&gt;&lt;sup&gt;[18]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;&lt;b&gt;Powers and functions of the Authority&lt;/b&gt;: The Bill lays down a number of functions of the Authority, one of which is to monitor technological developments and commercial practices that may affect the protection of personal data.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;By assumption, this will include monitoring of technological developments in the field of Artificial Intelligence.&lt;a href="#_ftn19" name="_ftnref19"&gt;&lt;sup&gt;&lt;sup&gt;[19]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;&lt;b&gt;Fair and reasonable processing: &lt;/b&gt;Requires that any person processing personal data owes a duty to the data principal to process such data in a fair and reasonable manner that respects the privacy of the data principal. In its report, the Srikrishna Committee explains that the principle of fair and reasonable processing is meant to address (1) power asymmetries between data principals and data fiduciaries, recognizing that data fiduciaries have a responsibility to act in the best interest of the data principal; (2) situations where processing may be legal but not necessarily fair or in the best interest of the data principal; and (3) the development of trust between the data principal and the data fiduciary.&lt;a href="#_ftn20" name="_ftnref20"&gt;&lt;sup&gt;&lt;sup&gt;[20]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;This is in contrast to the GDPR which requires processing to simultaneously meet the three conditions of fairness, lawfulness, and transparency.&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;&lt;b&gt;Purpose Limitation: &lt;/b&gt;Personal data can only be processed for the purposes specified or any other purpose that the data principal would reasonably expect.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;As a note, the Srikrishna Committee Bill does not include ‘scientific purposes’ as an exception to the principle of purpose limitation as found in the GDPR,&lt;a href="#_ftn21" name="_ftnref21"&gt;&lt;sup&gt;&lt;sup&gt;[21]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and instead creates an exception for research, archiving, or statistical purposes.&lt;a href="#_ftn22" name="_ftnref22"&gt;&lt;sup&gt;&lt;sup&gt;[22]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; The DPA has the responsibility of developing codes defining research purposes under the act.&lt;a href="#_ftn23" name="_ftnref23"&gt;&lt;sup&gt;&lt;sup&gt;[23]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;&lt;b&gt;Security Safeguards:&lt;/b&gt; Every data fiduciary must implement appropriate security safeguards including the use of methods such as de-identification and encryption, steps to protect the integrity of personal data, and steps necessary to prevent misuse, unauthorised access to, modification, and disclosure or destruction of personal data.&lt;a href="#_ftn24" name="_ftnref24"&gt;&lt;sup&gt;&lt;sup&gt;[24]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;Unlike the GDPR which explicitly refers to the technique of pseudonymization, the Srikrishna  uses Bill uses term de-identification.  The Srikrishna Report clarifies that the this includes techniques like pseudonymization and masking and further clarifies that because of the  risk of re-identification, de-identified personal data should still receive the same level of protection as personal data. The Bill further gives the DPA the authority to define appropriate levels of anonymization. &lt;a href="#_ftn25" name="_ftnref25"&gt;&lt;sup&gt;&lt;sup&gt;[25]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Technical perspectives of Privacy and AI&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;There is an emerging body of work that is looking at solutions to the dilemma of maintaining privacy while employing artificial intelligence and finding ways in which artificial intelligence can support and strengthen privacy. For example, there are AI driven platforms that leverage the technology to help a business to meet regulatory compliance with data protection laws&lt;a href="#_ftn26" name="_ftnref26"&gt;&lt;sup&gt;&lt;sup&gt;[26]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;, as well as research into AI privacy enhancing technologies.&lt;a href="#_ftn27" name="_ftnref27"&gt;&lt;sup&gt;&lt;sup&gt;[27]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Standards setting bodies like IEEE have undertaken work on the ethical considerations in the collection and use of personal data when designing, developing, and/or deploying AI through the standard ‘Ethically Aligned Design’.&lt;a href="#_ftn28" name="_ftnref28"&gt;&lt;sup&gt;&lt;sup&gt;[28]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; . In the article Artificial Intelligence and Privacy by Datatilsynet - the Norwegian Data Protection Authority&lt;a href="#_ftn29" name="_ftnref29"&gt;&lt;sup&gt;&lt;sup&gt;[29]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; break such methods into three categories:&lt;/p&gt;
&lt;ol style="text-align: justify; "&gt;
&lt;li&gt;Techniques for reducing the need for large amounts of training data. Such techniques include:&lt;/li&gt;
&lt;ol&gt;
&lt;li&gt;&lt;b&gt;Generative adversarial networks (GANs):&lt;/b&gt; GANs are used to create synthetic data and can address the need for large volumes of labelled data without relying on real data containing personal data. GANs could potentially be useful from a research and development perspective in sectors like healthcare, where most data would qualify as sensitive personal data.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Federated Learning:&lt;/b&gt; Federated learning allows models to be trained and improved on data from a large pool of users without directly collecting user data. This is achieved by running a copy of the centralized model on each client unit, where it is improved on local data. Only the changes from these improvements are shared back with the centralized server; an average of the changes from many individual client units becomes the basis for improving the centralized model.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Matrix Capsules&lt;/b&gt;: Proposed by Google researcher Geoff Hinton, Matrix Capsules improve the accuracy of existing neural networks while requiring less data.&lt;a href="#_ftn30" name="_ftnref30"&gt;&lt;sup&gt;&lt;sup&gt;[30]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;li&gt;Techniques that uphold data protection without reducing the basic data set&lt;/li&gt;
&lt;ol&gt;
&lt;li&gt;&lt;b&gt;Differential Privacy&lt;/b&gt;: Differential privacy intentionally adds ‘noise’ to data when it is accessed. This allows aggregate insights to be drawn from personal data without revealing identifying information.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Homomorphic Encryption:&lt;/b&gt; Homomorphic encryption allows for the processing of data while it is still encrypted. This addresses the need to access and use large amounts of personal data for multiple purposes.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Transfer Learning&lt;/b&gt;: Instead of building a new model from scratch, transfer learning builds upon existing models that are applied to new, related purposes or tasks. This has the potential to reduce the amount of training data needed.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;RAIRD&lt;/b&gt;: Developed by Statistics Norway and the Norwegian Centre for Research Data, RAIRD is a national research infrastructure that allows for access to large amounts of statistical data for research while managing statistical confidentiality. This is achieved by allowing researchers access to metadata. The metadata is used to build analyses which are then run against detailed data without giving access to actual data.&lt;a href="#_ftn31" name="_ftnref31"&gt;&lt;sup&gt;&lt;sup&gt;[31]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;li&gt;Techniques to move beyond opaque algorithms&lt;/li&gt;
&lt;ol&gt;
&lt;li&gt;&lt;b&gt;Explainable AI (XAI): &lt;/b&gt;DARPA, in collaboration with Oregon State University, is researching how to create explainable models and explanation interfaces while ensuring a high level of learning performance, in order to enable individuals to interact with, trust, and manage artificial intelligence.&lt;a href="#_ftn32" name="_ftnref32"&gt;&lt;sup&gt;&lt;sup&gt;[32]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; DARPA identifies a number of entities working on different models and interfaces for analytics and autonomy AI.&lt;a href="#_ftn33" name="_ftnref33"&gt;&lt;sup&gt;&lt;sup&gt;[33]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Local Interpretable Model Agnostic Explanations&lt;/b&gt;: Developed to enable trust between AI models and humans by generating explainers to highlight key aspects that were important to the model and its decision - thus providing insight into the rationale behind a model.&lt;a href="#_ftn34" name="_ftnref34"&gt;&lt;sup&gt;&lt;sup&gt;[34]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt; &lt;/ol&gt;
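Differential privacy, the first technique in the second category above, can be sketched with the classic Laplace mechanism: an aggregate query is answered after adding noise calibrated to the query's sensitivity and a privacy budget epsilon. This is a minimal illustration, not a production mechanism; the data and epsilon value are made up.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-differential privacy.
    A count has sensitivity 1 (adding or removing one person changes it
    by at most 1), so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative query: how many individuals are over 40?
ages = [23, 37, 41, 29, 55, 61, 33]
noisy_answer = private_count(ages, lambda a: a > 40, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; because repeated queries consume the privacy budget, real systems track cumulative epsilon across all releases.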
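Similarly, the federated learning loop described above, with clients refining a shared model locally and the server averaging only their updates, reduces to a few lines. This sketch is purely illustrative: a single scalar parameter stands in for a full model, and the local step assumes a mean squared error objective.

```python
def local_update(global_model: float, local_data: list, lr: float = 0.1) -> float:
    """One gradient step on mean squared error against this client's data.
    Only the updated parameter leaves the device, never the raw data."""
    grad = sum(2.0 * (global_model - x) for x in local_data) / len(local_data)
    return global_model - lr * grad

def federated_round(global_model: float, client_datasets: list) -> float:
    """Server side of one round: average the clients' updated models
    (a simplified form of federated averaging)."""
    updates = [local_update(global_model, data) for data in client_datasets]
    return sum(updates) / len(updates)

# Three hypothetical clients whose raw data never leaves the device.
clients = [[1.0, 2.0], [3.0], [2.0, 4.0]]
model = 0.0
for _ in range(50):
    model = federated_round(model, clients)
# model converges toward the average of the per-client means (2.5 here)
```

The privacy gain is structural: the server only ever sees parameter updates, though real deployments typically add secure aggregation or differential privacy on the updates themselves, since updates can still leak information.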
&lt;h3 style="text-align: justify; "&gt;Public Sector use of AI and Privacy&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The role of AI in public sector decision making has been gradually growing globally across sectors such as law enforcement, education, transportation, judicial decision making and healthcare. In India too, use of automated processing in electronic governance under the Digital India mission, domestic law enforcement agencies monitoring social media content and educational schemes is being discussed and gradually implemented. Much like the potential applications of AI across sub-sectors, the nature of regulatory issues are also diverse.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Aside from the accountability framework discussed in the Srikrishna Committee report, the Puttaswamy judgment also provides a basis for governance of AI with respect to its concerns for privacy, in limited contexts. The sources of right to privacy as articulated in the Puttaswamy judgments included the terms ‘personal liberty’ under Article 21 of the Constitution. In order to fully appreciate how constitutional principles could apply to automated processing in India, we need to look closely at the origins of privacy under liberty. In the famous case of &lt;i&gt;AK Gopalan&lt;/i&gt; there is a protracted discussion on the contents of the rights under Article 21. Amongst the majority opinions itself, the opinion was divided. While Sastri J. and Mukherjea J. took the restrictive view that limiting the protections to bodily restraint and detention, Kania J. and Das J. take a broader view for it to include the right to sleep, play etc. Through &lt;i&gt;RC Cooper&lt;/i&gt;&lt;a href="#_ftn35" name="_ftnref35"&gt;&lt;sup&gt;&lt;sup&gt;[35]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and &lt;i&gt;Maneka&lt;/i&gt;&lt;a href="#_ftn36" name="_ftnref36"&gt;&lt;sup&gt;&lt;sup&gt;[36]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;, the Supreme Court took steps to reverse the majority opinion in &lt;i&gt;Gopalan&lt;/i&gt; and it was established that that the freedoms and rights in Part III could be addressed by more than one provision. The expansion of ‘personal liberty’ has began in &lt;i&gt;Kharak Singh&lt;/i&gt; where the unjustified interference with a person’s right to live in his house, was held to be violative of Article 21. The reasoning in &lt;i&gt;Kharak Singh&lt;/i&gt; draws heavily from&lt;i&gt; Munn&lt;/i&gt; v. 
&lt;i&gt;Illinois&lt;/i&gt;&lt;a href="#_ftn37" name="_ftnref37"&gt;&lt;sup&gt;&lt;sup&gt;[37]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; which held life to be “more than mere animal existence.” Curiously, after taking this position &lt;i&gt;Kharak Singh&lt;/i&gt; fails to recognise a fundamental right to privacy (analogous to the Fourth Amendment protection in US) under Article 21. The position taken in &lt;i&gt;Kharak Singh&lt;/i&gt; was to extrapolate the same method of wide interpretation of ‘personal liberty’ as was accorded to ‘life’. &lt;i&gt;Maneka&lt;/i&gt; which evolved the test for enumerated rights within Part III says that the claimed right must be an integral part of or of the the same nature as the named right. It says that the claimed must be ‘in reality and substance nothing but an instance of the exercise of the named fundamental right’. The clear reading of privacy into ‘personal liberty’ in this judgment is effectively a correction of the inherent inconsistencies in the positions taken by the majority in Kharak Singh.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The other significant change in constitutional interpretation that occurred in Maneka was with respect to the phrase ‘procedure established by law’ in Article 21. In Gopalan, the majority held that the phrase ‘procedure established by law’ does not mean procedural due process or natural justice. What this meant was that, once a ‘procedure’ was ‘established by law’, Article 21 could not be said to have been infringed. This position was entirely reversed in Maneka. The ratio in Maneka said that ‘procedure established by law’ must be fair, just and reasonable, and cannot be arbitrary and fanciful. Therefore, any infringement of the right to privacy must be through a law which follows the principles of natural justice, and is not arbitrary or unfair. It follows that any instances of automated processing for public functioning by state actors or others, must meet this standard of ‘fair, just and reasonable’.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While there is a lot of focus internationally on what ethical AI must be, it is important that when we consider use of AI by the state, we pay heed to the existing constitutional principles which determine how AI must be evaluated against these standards. These principles however extend only to limited circumstances for protections under Article 21 are not horizontal in nature but only applicable against the state. Whether a party is the state or not is a question that has been considered several times by the Supreme Court and must be determined by functional tests. In our submission of the Justice Srikrishna Committee, we clearly recommended that where automated decision making is used for discharging of public functions, the data protection law must state that such actions are subject the the constitutional standards and are ‘just, fair and reasonable’ and satisfy the tests for both procedural and substantive due process. To a limited extent, the committee seems to have picked up the standards of ‘fair’ and ‘reasonable’ and made it applicable to all forms of processing, whether public or private. It is as yet unclear whether fairness and reasonableness as inserted in the bill would draw from the constitutional standard under Article 21. The report makes a reference to the twin principles of acting in a manner that upholds the best interest of the privacy of the individual, and processing within the reasonable expectations of the individual, which do not seem to cover the fullest essence of the legal standard under Article 21.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Conclusion&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The Srikrishna Committee Bill attempts to create an accountability framework for the use of emerging technologies including AI that is focused on placing the responsibility on companies to prevent harm. Though not as robust as found in the GDPR, the protections have been enabled through requirements such as fair and reasonable processing, ensuring data quality, and implementing principles of privacy of design. At the sametime, the Srikrishna Bill does not include provisions that can begin to address the  consumer facing ‘black box’ of AI by ensuring that individuals have information about the potential impact of decisions taken by automated means. In contrast, the GDPR has already taken important steps to tackle this by requiring companies to explain the logic and potential impact of decisions taken by automated means.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Most importantly, the Bill gives the Data Protection Authority the necessary tools to hold companies accountable for the use of AI through the requirements of data protection audits. If enacted, it will have to be seen how these audits and the principle of privacy by design are implemented and enforced in the context of companies using  AI. Though the Bill creates a Data Protection Authority consisting of members that have significant experience in data protection, information technology, data management, data science, cyber and internet laws, and related subjects, these requirements can be further strengthened by having someone from a background of ethics and human rights.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One of the responsibilities of the DPA under the Srikrishna Bill will be to monitor technological developments and commercial practices that may affect protection of personal data and promote measures and undertake research for innovation in the field of protection of personal data. If enacted, we hope that AI and solutions towards enhancing privacy in the context of AI like described above will be one of these focus areas of the DPA. It will also be important to see how the DPA develops impact assessments related to AI and what tools associated with the principle of Privacy by Design emerge to address AI.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref1" name="_ftn1"&gt;&lt;sup&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://privacyinternational.org/topics/artificial-intelligence&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref2" name="_ftn2"&gt;&lt;sup&gt;&lt;sup&gt;[2]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.wired.com/story/our-machines-now-have-knowledge-well-never-understand/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref3" name="_ftn3"&gt;&lt;sup&gt;&lt;sup&gt;[3]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://iapp.org/news/a/ai-offers-opportunity-to-increase-privacy-for-users/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref4" name="_ftn4"&gt;&lt;sup&gt;&lt;sup&gt;[4]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://iapp.org/media/pdf/resource_center/GDPR_Study_Maldoff.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref5" name="_ftn5"&gt;&lt;sup&gt;&lt;sup&gt;[5]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://gdpr-info.eu/art-22-gdpr/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref6" name="_ftn6"&gt;&lt;sup&gt;&lt;sup&gt;[6]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://gdpr-info.eu/art-14-gdpr/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref7" name="_ftn7"&gt;&lt;sup&gt;&lt;sup&gt;[7]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref8" name="_ftn8"&gt;&lt;sup&gt;&lt;sup&gt;[8]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref9" name="_ftn9"&gt;&lt;sup&gt;&lt;sup&gt;[9]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://gdpr-info.eu/art-25-gdpr/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref10" name="_ftn10"&gt;&lt;sup&gt;&lt;sup&gt;[10]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref11" name="_ftn11"&gt;&lt;sup&gt;&lt;sup&gt;[11]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://gdpr-info.eu/art-21-gdpr/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref12" name="_ftn12"&gt;&lt;sup&gt;&lt;sup&gt;[12]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://gdpr-info.eu/art-22-gdpr/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref13" name="_ftn13"&gt;&lt;sup&gt;&lt;sup&gt;[13]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://gdpr-info.eu/art-14-gdpr/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref14" name="_ftn14"&gt;&lt;sup&gt;&lt;sup&gt;[14]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;Draft Data Protection Bill 2018 -  Chapter II section 9&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref15" name="_ftn15"&gt;&lt;sup&gt;&lt;sup&gt;[15]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Draft Data Protection Bill 2018 -  Chapter VII section 29&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref16" name="_ftn16"&gt;&lt;sup&gt;&lt;sup&gt;[16]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Draft Data Protection Bill 2018 -  Chapter VII section 33&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref17" name="_ftn17"&gt;&lt;sup&gt;&lt;sup&gt;[17]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Draft Data Protection Bill 2018 -  Chapter VII section 38&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref18" name="_ftn18"&gt;&lt;sup&gt;&lt;sup&gt;[18]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Draft Data Protection Bill 2018 -  Chapter VIII section 40&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref19" name="_ftn19"&gt;&lt;sup&gt;&lt;sup&gt;[19]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Draft Data Protection Bill 2018 -  Chapter X section 60&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref20" name="_ftn20"&gt;&lt;sup&gt;&lt;sup&gt;[20]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Draft Data Protection Bill 2018 -  Chapter II section 4&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref21" name="_ftn21"&gt;&lt;sup&gt;&lt;sup&gt;[21]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Draft Data Protection Bill 2018 - Chapter II section 5&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref22" name="_ftn22"&gt;&lt;sup&gt;&lt;sup&gt;[22]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Draft Data Protection Bill 2018 -  Chapter IX Section 45&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref23" name="_ftn23"&gt;&lt;sup&gt;&lt;sup&gt;[23]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Draft Data Protection Bill 2018 - Chapter XIV section 97&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref24" name="_ftn24"&gt;&lt;sup&gt;&lt;sup&gt;[24]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Draft Data Protection Bill 2018 - Chapter VII section 31&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref25" name="_ftn25"&gt;&lt;sup&gt;&lt;sup&gt;[25]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Srikrishna Committee Report on Data Protection pg. 36 and 37. Available at: http://www.prsindia.org/uploads/media/Data%20Protection/Committee%20Report%20on%20Draft%20Personal%20Data%20Protection%20Bill,%202018.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref26" name="_ftn26"&gt;&lt;sup&gt;&lt;sup&gt;[26]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.ciosummits.com/Online_Assets_DocAuthority_Whitepaper_-_Guide_to_Intelligent_GDPR_Compliance.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref27" name="_ftn27"&gt;&lt;sup&gt;&lt;sup&gt;[27]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://jolt.law.harvard.edu/assets/articlePDFs/v31/31HarvJLTech217.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref28" name="_ftn28"&gt;&lt;sup&gt;&lt;sup&gt;[28]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_personal_data_v2.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref29" name="_ftn29"&gt;&lt;sup&gt;&lt;sup&gt;[29]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref30" name="_ftn30"&gt;&lt;sup&gt;&lt;sup&gt;[30]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.artificial-intelligence.blog/news/capsule-networks&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref31" name="_ftn31"&gt;&lt;sup&gt;&lt;sup&gt;[31]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; http://raird.no/about/factsheet.html&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref32" name="_ftn32"&gt;&lt;sup&gt;&lt;sup&gt;[32]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.darpa.mil/attachments/XAIProgramUpdate.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref33" name="_ftn33"&gt;&lt;sup&gt;&lt;sup&gt;[33]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.darpa.mil/attachments/XAIProgramUpdate.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref34" name="_ftn34"&gt;&lt;sup&gt;&lt;sup&gt;[34]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref35" name="_ftn35"&gt;&lt;sup&gt;&lt;sup&gt;[35]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;R C Cooper&lt;/i&gt; v. &lt;i&gt;Union of India&lt;/i&gt;, 1970 SCR (3) 530.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref36" name="_ftn36"&gt;&lt;sup&gt;&lt;sup&gt;[36]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Maneka Gandhi&lt;/i&gt; v. &lt;i&gt;Union of India&lt;/i&gt;, 1978 SCR (2) 621.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref37" name="_ftn37"&gt;&lt;sup&gt;&lt;sup&gt;[37]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; 94 US 113 (1877).&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india'&gt;https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amber Sinha and Elonnai Hickok</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-09-03T13:29:12Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/jobs/cis-policy-officer-internet-governance">
    <title>Policy Officer - Internet Governance </title>
    <link>https://cis-india.org/jobs/cis-policy-officer-internet-governance</link>
    <description>
        &lt;b&gt;The Centre for Internet &amp; Society is seeking an individual with a background and interest in issues pertaining to IG including privacy, big data, FoE, AI etc. under its Internet Governance programme. &lt;/b&gt;
        &lt;p&gt;This position will include undertaking field research,  developing policy briefs, organizing conferences, and writing research  reports, engaging with key stakeholders, and collaborating with project  partners in areas under our research.&lt;/p&gt;
&lt;p dir="ltr"&gt;Note:  This position is for a duration of 1 year. There is currently one  vacancy for this post. Selected candidate will work from CIS office in  Bangalore.&lt;/p&gt;
&lt;h3&gt;Required Skill Sets&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Previous work and an interest in issues pertaining to IG including privacy, big data, FoE, and AI.&lt;/li&gt;
&lt;li&gt;Strong writing and analytical skills.&lt;/li&gt;
&lt;li&gt;Experience in conducting research.&lt;/li&gt;
&lt;li&gt;Knowledge of Indian law and policy relevant to the digital sphere.&lt;/li&gt;
&lt;li&gt;Demonstrable research skills and ability to undertake research independently.&lt;/li&gt;
&lt;li&gt;Strong communication skills.&lt;/li&gt;
&lt;li&gt;Ability to work independently or with minimal supervision.&lt;/li&gt;
&lt;/ol&gt; 
&lt;hr /&gt;
&lt;p&gt;&lt;b&gt;Compensation:&lt;/b&gt; Based on experience and education. &lt;br /&gt;&lt;b&gt;Application requirements:&lt;/b&gt; two writing samples and CV&lt;br /&gt;&lt;b&gt;Contact:&lt;/b&gt; &lt;a class="mail-link" href="mailto:swaraj@cis-india.org?subject=Policy Officer - Internet Governance"&gt;swaraj@cis-india.org&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/jobs/cis-policy-officer-internet-governance'&gt;https://cis-india.org/jobs/cis-policy-officer-internet-governance&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Jobs</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-09-03T06:58:25Z</dc:date>
   <dc:type>Page</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/events/symposium-on-india2019s-cyber-strategy">
    <title>Symposium on India’s Cyber Strategy</title>
    <link>https://cis-india.org/internet-governance/events/symposium-on-india2019s-cyber-strategy</link>
    <description>
        &lt;b&gt;CIS organised a Symposium on India’s Cyber Strategy.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The event saw a total of around 30 participants from industry, academia, law/policy, media, and civil society, and had a panel comprised of Asoke Mukerji, Madhulika Srikumar, and Parminder Jeet Singh.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Presentations&lt;/h3&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/cis-presentation-on-cyber-security"&gt;India’s Strategic Interests in the Norms Setting Process in Cyberspace&lt;/a&gt; (Presentation by Ambassador Asoke Kumar Mukerji, Former Permanent Representative of India to the United Nations)&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/arindrajit-presentation"&gt;The Potential for the Normative Regulation of Cyberspace&lt;/a&gt; (Presentation by Arindrajit Basu)&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/events/symposium-on-india2019s-cyber-strategy'&gt;https://cis-india.org/internet-governance/events/symposium-on-india2019s-cyber-strategy&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Cyber Security</dc:subject>
    
    
        <dc:subject>Event</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-10-02T06:02:59Z</dc:date>
   <dc:type>Event</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/world-library-and-information-congress-2018">
    <title>World Library and Information Congress 2018</title>
    <link>https://cis-india.org/internet-governance/news/world-library-and-information-congress-2018</link>
    <description>
        &lt;b&gt;Swaraj Paul Barooah was a speaker at two panels during the World Library and Information Congress 2018 (WLIC2018), organised by the International Federation of Library Associations and Institutions (IFLA) in Kuala Lumpur on August 26 and 27, 2018.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Swaraj's first panel, titled "Intellectual Freedom in a Polarised World", was selected as one of 9 sessions to be live-streamed and recorded, out of 249 sessions in total. The recording can be accessed on &lt;a class="external-link" href="https://www.youtube.com/watch?v=0HujFHQn1zY"&gt;YouTube&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Session 123 Intellectual Freedom in a Polarised World - Freedom of Access to Information and Freedom of Expression (FAIFE) Advisory Committee (SI)&lt;/b&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Chair: Martyn Wade, United Kingdom&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In many national contexts, citizens are seen to be either “with the government or against it,” leaving little opportunity to freely and safely express more nuanced views of current social, political or economic issues. While notable authoritarian regimes quite transparently monitor and limit societal discussion, others, ostensibly democratic, may work in practice to blunt potentially unfavourable social commentary on the pretence of defending political stability or public morality. IFLA’s Freedom of Access to Information and Freedom of Expression (FAIFE) Advisory Committee explores this phenomenon--and the potential role of civil society and information professionals in advancing freedom of expression--through the experience and insights of an NGO leader, an academic public intellectual, and an officer of UNESCO.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Presentations&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Internet and the freedom of expression in Indonesia: opportunity and challenges - Indriaswati Dyah Saptaningrum, University of New South Wales; former Executive Director of the ELSAM human rights organization (Indonesia), Australia&lt;/li&gt;
&lt;li&gt;Freedom of Expression in Malaysia - Azmi Bin Sharom, Faculty of Law, University of Malaysia, Malaysia&lt;/li&gt;
&lt;li&gt;What's up with WhatsApp - polarisation and lynchings in India - Swaraj Paul Barooah, The Centre for Internet and Society, India&lt;/li&gt;
&lt;li&gt;How to align national laws with international standards on freedom of expression? - Ming-Kuok Lim, Programme Specialist for Communication and Information, UNESCO, Indonesia&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;br /&gt;&lt;b&gt;Session 140 To Have and not to Hold: The End of Ownership - CLM and FAIFE&lt;/b&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The shift from buying physical library media to licensing digital content has profound impacts on the way libraries acquire and give access to content: from e-books that can disappear at the whim (or the mistake) of the owners of a server far away, to the limits on sharing and archiving imposed by some contracts; from the potential monitoring of reader behaviour, to the criminalisation of those who simply want to improve the user experience. The dominance of digital media in information provision has broadened the field of information to which we have access, but has potentially made it shallower in terms of the use that libraries, and their users, can make of it. The joint CLM-FAIFE session will look at the question of the end of ownership from a legal and an ethical point of view, drawing on the experience and knowledge of the two communities.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tomas A. Lipinski, School of Information Studies, University of Wisconsin, Milwaukee, USA – The Limits of Licensing.&lt;/li&gt;
&lt;li&gt;Ann Okerson, Centre for Research Libraries, Chicago, USA – The Possibilities of Licensing.&lt;/li&gt;
&lt;li&gt;Swaraj Paul Barooah, Centre for Internet and Society – The Balance among Licenses and Exceptions and Limitations to Copyright.&lt;/li&gt;
&lt;li&gt;Brent Roe - Laurentian University, Sudbury, Canada – Privacy Concerns and Other Side Effects of Licensing.&lt;/li&gt;
&lt;li&gt;Jonathan Hernandez-Perez, Researcher, Instituto de Investigaciones Bibliotecologicas, UNAM, Mexico City, Mexico (Invited) – Special Issues in the Developing World; Open Access as a Recapturing of Ownership.&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/world-library-and-information-congress-2018'&gt;https://cis-india.org/internet-governance/news/world-library-and-information-congress-2018&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2018-08-31T02:23:29Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/celebrating-one-year-of-the-justice-k-s-puttaswamy-v-union-of-india-judgment">
    <title>Celebrating One Year of the Justice K.S. Puttaswamy v. Union of India Judgment</title>
    <link>https://cis-india.org/internet-governance/news/celebrating-one-year-of-the-justice-k-s-puttaswamy-v-union-of-india-judgment</link>
    <description>
        &lt;b&gt;Shweta Mohandas was a panelist at the event, "Celebrating One Year of the Justice K.S. Puttaswamy v. Union of India Judgment", organised by Indian Council for Research on International Economic Relations, and the Centre for Communication Governance at National Law University Delhi. It took place on Friday, 24 August 2018 at India International Centre, New Delhi.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The event began with Dr. Usha Ramanathan's opening remarks on the State of Privacy in India &amp;amp; the Challenges to Realising Puttaswamy’s Promise. This was followed by two panel discussions: the first on Data Protection for a Free and Fair Digital Economy, and the second on the Legacy of the Justice K.S. Puttaswamy v. Union of India Judgment. Shweta participated in the second panel. More details of the event are available &lt;a class="external-link" href="https://ccgnludelhi.wordpress.com/2018/08/22/celebrating-one-year-of-the-puttaswamy-judgment-august-24-6-00-pm-iic/"&gt;here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/celebrating-one-year-of-the-justice-k-s-puttaswamy-v-union-of-india-judgment'&gt;https://cis-india.org/internet-governance/news/celebrating-one-year-of-the-justice-k-s-puttaswamy-v-union-of-india-judgment&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-08-30T02:53:48Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/hindustan-times-rachel-lopez-august-26-2018-20-years-of-google-privacy-fake-news-and-future">
    <title>20 years of Google: Privacy, fake news and the future</title>
    <link>https://cis-india.org/internet-governance/news/hindustan-times-rachel-lopez-august-26-2018-20-years-of-google-privacy-fake-news-and-future</link>
    <description>
        &lt;b&gt;Google once directed you to information. Today, it’s often the source of information, using data you and others have shared, often without you realising it. Public knowledge goes where Google takes it. And 20 years on, not everyone’s happy with the journey.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Rachel Lopez was published in &lt;a class="external-link" href="https://www.hindustantimes.com/india-news/20-years-of-google-privacy-fake-news-and-the-future/story-0jmwFxnhwz8lWFUCbMxBjM.html"&gt;Hindustan Times&lt;/a&gt; on August 26, 2018. Pranesh Prakash was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Happy Birthday, Google. The search engine is 20 this year, and what a ride it’s been! When Sergey Brin and Larry Page were developing software that &lt;a href="https://www.hindustantimes.com/india-news/20-years-of-google-when-information-was-not-just-a-click-away/story-aIDWzxXMQd10ShuhL62vcI.html" target="_blank"&gt;searched better and loaded faster&lt;/a&gt; than Explorer, Navigator and AltaVista, the web itself consisted of just 1 lakh websites.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Google’s mission statement was succinct: To organise the world’s information and make it universally accessible. Their corporate code of conduct was even simpler: Don’t be evil.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Perhaps even Google didn’t realise where its mission would take it. The following decade brought Google News, Gmail, Maps and Chrome. By 2014, the internet had grown to 1 billion websites. The search engine, their core product, had become the default homepage of the Internet.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In May this year, Google quietly dropped the ‘Don’t be evil’ tag. The same month, its Android operating system crossed 2 billion monthly active devices. &lt;a href="https://www.hindustantimes.com/india-news/20-years-of-google-there-s-something-for-everyone-here/story-eS5rDm76QFNgZIXwY3kGuM.html" target="_blank"&gt;Seven products (including YouTube and Google Play&lt;/a&gt;) now reach a combined 1 billion users.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Google once directed you to information. Today, it’s often the source of information (in ads and top-of-the-page blocks), using data you and others have shared, often without you realising it. Public knowledge goes where Google takes it. And 20 years on, not everyone’s happy with the &lt;a href="https://www.hindustantimes.com/india-news/20-years-of-google-the-journey-to-omnipresence/story-Ehr55MBGNOV0j3Jd9XhdyO.html" target="_blank"&gt;journey&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“The key concern is that Google has grown so big,” says Pranesh Prakash, policy director at Bangalore’s Centre for Internet &amp;amp; Society. “It’s like the classic line from [Spiderman’s] Uncle Ben: With great power comes great responsibility. In Google’s case, its great size is what brought great power to begin with.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;For billions of Google users, the biggest concerns are now of &lt;a href="https://www.hindustantimes.com/india-news/i-believe-the-most-exciting-moment-for-google-in-india-hasn-t-happened-yet-rajan-anandan/story-8goKIyIadDBKit0wyz7xYP.html" target="_blank"&gt;privacy and accountability&lt;/a&gt;, says Nikhil Pahwa, founder of Medianama, which analyses digital and telecom businesses. “There are few checks on Google’s ability to take, retain and process information from users,” he says.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Hits and misses&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;For Google, all is going according to plan. Its search engine is now smart enough to complete your sentences. It’s learning constantly from what you search for, watch, spend on, share and regret; it knows your commute and your vacation plans. And it’s profiting from this knowledge.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the UK, Google is being sued for bypassing iPhone privacy settings to track and collect data from 4.4 million users in 2011 and 2012. Information on race, physical and mental health, political leanings, sexuality, shopping habits and locations was apparently used to build advertising categories. Google also creates products for the US government, and has user data from around the world. “Any entity that has this much insight into us, and is in a position to use it, whether for the government or commercial gain, is cause for worry,” says Prakash. Most users aren’t worried, and that’s worrying too. We don’t realise how much data is being tracked or collected. The more we share, the more useful Google gets, and the greater its potential for misuse, for mapping, say, beef-eaters, online dissenters, LGBT supporters or single women who work late.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Internet’s other giant, Facebook, recently suspended 400 apps over privacy concerns, admitting that 87 million users may have had data compromised in 2016. Meanwhile, even non-Google apps are capable of hijacking data using software developed by Google. Weather apps look at your photo gallery, ride-sharing software keeps tracking you after the ride, games are checking out your texts as you play. Gmail knows your flight timings, how many steps you’ve walked, and your last bank transaction.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Search for tomorrow&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Perhaps the biggest concerns are with Google’s artificial intelligence technology, the brand’s great leap forward fuelled by its massive data reserves. The tech is already being criticised for being fed biased data, creating global services that mirror the prejudices of an insular, mostly white, mostly male, tech industry.&lt;br /&gt;&lt;br /&gt;Sara Wachter-Boettcher, author of Technically Wrong, which looks at how technology reflects sexism and the biases of the people that create it, says this creates problems. “Google develops tools that other tech companies rely on to build other products,” she says. So its biases spread to other products too. As machines learn, Google is starting to unlearn too.&lt;br /&gt;&lt;br /&gt;“Machine unlearning is basically recognising when a machine has learned something inaccurate, or biased, and then erasing that learning,” says Wachter-Boettcher. In Africa, the company (along with Facebook) now funds a Masters course in machine intelligence to improve the industry’s diversity. Last year, Google took its first steps to curb fake news hits on its search engines with tools that allow users to report misleading or offensive content.&lt;br /&gt;&lt;br /&gt;But perhaps it’s time to work towards a future in which Google will be monitored in real time, in different countries, rather than depending on the company to offer a fix after a misstep. Prakash believes that the way forward is reimagining an Internet where Google isn’t the first and last word on everything. “This doesn’t mean more companies like Google but searching that happens in a more decentralised way,” he says. “We need to save the web from large monopolies in the long run.”&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/hindustan-times-rachel-lopez-august-26-2018-20-years-of-google-privacy-fake-news-and-future'&gt;https://cis-india.org/internet-governance/news/hindustan-times-rachel-lopez-august-26-2018-20-years-of-google-privacy-fake-news-and-future&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-08-30T02:49:06Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/unescap-google-ai-meeting">
    <title>UNESCAP Google AI Meeting</title>
    <link>https://cis-india.org/internet-governance/news/unescap-google-ai-meeting</link>
    <description>
        &lt;b&gt;Arindrajit Basu was a panelist at an event on AI in public service delivery hosted by UNESCAP in Bangkok on August 29, 2018. The event was co-organized by the Economic and Social Commission for Asia and the Pacific and Google.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The discussion centred on two questions: (1) Is AI different from past technological advancements? and (2) What can policy-makers recommend to enhance AI in public service delivery? The other panelists were Dr. Urs Gasser (Berkman Klein Center), Vidushi Marda (Article 19), Malavika Jayaram (Digital Asia Hub), and Jake Lucchi (Google). The panel was a platform to discuss some of our findings from our case studies on healthcare and agriculture, which will receive comments and be published in November.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/unescap-google-ai-meeting'&gt;https://cis-india.org/internet-governance/news/unescap-google-ai-meeting&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-09-20T15:47:42Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
