<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">




    



<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  
  <description>
    
            These are the search results for the query, showing results 231 to 245.
        
  </description>
  
  
  
  
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/e2de2de01e41e1ae1ae23e30e1ae1ae02e49e2de21e39e25e1be23e30e0ae32e0ae19e14e34e08e34e17e31e25-e04e38e22e01e31e1ae1ce39e49e40e0ae35e48e22e27e0ae32e0de2be32e41e19e27e17e32e07e40e2be21e32e30e2ae21"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/bis-litd-17-meeting"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/the-wire-mira-swaminathan-and-shweta-reddy-july-20-2019-old-isnt-always-gold-face-app-and-its-privacy-policies"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/huffington-post-gopal-sathe-july-4-2019-fintech-apps-privacy-snooping-credit-vidya"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/didp-34-on-granular-detail-on-icanns-budget-for-policy-development-process"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/pibplans-a-fact-checking-unit-to-counter-fake-news"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/setting-the-agenda-a-behavioural-science-approach-to-data-privacy"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/digital-id-forum-2019"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gurshabad-grover-july-3-2019-impact-of-consolidation-in-the-internet-economy-on-the-evolution-of-the-internet"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/deccan-herald-june-30-2019-rajmohan-sudhakar-facebook-s-libra-a-bit-too-ambitious"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/about/newsletters/june-2019-newsletter"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/workshop-on-cyber-ethics-values-driven-innovative-solutions"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/icann-65"/>
        
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective">
    <title>What is the problem with ‘Ethical AI’? An Indian Perspective</title>
    <link>https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective</link>
    <description>
        &lt;b&gt;On 22 May 2019, the OECD member countries adopted the OECD Council Recommendation on Artificial Intelligence. The Principles, meant to provide an “ethical framework” for governing Artificial Intelligence (AI), were the first set of guidelines signed by multiple governments, including non-OECD members: Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Arindrajit Basu and Pranav M.B. was &lt;a class="external-link" href="https://cyberbrics.info/what-is-the-problem-with-ethical-ai-an-indian-perspective/"&gt;published by cyberBRICS&lt;/a&gt; on July 17, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;This was followed by the &lt;a href="https://g20trade-digital.go.jp/dl/Ministerial_Statement_on_Trade_and_Digital_Economy.pdf" rel="noreferrer noopener" target="_blank"&gt;G20 adopted human-centred AI Principles&lt;/a&gt; on June 9th. These are the latest in a slew of (&lt;a href="https://clinic.cyber.harvard.edu/2019/06/07/introducing-the-principled-artificial-intelligence-project/" rel="noreferrer noopener" target="_blank"&gt;at least 32!&lt;/a&gt;) public, and private ‘Ethical AI’ initiatives that seek to use ethics to guide the development, deployment and use of AI in a variety of use cases. They were conceived as a response to a range of concerns around algorithmic decision-making, including discrimination, privacy, and transparency in the decision-making process.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In India, a noteworthy recent document that attempts to address these concerns is the &lt;a href="https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf" rel="noreferrer noopener" target="_blank"&gt;National Strategy for Artificial Intelligence&lt;/a&gt; published by the National Institution for Transforming India, also called &lt;em&gt;NITI Aayog&lt;/em&gt;, in June 2018. As the NITI Aayog Discussion paper acknowledges, India is the fastest growing economy with the second largest population in the world and has a significant stake in understanding and taking advantage of the AI revolution. For these reasons the goal pursued by the strategy is to establish the National Program on AI, with a view to guiding the research and development in new and emerging technologies, while addressing questions on ethics, privacy and security.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While such initiatives and policy measures are critical to promulgating discourse and focussing awareness on the broad socio-economic impacts of AI, we fear that they are dangerously conflating tenets of existing legal principles and frameworks, such as human rights and constitutional law, with ethical principles – thereby diluting the scope of the former. While we agree that ethics and law can co-exist, ‘Ethical AI’ principles are often drafted in a manner that posits as voluntary positive obligations various actors have taken upon themselves as opposed to legal codes they necessarily have to comply with.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;To have optimal impact, ‘Ethical AI’ should serve as a decision-making framework only in specific instances when human rights and constitutional law do not provide a ready and available answer.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Vague and unactionable&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Conceptually, ‘Ethical AI’ is a vague set of principles that are often difficult to define objectively. In this perspective, academics like Brett Mittelstadt of the Oxford Internet Institute &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3391293" rel="noreferrer noopener" target="_blank"&gt;argues&lt;/a&gt; that unlike in the field of medicine – where ethics has been used to design a professional code, ethics in AI suffers from four core flaws. First, developers lack a common aim or fiduciary duty to a consumer, which in the case of medicine is the health and well-being of the patient. Their primary duty lies to the company or institution that pays their bills, which often prevents them from realizing the extent of the moral obligation they owe to the consumer.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second is a lack of professional history which can help clarify the contours of well-defined norms of ‘good behaviour.’ In medicine, ethical principles can be applied to specific contexts by considering what similarly placed medical practitioners did in analogous past scenarios. Given the relative nascent emergence of AI solutions, similar professional codes are yet to develop.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Third is the absence of workable methods or sustained discourse on how these principles may be translated into practice. Fourth, and we believe most importantly, in addition to ethical codes, medicine is governed by a robust and stringent legal framework and strict legal and accountability mechanisms, which are absent in the case of ‘Ethical AI’. This absence gives both developers and policy-makers large room for manoeuvre.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, such focus on ethics may be a means of avoiding government regulation and the arm of the law. Indeed, due to its inherent flexibility and non-binding nature, ethics can be exploited as a piecemeal red herring solution to the problems posed by AI. Controllers of AI development are often profit-driven private entities, that gain reputational mileage by using the opportunity to extensively deliberate on broad ethical notions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Under the guise of meaningful ‘self-regulation’, several organisations publish internal ‘Ethical AI’ guidelines and principles, and &lt;a href="https://www.newstatesman.com/science-tech/technology/2019/06/how-big-tech-funds-debate-ai-ethics"&gt;fund ethics research&lt;/a&gt; across the globe. In doing so, they occlude the shackles of binding obligation and deflect from attempts at tangible regulation.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Comparing Law to Ethics&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;This is in contrast to the well-defined jurisprudence that human rights and constitutional law offer, which should serve as the edifice of data-driven decision making in any context.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the table below, we try to explain this point by looking at how three core fundamental rights enshrined both in our constitution and human rights instruments across the globe-right to privacy, right to equality/right against discrimination and due process-find themselves captured in three different sets of ‘Ethical AI frameworks.’ One of these inter-governmental &lt;a href="https://www.oecd.org/going-digital/ai/principles/" rel="noreferrer noopener" target="_blank"&gt;(OECD)&lt;/a&gt;, one devised by a private sector actor (‘&lt;a href="https://ai.google/principles/" rel="noreferrer noopener" target="_blank"&gt;Google AI&lt;/a&gt;’) and one by our very own, &lt;a href="https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf" rel="noreferrer noopener" target="_blank"&gt;NITI AAYOG.&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;img src="https://cyberbrics.info/wp-content/uploads/2019/07/image.png" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;With the exception of certain principles,most ‘Ethical AI’ principles are loosely worded as ‘‘seek to avoid’, ‘give opportunity for’, or ‘encourage’. A notable exception is the NITI AAYOG’s approach to protecting privacy in the context of AI. The document explicitly recommends the establishment of a national data protection framework for data protection, sectoral regulations that apply to specific contexts with the consideration of international standards such as GDPR as benchmarks. However, it fails to reference available constitutional standards when it discusses bias or explainability.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Several similar legal rules that have been enshrined in legal provisions -outlined and elucidated through years of case law and academic discourse – can be utilised to underscore and guide AI principles. However, existing AI principles do not adequately articulate how the legal rule can actually be applied to various scenarios by multiple organisations.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We do not need a new “Law of Artificial Intelligence” to regulate this space. Judge Frank Easterbrook’s famous 1996 proclamation on the &lt;a href="https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?referer=&amp;amp;httpsredir=1&amp;amp;article=2147&amp;amp;context=journal_articles"&gt;‘Law of the Horse’&lt;/a&gt; through which he opposed the creation of a niche field of ‘cyberspace law’ comes to mind. He argued that a multitude of legal rules deal with ‘horses’, including the sale of horses, individuals kicked by horses, and with the licensing and racing of horses. Like with cyberspace, any attempt to arrive at a corpus of specialised ‘law of the horse’ would be shallow and ineffective.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Instead of fidgeting around for the next shiny regulatory tool, industry, practitioners, civil society and policy makers need to get back to the drawing board and think about applying the rich corpus of existing jurisprudence to AI governance.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;What is the role for ‘Ethical AI?’&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;What role can ‘ethical AI’ then play in forging robust and equitable governance of Artificial Intelligence? As it does in all other societal avenues, ‘ethical AI’ should serve as a framework for making legitimate algorithmic decisions in instances where law might not have an answer. An example of such a scenario is the &lt;a href="https://globalnews.ca/news/4125382/google-pentagon-ai-project-maven/" rel="noreferrer noopener" target="_blank"&gt;Project Maven saga&lt;/a&gt; – where 3,000 Google employees signed a petition opposing Google’s involvement with a US Department of Defense project by claiming that Google should not be involved in “the business of war.” There is no law-international or domestic that suggests that Project Maven-which was designed to study battlefield imagery using AI, was illegal. However, the debate at Google proceeded on ethical grounds and on the application of the ‘Ethical AI’ principles to this present context.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We realise the importance of social norms and mores in carving out any regulatory space. We also appreciate the role of ethics in framing these norms for responsible behaviour. However, discourse across civil society, academic, industry and government circles all across the globe needs to bring law back into the discussion as a framing device. Not doing so risks diluting the debate and potential progress to a set of broad, unactionable principles that can easily be manipulated for private gain at the cost of public welfare.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective'&gt;https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Arindrajit Basu and Pranav M.B.</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-21T14:57:08Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/e2de2de01e41e1ae1ae23e30e1ae1ae02e49e2de21e39e25e1be23e30e0ae32e0ae19e14e34e08e34e17e31e25-e04e38e22e01e31e1ae1ce39e49e40e0ae35e48e22e27e0ae32e0de2be32e41e19e27e17e32e07e40e2be21e32e30e2ae21">
    <title>Designing a digital citizen data system: Talking with an expert to find the right approach</title>
    <link>https://cis-india.org/internet-governance/news/e2de2de01e41e1ae1ae23e30e1ae1ae02e49e2de21e39e25e1be23e30e0ae32e0ae19e14e34e08e34e17e31e25-e04e38e22e01e31e1ae1ce39e49e40e0ae35e48e22e27e0ae32e0de2be32e41e19e27e17e32e07e40e2be21e32e30e2ae21</link>
    <description>
        &lt;b&gt;A conversation with Sunil Abraham, an expert on the internet and good governance, on how to build a digital identification system.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;What should you think about before building a national citizen database? Transparency should be inversely proportional to a person’s power. The state must give information as well, not only collect it. Database technology and citizen surveillance are not the same thing; conflating them can bring the whole system down. And how important is democracy to building good data systems? &lt;span&gt;Read the interview &lt;/span&gt;&lt;a class="external-link" href="https://prachatai.com/journal/2019/07/83472"&gt;published by Prachatai&lt;/a&gt;&lt;span&gt; on July 18, 2019 below.&lt;/span&gt;&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;A conversation with Sunil Abraham, an expert on the internet and good governance, on building a digital citizen identity system: what to think about before building a national citizen database; why transparency should be inversely proportional to a person’s power; why the state must give information, not only collect it; why database technology and citizen surveillance are not the same thing, and why conflating them can bring down the whole system; and what democracy has to do with building a good data system.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One conversation happening today is about moving citizen data into digital systems. Data management technologies, from digital databases to blockchain-style online networks, have made that vision concrete.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Step back to the bigger picture, though, and the technical questions are a single grain of sand on the beach. There are many further considerations: the law, the readiness of those who enforce it, the behaviour of business and citizens, social norms, and the crucial question of whether such a system will be used for surveillance and spying on citizens. After all, China, an authoritarian state that many Thais admire, already uses citizens’ identity data to police behaviour through a social scoring system.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This has not yet happened in Thailand, but that does not mean it cannot. The worry of residents of the 14 districts and three southern border provinces who received SMS messages from the Internal Security Operations Command (ISOC) telling them to scan their faces to register SIM cards under an NBTC order reflects an area where security concerns are visibly woven into daily life. Building such a system rests on the big questions of ‘how’ and ‘what for’, if security is to become part of the daily routine of the entire country.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Sunil Abraham, executive director of the Centre for Internet and Society in India, spoke to Prachatai about what a digital identity system should look like, what must be considered and asked repeatedly when designing one, how far crime surveillance and security measures can go, and what democracy has to do with having a good digital citizen data system.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Prachatai: What is a digital citizen data system?&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Sunil:&lt;/strong&gt; Traditionally, an ID card is a physical object, mostly paper, and it raises serious security concerns, because both the state and private companies use the card to make photocopies. I hear that in the Thai context this is considered normal too. What you really want is a way for the public and private sectors to verify your identity without collecting too much data from you.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Ideally, a good identity system should let you confirm details such as your address, your age, or your wealth status without collecting other unnecessary data, including your ID number. Even your national ID number should not be stored by other organisations without necessity.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Today we have two options. There are examples in Canada, the United Kingdom, and even in Thailand’s ongoing National Digital ID (NDID) project. You can think of the identity problem as an ecosystem, in which the actors in that ecosystem verify identity details and collect individuals’ data through good consent management.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(But) many countries instead have an agency build a single, centralised national citizen database, which then becomes the single point of failure (SPOF) of the country’s system. So this is the big choice each country faces: manage identity as an ecosystem that thinks about everything holistically, or treat the country like a company or a university and apply to the whole nation whatever works for a company or a university.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;How do the drawbacks of each approach differ?&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;In computer science and computer engineering, every expert will tell you that no system is unhackable. But there is a large difference between these two options. In the ecosystem model there is no single point of failure, and breaching the system costs more than breaching a centralised one; even recovering and restoring lost data is cheaper. In a centralised system, everyone is affected when there is a breach, and attacking a single point of failure usually costs less.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;What is the global trend in digital citizen data?&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;The big global trend is that some companies sell biometric technology (the use of biological data, such as physical features like the iris, fingerprints, DNA, or the face, to verify entitlements or identity) that is fundamentally remote-operated and does not require the data subject’s consent. When a face or iris is scanned for identification, the subject may not even know: the operator may scan from a distance with a high-resolution camera, and identity data can be captured while the subject is asleep or unconscious.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Biometrics is very good surveillance technology when a government wants to fight crime or enforce the law. Surveillance technology, however, is not good identity technology. Unfortunately, the big companies that sell surveillance systems have travelled the world telling governments that they can solve their identity and security problems at the same time with surveillance technology, which is not true.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;If you use surveillance technology to build a citizen data system, you create yet more security risk, because you create what is called a ‘honeypot’: a single place holding everyone’s fingerprints, faces, or irises. If that system then fails at a single point, the risk is like that of an internet service storing everyone’s passwords on a single server.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Biometric technology should be used in a decentralised system. You can collect biometric data from citizens, but it should be stored on the chip of each person’s smartcard, like the iPhone’s face scanner, which has no server storing face data and instead uses space on the user’s own phone to store it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some countries have smartcards with a fingerprint reader built into the card itself. All you have to do is insert the smartcard into a reader and then place your finger on the smartcard, without pressing it against any other state or private device. That is the right way to use biometrics, because it is biometrics that does not rest on surveillance.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;So most state citizen data systems are built on a surveillance mindset?&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Exactly. National security and surveillance are treated as top priorities, and that is not how to design a citizen identity database for e-governance. Surveillance matters greatly to society, but it is like salt in food: you cannot eat food without a little salt in it, and you cannot have a safe country without surveillance. But the moment you decide to put five teaspoons of salt into the dish, the food becomes poisonous. Surveillance is the same: necessary in small amounts, but it backfires when there is too much.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;What would the best approach look like?&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Use open source and open standards, because then the system can be verified and audited. If you cannot audit or verify it, you do not know how it works. Next, the data requested and passed around the ecosystem when a transaction takes place must be kept to a minimum.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another necessity is a human in the loop (human involvement and interaction within the system): at every step you should know which state official or private employee is responsible, and if something goes wrong you should be able to hold someone accountable.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Accountability can be broken down: first, a visible responsible person, so the press can point a finger and say this person is responsible for that failure; second, someone who pays the fine, which matters for the private sector; and finally, someone who goes to jail if something grave happens, such as someone’s human rights being harmed. So when you design an identity database, you must ask ‘who is the human in the loop?’ That is the key to the design.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The next principle of a good citizen data system is decentralisation: there should be no single large point of failure, and ecosystem-style data management is better than centralisation. The system should also withstand and recover from the worst events. While you design it, keep asking: what if the system is hacked? What if criminals make use of it? You must consider the worst possibilities and design the system to cope with them.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Looking at society and ordinary people, what are your concerns?&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;The main problem now is that the thinking around digital citizen data insists that citizens be transparent to the state. They (the state) want citizens to hand over all their data, but the state gives no data to citizens. In a good citizen data system, the state should be transparent to its citizens.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Let me give an example. Suppose I am a corrupt official. I write in the record that you came to see me at my office today, that this is the ID number of the person applying for a loan, or that you received a subsidy of 2,000 baht. I can then put the 2,000 baht in my own pocket with no way for anyone to prove it, and you cannot deny it either, because your ID number is in the state’s record. Used well, though, we would have a smartcard reader into which the citizen inserts a card and enters a PIN; after the citizen withdraws the card, the official inserts his own smartcard and enters his PIN. That creates an electronic record signed by both the state official and the citizen, and no one can then deny that the meeting really took place. In a good identity system, both sides must identify themselves; in a bad one, only the state official demands proof of identity, and you have no record of what happened.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Privacy and protection should be inversely proportional to power, while transparency and oversight should be directly proportional to power. The powerful and the rich must be more transparent than others and have less privacy; the powerless and the vulnerable should have more privacy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;If you look at open data policies or freedom of information laws, you will find that personal data is an exception in those laws: only non-personal state data can be shared as open data. If you then look at privacy law, you will find an exception for the public interest. That means that if you are a senior official or an important politician, what you discuss in your bedroom may matter to the whole country, and so you have no privacy for such secret conversations.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Privacy is the exception, and the public interest is the exception to the exception. Suppose the prime minister has a serious health problem that makes him or her unfit to remain in office. That personal data is the exception to the exception: if knowing that the prime minister is gravely ill serves the public interest, it should be disclosed. Applying a public interest test would help manage the balance of power between these two bodies of law.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;To help the poor, you should have good transparency laws and open data policies; to protect the poor and the vulnerable, you must have privacy law. And if you have a public interest test in both bodies of law, those laws cannot be used to exploit the poor.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;How can security coexist with freedom and privacy?&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Digital identity law must be aligned with privacy law and open data policy. The problem is that privacy law is still very new in this region. Thailand has only just passed its Personal Data Protection Act, which India still lacks at the national level, so there is work left to do. Courts must work out definitions, regulators must issue very specific guidance, industry must develop self-regulation and best practices, and civil society must help the other sectors by asking hard questions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It takes time. Europe’s road to data protection law took as long as 35 years, which is why Europe has better protection. In our region it will also take 35 years of struggle, so civil society must prepare for a 35-year fight, and after that our children or grandchildren will see a safer citizen data ecosystem.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;What should governments do?&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Passing a law alone is not enough, and passing the law is all that (the government) has done in Thailand. Now you must create an independent commission with a budget large enough to hire the best engineers and lawyers, and the commission should begin enforcing the rules gradually. The courts, too, should build expertise: judges must learn what is happening in other countries, and the legal system must prepare for new concerns.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;How much can technology help?&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Technology is only part of the solution. You still have to worry about law and social norms, about what ordinary people do. If everyone is still happy handing over photocopies of their ID cards, you have to change that. Governments have plenty of experience in raising social norms.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Governments must use the power they have to change practice. Privacy protection is like smoking: most smokers already know that smoking causes cancer and other problems, but they keep smoking until the doctor tells them they have cancer. Governments must make citizens afraid of what could happen, so that people stop neglecting their personal data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The last part is the market. Corporations must start to innovate; a bank, for instance, should be able to say that its system is better than others’ because it does not use biometrics. The law must make corporations compete on security and privacy. When we see norms, law, technology, and technological competition together, that is the day we will begin to see a way out. That is why I say it will take 30 to 40 years, or even longer.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Does a good digital identity system have anything to do with whether a country is democratic?&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;People ask many hard questions in a democracy, and that is useful. But what we really need is constitutional democracy, because you cannot go around asking everyone for a consensus on a technical question.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;You need a great deal of transparent public debate, but you cannot decide by vote. Asking ‘how many people want fingerprint scanning, how many want face scanning’ is not how to design a digital identity system. It must rest on certain constitutional principles, such as legality, necessity, and proportionality. After that you need an approach proposed by engineers and lawyers, and then discussion and debate.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;You cannot decide by majority just because most people say face scanning feels very easy, and you cannot apply face scanning to everything just because it conveniently unlocks an iPhone, because tomorrow the same technology may be used to break up a protest. Even if everyone in your democracy loves face scanning, the constitution must still be able to reject it and say that it is unnecessary and disproportionate, and that it should be banned or used only for specific purposes.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;ช่วยอธิบายว่าทำไมการเฝ้าระวังอาจเป็นการทำให้คนหลบเข้าไปอยู่ในมุมมืดมากขึ้น&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;มันเป็นผลที่เกิดขึ้นโดยไม่ตั้งใจ อย่างถ้าคุณไปบล็อกเนื้อหาที่คนชอบมากๆ คนก็อาจจะหันไปใช้ TOR หรือ VPN (วิธีการเข้าถึงเนื้อหาที่ถูกบล็อก) ซึ่งนั่นไม่ใช่ความตั้งใจของคุณ ถ้าคุณไม่พัฒนาระบบข้อมูลประชาชนที่ดี ประชาชนก็จะเริ่มทำตัวเหมือนอาชญากร แต่พวกเขาไม่ใช่อาชญากร เพียงแค่เขาไม่ชอบการออกแบบระบบเท่านั้น คุณไม่สามารถบังคับให้คนทำพฤติกรรมแบบนั้นหรือแบบนี้ได้ ดังนั้นการเป็นประชาธิปไตยจึงสำคัญ ในระหว่างที่คุณพัฒนาเทคโนโลยีคุณก็ควรถามพวกเขา (ผู้ใช้) ไปด้วยว่ามันใช้ได้หรือไม่ ทำให้เกิดการอภิปรายขึ้น&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/e2de2de01e41e1ae1ae23e30e1ae1ae02e49e2de21e39e25e1be23e30e0ae32e0ae19e14e34e08e34e17e31e25-e04e38e22e01e31e1ae1ce39e49e40e0ae35e48e22e27e0ae32e0de2be32e41e19e27e17e32e07e40e2be21e32e30e2ae21'&gt;https://cis-india.org/internet-governance/news/e2de2de01e41e1ae1ae23e30e1ae1ae02e49e2de21e39e25e1be23e30e0ae32e0ae19e14e34e08e34e17e31e25-e04e38e22e01e31e1ae1ce39e49e40e0ae35e48e22e27e0ae32e0de2be32e41e19e27e17e32e07e40e2be21e32e30e2ae21&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-07-21T14:32:25Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/bis-litd-17-meeting">
    <title>BIS LITD 17 meeting</title>
    <link>https://cis-india.org/internet-governance/news/bis-litd-17-meeting</link>
    <description>
        &lt;b&gt;On July 3, 2019, Gurshabad Grover attended the sixteenth meeting of the Information Systems Security and Biometrics Section Committee (LITD17) at the Bureau of Indian Standards (BIS) in New Delhi.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;In a previous meeting, a panel was formed to review two biometric standards: ISO/ IEC 24745 'Security Techniques - Biometric Information Protection' (2011), and ISO/IEC 19792 'Security techniques - Security evaluation of biometrics' (2009). Elonnai Hickok, Karan Saini and Gurshabad Grover had reviewed the documents and sent comments to BIS in December 2018 and January 2019 respectively. The Centre for Internet &amp;amp; Society (CIS) had also shared a document that compared the security guidelines in the standards to the provisions of the draft data protection bill.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The committee discussed whether the aforementioned standards should be adopted as Indian standards by BIS. A decision will be taken on the matter after future discussions that CIS will participate in.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Members updated the committee on their participation at the ISO/IEC. Iupdated the committee on the progress of the study period on the impact of machine learning on privacy, which I am a co-rapporteur for in the identity management and privacy group working group at ISO/IEC IT Security Techniques committee. We also planned our participation at the next ISO/IEC SC 27 meeting, which is in October.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/bis-litd-17-meeting'&gt;https://cis-india.org/internet-governance/news/bis-litd-17-meeting&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2019-07-21T13:58:29Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/the-wire-mira-swaminathan-and-shweta-reddy-july-20-2019-old-isnt-always-gold-face-app-and-its-privacy-policies">
    <title>Old Isn't Always Gold: FaceApp and Its Privacy Policies</title>
    <link>https://cis-india.org/internet-governance/blog/the-wire-mira-swaminathan-and-shweta-reddy-july-20-2019-old-isnt-always-gold-face-app-and-its-privacy-policies</link>
    <description>
        &lt;b&gt;Leaving aside the Red Scare for a moment, FaceApp's own rebuttal of privacy worries is itself highly problematic.&lt;/b&gt;
        
&lt;p style="text-align: justify;"&gt;The article by Mira Swaminathan and Shweta Reddy was published in &lt;a class="external-link" href="https://thewire.in/tech/old-isnt-always-gold-faceapp-privacy-data-policies"&gt;the Wire&lt;/a&gt; on July 20, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify;" /&gt;
&lt;p style="text-align: justify;"&gt;If you, much like a large number of celebrities, have spammed your followers with the images of ‘how you may look in your old age’,&amp;nbsp;&lt;a href="https://yourstory.com/2019/07/faceapp-photo-filter-virat-kohli-arjun-kapoor-jonas-brothers"&gt;you have successfully been a part of the FaceApp fad &lt;/a&gt;that has gone viral this week.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;The problem with the FaceApp trend isn’t that it has penetrated most social circles, but rather, the fact that it has gone viral with minimal scrutiny&amp;nbsp;&lt;a href="https://www.huffingtonpost.in/entry/faceapp-privacy-issues_n_5d2f3ba7e4b02fd71dde0bc2"&gt;of its vaguely worded privacy policy guidelines.&lt;/a&gt; We click ‘I agree’ without understanding that our so called ‘explicit consent’ gives the app permission to use our likeness, name and username, for any purpose, without our knowledge and consent,&amp;nbsp;&lt;a href="https://edition.cnn.com/2019/07/17/tech/faceapp-privacy-concerns/index.html"&gt;even after we delete the app&lt;/a&gt;. FaceApp&amp;nbsp;&lt;a href="https://www.hindustantimes.com/tech/faceapp-is-trending-again-all-you-need-to-know-about-the-viral-ai-photo-editing-app/story-5VQurpSMSogKwiqX03GbNK.html"&gt;is currently the most downloaded free app on the Apple Store&lt;/a&gt; due to a large number of people downloading the app to ‘turn their old selfies grey’.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;There are many things that the app could do. It could process the images on your device,&amp;nbsp;&lt;a href="https://www.forbes.com/sites/thomasbrewster/2019/07/17/faceapp-is-the-russian-face-aging-app-a-danger-to-your-privacy/#3a8cbcb32755"&gt;rather than take submitted photos to an outside server&lt;/a&gt;.&amp;nbsp; It could also upload your photos to the cloud without making it clear to you that processing is not taking place locally on their device.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Further, if you have an Apple product, the iOS app appears to be overriding your settings even if you have denied access to their camera roll. People have reported that they could still select and upload a photo despite the app not having permission to access their photos.&amp;nbsp;&lt;a href="https://techcrunch.com/2019/07/16/ai-photo-editor-faceapp-goes-viral-again-on-ios-raises-questions-about-photo-library-access-and-clo/"&gt;This ‘allowed behaviour’ in iOS&lt;/a&gt; is quite concerning, especially when we have apps with loosely worded terms and conditions.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;FaceApp responded&amp;nbsp;&lt;a href="https://techcrunch.com/2019/07/17/faceapp-responds-to-privacy-concerns/"&gt;to these privacy concerns by issuing a statement with a list of defences.&lt;/a&gt; The statement clarified that FaceApp performs most of the photo processing in the cloud, that they only upload a photo selected by a user for editing and also confirmed that they never transfer any other images from the phone to the cloud. However, even in their clarificatory statement, they stated that they ‘might’ store an uploaded photo in the cloud and explained that the main reason for that is “performance and traffic”. They also stated that ‘most’ images are deleted from their servers within 48 hours from the upload date.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Further, the statement ends by saying that “all pictures from the gallery are uploaded to our servers after a user grants access to the photos”. This is highly problematic.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;We have explained the concerns arising out of the privacy policy with reference to the global gold standards: the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, APEC Privacy Framework, Report of the Group of Experts on Privacy chaired by Justice A.P. Shah and the General Data Protection Regulation in the table below:&lt;/p&gt;
&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Privacy Domain&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.oecd.org/internet/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm"&gt;OECD Guidelines &lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.apec.org/Publications/2005/12/APEC-Privacy-Framework"&gt;APEC Privacy Framework &lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf"&gt;Report of the Group of Experts on Privacy&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1528874672298&amp;amp;uri=CELEX%3A32016R0679"&gt;General Data Protection Regulation&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://faceapp.com/privacy"&gt;FaceApp Privacy Policy&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Transparency&lt;/td&gt;
&lt;td&gt;There should be a general policy of openness about developments, practices and policies with respect to personal data.&lt;/td&gt;
&lt;td&gt;Personal information controllers should provide clear and easily accessible statements about their practices and policies with respect to personal data.&lt;/td&gt;
&lt;td&gt;A data controller shall give simple-to-understand notice of its information practices to all individuals, in clear and concise language, before any personal information is collected from them.&lt;/td&gt;
&lt;td&gt;Transparency:
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The controller shall take appropriate measures to provide information relating to processing to the data subject in a concise, transparent, intelligible and easily accessible form, using clear and plain language.&lt;/p&gt;
&lt;p&gt;Article 29 working party guidelines on Transparency:&lt;/p&gt;
&lt;p&gt;The information should be concrete and definitive, it should not be phrased in abstract or ambivalent terms or leave room for different interpretations.&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;p&gt;“We may use your personal data to develop new services” (as it is unclear what the services are or how the data will help develop them);&lt;/p&gt;
&lt;/td&gt;
&lt;td&gt;Information we collect
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;“When you visit the Service, we may use cookies and similar technologies”……. provide features to you.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;We may ask advertisers or other partners to serve ads or services to your devices, which may use cookies or similar technologies placed by us or the third party.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;“We may also collect similar information from emails sent to our Users..”&lt;/p&gt;
&lt;p&gt;Sharing your information&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;“We may share User Content and your information with businesses…”&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;“We also may share your information as well as information from tools like cookies, log files..”&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;“We may also combine your information with other information..”&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: justify;" colspan="6"&gt;A simple reading of the guidelines in comparison with the privacy policy of FaceApp can help us understand that the terms used by the latter are ambiguous and vague. The possibility of a ‘may not’ can have a huge impact on the privacy concerns of the user.
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;The entire point of ‘transparency’ in a privacy policy is for the user to understand the extent of processing undertaken by the organisation and then have the choice to provide consent. Vague phrases do not adequately provide a clear indication of the extent of processing of personal data of the individual.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Privacy Domain&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.oecd.org/internet/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm"&gt;OECD Guidelines &lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.apec.org/Publications/2005/12/APEC-Privacy-Framework"&gt;APEC Privacy Framework &lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf"&gt;Report of the Group of Experts on Privacy&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1528874672298&amp;amp;uri=CELEX%3A32016R0679"&gt;General Data Protection Regulation&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://faceapp.com/privacy"&gt;FaceApp Privacy Policy&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security Safeguards&lt;/td&gt;
&lt;td&gt;Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorised access, destruction, use, modification or disclosure of data&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Personal information controllers should protect personal information that they hold with appropriate safeguards against risks, such as loss or unauthorised access to personal information or unauthorised destruction, use, modification or disclosure of information or other misuses.&lt;/td&gt;
&lt;td style="text-align: justify;"&gt;A data controller shall secure personal information that they have either collected or have in their custody by reasonable security safeguards against loss, unauthorised access, destruction, use, processing, storage, modification, deanonymization, unauthorised disclosure or other reasonably foreseeable risks&lt;/td&gt;
&lt;td&gt;The controller and processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk.&lt;/td&gt;
&lt;td&gt;How we store your information
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;“We use commercially reasonable safeguards to help keep the information collected through the Service secure and take reasonable steps… However, FaceApp cannot ensure the security of any information you transmit to FaceApp or guarantee that information on the Service may not be accessed, disclosed, altered, or destroyed.”&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p style="text-align: justify;"&gt;The obligation of implementing reasonable security measures to prevent unauthorised access and misuse of personal data is placed on the organisations processing such data. FaceApp’s privacy policy assures that reasonable security measures according to commercially accepted standards have been implemented. Despite such assurances, FaceApp’s waiver of the liability by stating that it cannot ensure the security of the information against it being accessed, disclosed, altered or destroyed itself says that the policy is faltered in nature.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;The privacy concerns and the issue of transparency (or the lack thereof) in FaceApp are not isolated. After all, as a&amp;nbsp;&lt;a href="https://www.buzzfeednews.com/article/daveyalba/what-happens-when-you-upload-faceapp-photos" rel="noopener" target="_blank"&gt;&lt;em&gt;Buzzfeed&lt;/em&gt; analysis of the app noted&lt;/a&gt;, while there appeared to be no data going back to Russia, this could change at any time due to its overly broad privacy policy.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;The business model of most mobile applications being developed currently relies heavily on personal data collection of the user. The users’ awareness regarding the type of information accessed based on the permissions granted to the mobile application is questionable.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;In May 2018,&amp;nbsp;&lt;a href="https://www.symantec.com/blogs/threat-intelligence/mobile-privacy-apps"&gt;Symantec tested&lt;/a&gt; the top 100 free Android and iOS apps with the primary aim of identifying cases where the apps were requesting ‘excessive’ access to information of the user in relation to the functions being performed. The study identified that 89% of Android apps and 39% of the iOS app request for what can be classified as ‘risky’ permissions, which the study defines as permissions where the app requests data or resources which involve the user’s private information, or, could potentially affect the user’s locally stored data or the operation of other apps.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Requesting risky permissions may not on its own be objectionable, provided clear and transparent information regarding the processing, which takes place upon granting permission, is provided to the individuals in the form of a clear and concise privacy notice. The study concluded that 4% of the Android apps and 3% of the iOS apps seeking risky permissions didn’t even have a privacy policy.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;The lack of clarity with respect to potentially sensitive user data being siphoned off by mobile applications became even more apparent with the case of a&amp;nbsp;&lt;a href="https://www.huffingtonpost.in/entry/fintech-apps-privacy-snooping-credit-vidya_in_5d1cbc34e4b082e55373370a?guccounter=1"&gt;Hyderabad based fintech company&lt;/a&gt; that gained access to sensitive user data by embedding a backdoor inside popular apps.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;In the case of the Hyderabad-based fintech company, the user data which was affected included GPS locations, business SMS text messages from e-commerce websites and banks, personal contacts, etc. This data was used to power the company’s self-learning algorithms which helped organisations determine the creditworthiness of loan applicants. It is pertinent to note that even when apps have privacy policies,&amp;nbsp;&lt;a href="http://snip.ly/2dfaj0#http://www.cuts-ccier.org/cdpp/pdf/survey_analysis-dataprivacy.pdf"&gt;users can still find it difficult to navigate&lt;/a&gt; through the long content-heavy documents.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;The&amp;nbsp;&lt;em&gt;New York Times&lt;/em&gt;, as part of its&amp;nbsp;&lt;a href="https://www.nytimes.com/interactive/2019/06/12/opinion/facebook-google-privacy-policies.html"&gt;Privacy Project&lt;/a&gt;,&amp;nbsp;analysed the length and readability of privacy policies of around 150 popular websites and apps. It was concluded that the vast majority of the privacy policies that were analysed exceeded the college reading level. Usage of vague language like “adequate performance” and “legitimate interest” and wide interpretation of such phrases allows organisations to use data in extensive ways while providing limited clarity on the processing activity to the individuals.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;The Data Protection Authorities operating under the General Data Protection Regulation are paying close attention to openness and transparency of processing activities by organisations.&amp;nbsp;&lt;a href="https://www.cnil.fr/en/cnils-restricted-committee-imposes-financial-penalty-50-million-euros-against-google-llc"&gt;The French Data Protection Authority&lt;/a&gt; fined Google for violating their obligations of transparency and information. The UK’s Information Commissioner’s office issued an&amp;nbsp;&lt;a href="https://ico.org.uk/media/action-weve-taken/enforcement-notices/2260123/aggregate-iq-en-20181024.pdf"&gt;enforcement notice&lt;/a&gt; to a Canadian data analytics firm for failing to provide information in a transparent manner to the data subject.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Thus, in the age of digital transformation, the unwelcome panic caused by FaceApp should be channelled towards a broader discussion on the information paradox currently existing between individuals and organisations. Organisations need to stop viewing ambiguous and opaque privacy policies as a get-out-of-jail-free card. On the contrary, a clear and concise privacy policy outlining the details related to processing activity in simple language can go a long way in gaining consumer trust.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;The next time an “AI-based Selfie App” goes viral, let’s take a step back and analyse how it makes use of user-provided data and information both over and under the hood, since if data is the new gold, we can easily say that we’re in the midst of a gold rush.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/the-wire-mira-swaminathan-and-shweta-reddy-july-20-2019-old-isnt-always-gold-face-app-and-its-privacy-policies'&gt;https://cis-india.org/internet-governance/blog/the-wire-mira-swaminathan-and-shweta-reddy-july-20-2019-old-isnt-always-gold-face-app-and-its-privacy-policies&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Mira Swaminathan and Shweta Reddy</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2019-08-09T10:12:11Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier">
    <title>Roundtable Discussion on “The Future of AI Policy in India” @ ICRIER</title>
    <link>https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier</link>
    <description>
        &lt;b&gt;Radhika Radhakrishnan attended a Roundtable Discussion on “The Future of AI Policy in India” organized by the Indian Council for Research on International Economic Relations (ICRIER) in New Delhi on July 1, 2019, to arrive at actionable recommendations for the promotion of AI in India.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Radhika's inputs primarily focused on - capacity and skilling for AI adoption in India, sectoral opportunities for the adoption of AI, regulation of explanations for AI, fairness and bias in AI models, and actionable recommendations for government priorites for AI policies in India.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Concept Note&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;India’s Artificial Intelligence moment is truly here and now. At a time when a diverse range of applications based on AI are being developed, pushing its frontier further into uncharted realms of business and society, Indian policy makers are contemplating not just AI’s potential for growth and social transformation, but also its proclivity to create divides and inequality. Our study attempts to understand the impacts of AI and trace the pathways to realizing it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;AI’s transformational potential stems from its ability to lend itself to a diverse range of applications across a range of sectors. One can witness AI based applications in traditional spheres of manufacturing, which are transforming quality control, production lines, and supply chain management, and in services, which are creating personalized product offerings and high-quality customer engagement. AI applications are also common in sectors such as agriculture that have taken a back seat in technological innovations in the post-industrial world. AI also demonstrates potential for impacting developmental challenges by responding to societies’ immediate demand for healthcare, education and expanding access to finance and banking.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The consequences of AI diffusion stem from AI’s pervasiveness across society, its ability to trigger innovation, and its tendencies to undergo transformation and evolution. These are typical characteristics of a class of technologies that can be found across history, the emergence and diffusion of which have enabled the wealth of nations. These are called General Purpose Technologies (GPT). Technologies such as steam engine, electricity, computers, semi-conductors, and more recently the Internet, can all be conceived as belonging to the GPT class of technologies. Our study is based on the understanding that the implications of AI can be best understood by viewing AI as a GPT.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Historically, the economic impacts of GPTs have not been immediate but follow after its diffusion across the economy, i.e. over a period of time. There are two reasons that explain this phenomenon: firstly, in early phases of technology diffusion, an economy diverts part of its resources from productive activities to costly activities aimed at enabling the GPT. For instance, organizations adopting computers must also invest in training employees or hire computer scientists, re-arrange production activities or organizational structures to accommodate computer driven work-flows, all of which are costly economic activities. Secondly, it is only after the GPT is diffused and widely used in the economy that the statistics measuring GDP start counting and fully measuring the GPT.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Empirical research on GPTs such as AI, including ours, means confronting the challenge of measurement. Estimates on the economic impact of AI are bound to be imprecise because data on AI’s adoption is not available or adequately reflected in the data used to compute economic growth, at least not yet. Measuring the economic impact of AI is also difficult because of the magnitude of indirect effects on productivity that GPTs trigger. It is not therefore uncommon that studies on GPTs, while attempting to estimate their economic impacts, also engage in in-depth case studies and historical analysis of its impacts.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Our findings show unambiguous and positive impacts of AI on firm level productivity across sectors, although there is variation in the magnitude of positive impacts across sectors. We complement our findings with case studies that cover different firms that are developing AI based applications across a range of sectors to understand the underlying firm-level capabilities that drive innovations in AI based applications. Our study leads us towards high-level policy challenges facing organizations, civil society and government, and which when addressed enable the full realization of economic growth triggered by AI.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, our conclusions are a step-away from actionable policy recommendations. Given your experience with and within India’s AI based ecosystem, we invite you to deliberate and recommend insights and strategies that can help us arrive at concrete and practicable policy recommendations towards achieving a growth and welfare enhancing AI-based ecosystem in India.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Proposed Questions for Deliberation&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span&gt;In which sectors do we observe an immediate opportunity for the adoption of AI? What could be the nature of these applications?&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;In which areas of AI development and application is there an immediate opportunity for governments, industry and academia to collaborate?&lt;/li&gt;
&lt;li&gt;What should be the Government’s top five priorities in the next one year to catalyse the growth of AI in India?&lt;/li&gt;
&lt;li&gt;Which agencies of the Government should be involved in implementing India’s National AI mission, and how?&lt;/li&gt;
&lt;li&gt;What aspects of the Government’s capacity require enhancement to adapt to the challenges of a growing Indian AI-based ecosystem?&lt;/li&gt;
&lt;li&gt;What measures can the Government take to regulate for AI safety and ethical use of AI?&lt;/li&gt;
&lt;li&gt;What are the policy measures that the Government can undertake to safeguard against the consequences of AI based inequality?&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier'&gt;https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-10T01:46:36Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/huffington-post-gopal-sathe-july-4-2019-fintech-apps-privacy-snooping-credit-vidya">
    <title>How Sai Baba Was Made To Spy On Your Phone For Credit Ratings</title>
    <link>https://cis-india.org/internet-governance/news/huffington-post-gopal-sathe-july-4-2019-fintech-apps-privacy-snooping-credit-vidya</link>
    <description>
        &lt;b&gt;Researchers revealed that Hyderabad-based CreditVidya—a highly successful fintech company that rated people’s creditworthiness—collected data from people using music apps and Sai Baba apps.&lt;/b&gt;
        &lt;p&gt;The article by Gopal Sathe was &lt;a class="external-link" href="https://www.huffingtonpost.in/entry/fintech-apps-privacy-snooping-credit-vidya_in_5d1cbc34e4b082e55373370a"&gt;published by Huffington Post&lt;/a&gt; on July 4, 2019. CIS research was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;An Indian start-up that few outside the fintech industry would have heard of embedded tracking software inside popular apps, including one that streamed Sai Baba&lt;em&gt; &lt;/em&gt;stories and another that streamed Ilaiyaraaja songs, to scoop up sensitive user data including GPS locations, and business SMSes from ecommerce sites and banks to monitor spending activity, personal contacts, and much more, &lt;em&gt;HuffPost India&lt;/em&gt; has found.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;CreditVidya, a Hyderabad-based fin-tech company, ran this snooping code (technically known as a Software Development Kit or SDK) for several months in 2017 until a new version of Google’s Android operating system made it harder to scrape such data. The data, scooped up from users, was used to power CreditVidya’s self-learning algorithms that help lending companies determine the credit-worthiness of loan applicants. (Fin-tech is industry speak for financial technology, a fast growing category of software firms).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;SDKs like the one developed by CreditVidya are called “Middleware”. If you assume an app is like a machine, middleware would be a component or a cog in that machine. As apps grow more complex, developers often rely on middleware developed by third parties, increasing the risk that user data is scraped and sold on for a fee.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Upon installing these apps, many of which were developed by a third party app developer call Winjit, users would have been asked for access permissions that are increasingly common and intrusive, but would have had no idea that their personal data was being scraped and sold further in a manner that could affect their credit-worthiness.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Even though there might not be proper notice / informed consent, at least it’s understandable that lending apps that user uses is downloaded consciously and some night have knowledge on the fact that app,” said Srikanth L., a contributor to Cashless Consumer, a collective studying digital payments and fintech businesses in India. “The Creditvidya SDK was also found in a Sai Baba app, Ilaiyaraaja Hits app and other music apps of popular record labels with its SDK where user is clueless about this background data collection.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Thus a user could consent to an app collecting data without knowing how such data would be used.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;CreditVidya, Srikanth said, “used the data from unsuspecting users as part of the huge database it uses to generate the trust score, but there is opaqueness about where this data comes from and how many data brokers were engaged in trading personal data with companies like CreditVidya.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Worse, given that many of these algorithms are proprietary and hence un-auditable, it is unclear if these credit-rating apps even work. Users could find themselves denied credit, or charged high interest rates on the basis of purely arbitrary decision making by CreditVidya algorithms trained on data scraped on the sly.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Given how untransparent the industry is,” said Fredrike Kaltheuner, from the Data Exploitation Programme of Privacy International, a privacy-focused global non-profit organisation that investigates and advocates for user privacy. “It’s hard to say if this information is actually helping anyone get a loan. There are a lot of companies in this space now, but their algorithms are a black box, and the data they use is usually not clear either.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;CreditVidya and Winjit did not reply to &lt;em&gt;HuffPost India&lt;/em&gt;’s emailed requests for comment. We will update this story if the companies share a response.&lt;/p&gt;
&lt;h3&gt;Meet CreditVidya&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;CreditVidya does not offer loans directly to consumers. Instead, the company offers its services to over 50 lenders, ranging from banks like Axis Bank, DBS, Yes Bank, and financing companies like Tata Capital, TVS Credit, and Hero FinCorp, according CreditVidya’s website.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This means that when consumers approach these companies for loans, CreditVidya’s software helps determine if the loan should be given or not. To do so, the company compares a given loan application with its giant database, to evolve something called “Trust-score” that, the company claims, determines if the applicant is likely to pay back the loan.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The company raised Series A funding from Kalaari Capital, and Matrix Partners joined in its Series B round. It has raised a third round of funding as well, led by the Bharat Innovation Fund. One of the partners at the fund is Sanjay Jain, former Chief Product Officer at the UIDAI, and a volunteer at Bengaluru-based think-tank iSPIRT.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In a &lt;a href="https://medium.com/bharat-innovations-fund/why-we-invested-in-creditvidya-18a3b404af40" target="_blank"&gt;blog post&lt;/a&gt;, Kailash Nath, a Senior Associate at Bharat Innovation Fund wrote that CreditVidya processes over 500GB of data every day. It uses data related to over 10,000 parameters to assess creditworthiness, and plugs its SDK into the lenders’ apps, to make the decision to approve the loan or not. He added that the platform has processed over 25 million profiles so far. The post does not mention anything about the sources of this vast amount of data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“It’s not necessary that the data is coming from nefarious means,” said Saravanan K., a Bengaluru-based security consultant. “There could be any number of ways in which the company has acquired this data, and a lot of it is above board — people aren’t always aware of what they are signing up for, where they are giving their data.”&lt;/p&gt;
&lt;p&gt;“Your phone number acts as a unifying element, and then the amount of data that becomes available about you simply from offline sources will boggle your mind. But getting data directly from your phone can be very valuable, because it’s happening in real time and gives a very clear picture of what you are doing.”&lt;/p&gt;
&lt;p&gt;The companies doing all this data gathering are keeping quiet about the matter. For example, Srikanth found CreditVidya’s SDK in a number of applications made by Winjit, which has developed a number of music apps, including for huge companies like Times Music. However, the nature of the relationship between the two companies is not clear; nor have they made any public statement on why Winjit’s music apps carried CreditVidya’s lending SDK.&lt;/p&gt;
&lt;p&gt;When a user downloaded a Winjit app, it would create a profile linked to their phone number and then keep updating it, analysis of the SDK by Cashless Consumer showed. APIs in the SDK revealed code for initialising the user’s profile and for updating the collected data.&lt;/p&gt;
&lt;p&gt;A &lt;a href="https://cis-india.org/internet-governance/blog/aayush-rathi-and-shweta-mohandas-april-30-2019-fintech-in-india-a-study-of-privacy-and-security-commitments" target="_blank"&gt;report&lt;/a&gt; by Aayush Rathi and Shweta Mohandas for the Centre for Internet and Society that researched the privacy commitments taken by Indian fin-tech companies also goes over some of this ground.&lt;/p&gt;
&lt;p&gt;“The unprecedented growth of this sector with a number of players that have an amorphous nature (not banking entities) has concomitantly come with regulatory challenges around inter alia privacy and security concerns,” Rathi and Mohandas say in their report. “For instance, a survey of 1,300 senior executives in the global financial services, and fintech industries revealed that 54% of respondents identified privacy and data protection as barriers to fintech innovation.”&lt;/p&gt;
&lt;p&gt;They also noted that one study found that 79.4 percent of surveyed participants said they did not read privacy policies, and only 11 percent said they understood them. Another study of the most popular apps in India observed that privacy policies were drafted to protect the service providers from liability rather than to help consumers.&lt;/p&gt;
&lt;h3&gt;What’s in the SDK?&lt;/h3&gt;
&lt;p&gt;Analysis of the SDK by Srikanth suggests CreditVidya collected the following info:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Mobile IMEI&lt;/li&gt;
&lt;li&gt;All contacts&lt;/li&gt;
&lt;li&gt;Frequency of SIM changes, to see if the person frequently swaps SIMs&lt;/li&gt;
&lt;li&gt;GPS location&lt;/li&gt;
&lt;li&gt;Business SMS to monitor spending activity&lt;/li&gt;
&lt;li&gt;Wi-Fi on/off status&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Given that CreditVidya talks of over 10,000 data points, it’s safe to say that this is not all the information that the company is collecting about potential borrowers. What’s particularly worrying in this case though is how the information was being collected through applications that have nothing to do with lending.&lt;/p&gt;
&lt;p&gt;“They are collecting user specific data, and also location specific data for demographic mapping,” said Srikanth L. of Cashless Consumer.&lt;/p&gt;
&lt;blockquote class="pull-quote content-list-component"&gt;Getting data directly from your phone can be very valuable, because it’s happening in real time and gives a very clear picture of what you are doing.&lt;/blockquote&gt;
&lt;p&gt;Kaltheuner, from Privacy International, said this kind of arrangement with SDKs is not uncommon.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“A lot of researchers have come across such arrangements,” said Kaltheuner, “but it is very hard to find actual evidence.” In that sense, the work done by Cashless Consumer is very important, she added, as it shows how companies are quietly collecting user data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“But a bigger concern is the use of pre-installed applications for tracking,” she added. “These apps are installed by the phone manufacturers, or by the telecom companies, and that’s how you get very cheap smartphones being subsidised by third party trackers.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“These pre-installed trackers often don’t need to ask you for permission before getting access to your data, and they can have access to deeper information than the third-party trackers,” she said. This is made worse by how opaque the industry is; information flows in only one direction.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Middleware is very hard to track because there are a number of ways in which companies are going around regulations. Even if a developer doesn’t mean to take your data, it’s often very hard to know what all an SDK is going to do. This is a systemic problem in the industry, with a lot of reliance on third party software.”&lt;/p&gt;
&lt;h3&gt;Standard procedure in India&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Although a number of developers who spoke to &lt;em&gt;HuffPost India&lt;/em&gt; confirmed that practices like these are common in the Indian ecosystem, they refused to go on the record, explaining that this is normal business practice, and speaking out about it will lead to a loss of opportunities in the future.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“The big change was &lt;a href="https://www.huffingtonpost.in/news/google/"&gt;Google&lt;/a&gt; cracking down on this stuff, but otherwise it’s all over the place,” one developer based in Bengaluru said. “Like, there’s a company in Bombay whose business model is to offer its SDK for apps, and it basically gives you solutions like OTP capture — but it also keeps tracking SMS data afterwards, which is used to build a financial profile. And they offer a cut for doing this, so it subsidises the cost of developing the app.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another developer said that IBM’s analytics middleware has also created similar problems but refused to give any details fearing reprisals from the company which has offered his startup projects in the past. However, IBM denied the allegation—a representative said that it would require more technical details from the developer to give a detailed response, but the developer refused to share further information.&lt;/p&gt;
&lt;p&gt;But the problem is not limited to India. In May 2019, mobile app developer QuarkWorks found that one of its apps on the Google Play store was flagged and removed for violating store policies. &lt;a href="https://medium.com/quark-works/why-our-app-got-removed-from-the-google-play-store-how-we-fixed-it-4c8d430eafa0" target="_blank"&gt;According&lt;/a&gt; to Devun Schmutzler, a Native Mobile Developer at QuarkWorks, Google said the app was violating Android’s advertising ID policy.&lt;/p&gt;
&lt;p&gt;Google had identified that the app collected and transmitted the Android advertising identifier, which could be used to identify and target a user.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Except, according to Schmutzler, the app wasn’t either collecting, or transmitting any data as far as the developers were aware. It was at this point that the team carried out an investigation into the matter, and found their app was using an old version of Fabric Crashlytics—middleware developed by a third party, which was embedded in the Quarkworks app to analyze crashes and other software errors. The Crashlytics component was collecting this information without Quarkworks’s knowledge.&lt;/p&gt;
&lt;p&gt;But this wasn’t the only bit of middleware they found tracking sensitive user information.&lt;/p&gt;
&lt;p&gt;Firebase, a mobile and Web development platform acquired by Google, also does this, though it’s very easy to change the settings to stop sending this data, Schmutzler noted.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;OneSignal, which is used for high volume mobile and Web push notifications also tracks this user information, and QuarkWorks had to tweak the app to limit the data being shared. These were just the ones found in the case of a small app with limited libraries by one developer, but given the scale of the industry, the number of providers that are collecting user data in an opaque manner is simply staggering.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Google and &lt;a href="https://www.huffingtonpost.in/news/apple/"&gt;Apple&lt;/a&gt; have evolved policies against the sharing of background data through apps which are available online. Although the companies did not share details about the size of teams in India that audit apps, for both platforms privacy has become a big talking point with &lt;a href="https://www.huffingtonpost.in/2018/10/19/more-faceid-more-encryption-less-spam-is-privacy-the-best-reason-to-buy-apple-iphones_a_23564577/"&gt;Apple highlighting this&lt;/a&gt; for multiple years now, and Google also &lt;a href="https://venturebeat.com/2019/05/10/ai-weekly-google-focused-on-privacy-at-i-o-2019/" target="_blank"&gt;strongly talking&lt;/a&gt; about privacy in the last Google IO developer conference.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In India though, companies like this are likely to soon get another tool to use to track and profile users—&lt;a href="https://www.huffingtonpost.in/news/aadhaar/"&gt;Aadhaar&lt;/a&gt;. The Aadhaar Amendment bill is expected to pass in the Lok Sabha, and once it becomes a law, the use of Aadhaar by the private sector opens up again.&lt;/p&gt;
&lt;p&gt;Once that happens, aside from your phone number, there is also a permanent, immutable identity that can be used to track a person, or collate their information.&lt;/p&gt;
&lt;h3&gt;Is this data even useful?&lt;/h3&gt;
&lt;p&gt;It is possible that companies are compromising users’ privacy on a broad scale while producing results no more accurate than traditional lending methods.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;em&gt;HuffPost India&lt;/em&gt; reached out to several lending companies who did not wish to comment on this story once we explained that it was about the covert collection of user data, in the past, some of these companies have commented about the use of data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Speaking to this reporter in &lt;a href="https://gadgets.ndtv.com/apps/features/bengaluru-based-moneytap-on-why-its-happy-to-reject-95-percent-of-its-potential-customers-1670309" target="_blank"&gt;the past&lt;/a&gt;, Bala Parthasarathy, the Chairman and CEO of lending app MoneyTap said that “the data is not sophisticated enough. We use mostly traditional data. Right now, there are a lot of low hanging fruit whom the banks are too rigid for, and that’s where we can make a difference.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Typically, companies look at a number of different factors, so they’ll look at your account data, or they might read your SMS messages to track your spending,” he had said. “This is of course a privacy concern. But they read your transaction SMSes to understand your financial history. They might take a look at the apps on your phone, or your social media logins to see what kind of relationships you have, how strong a local circle you have, so they know you’re not going to disappear.”&lt;/p&gt;
&lt;p&gt;MoneyTap, on the other hand, he said, was mostly using user data to make filling in forms simpler, since they had to be entered through the company’s app on the phone.&lt;/p&gt;
&lt;p&gt;As Privacy International’s Kaltheuner pointed out, such algorithms being a black box means that there is no clarity on whether anyone is actually benefiting from such use of data, yet it is quickly becoming the norm.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/huffington-post-gopal-sathe-july-4-2019-fintech-apps-privacy-snooping-credit-vidya'&gt;https://cis-india.org/internet-governance/news/huffington-post-gopal-sathe-july-4-2019-fintech-apps-privacy-snooping-credit-vidya&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Gopal Sathe</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-07-08T14:04:35Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/didp-34-on-granular-detail-on-icanns-budget-for-policy-development-process">
    <title>DIDP #34 On granular detail on ICANN's budget for policy development process </title>
    <link>https://cis-india.org/internet-governance/blog/didp-34-on-granular-detail-on-icanns-budget-for-policy-development-process</link>
    <description>
        &lt;b&gt;ICANN has Advisory Committees which help guide the policy recommendations that the ICANN community develops while its Supporting Organizations are charged with developing policy recommendations for a particular aspect of ICANN's operations. Supporting Organizations are composed of volunteers from the community. ICANN publishes a combined budget for all these bodies under the head of policy development and CIS inquired about the financial resources allocated to each of them specifically. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The ICANN budgets are published for public comment yet the  community does not have supporting documents to illustrate how the  numbers were estimated or the rationale for allocation of the resources.  There is a lack of transparency when it comes to the internal budgeting.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This DIDP is concerned with the policy development budget which, as  Stephanie Perrin of the Non-Commercial Stakeholder Group pointed out,  was merely 5% of ICANN’s total budget, a number significantly low for a  policy making organization. Thus, the information we request is a  detailed breakdown for the budgets for every Advisory Council as well as  Supporting Organizations for the previous fiscal year. You can find the  &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/didp-on-budget/"&gt;attached request here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/didp-34-on-granular-detail-on-icanns-budget-for-policy-development-process'&gt;https://cis-india.org/internet-governance/blog/didp-34-on-granular-detail-on-icanns-budget-for-policy-development-process&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>akriti</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>ICANN</dc:subject>
    
    
        <dc:subject>DIDP</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-07-06T01:23:55Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/pibplans-a-fact-checking-unit-to-counter-fake-news">
    <title>PIB plans a fact-checking unit to counter fake news</title>
    <link>https://cis-india.org/internet-governance/news/pibplans-a-fact-checking-unit-to-counter-fake-news</link>
    <description>
        &lt;b&gt;Countering fake news has been high on the government’s agenda; in 2016, the MIB suggested expanding its analytics wing to monitor social media and set up an early warning system for possible flashpoints that the government may be unprepared for.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Smriti Kak Ramachandran was published in the &lt;a class="external-link" href="https://www.hindustantimes.com/india-news/pib-plans-a-fact-checking-unit-to-counter-fake-news/story-BwNk8Y0TTj5WThE2Cy8BFI.html"&gt;Hindustan Times&lt;/a&gt; on July 3, 2019. Sunil Abraham was quoted.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;The Press Information Bureau (PIB), the government’s nodal agency for dissemination of information, has decided to set up of a fact checking unit to identify and counter any fake news about the government and its policies circulating on social media platforms.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to a senior functionary aware of the development, the ministry of information and broadcasting (MIB), under which PIB is a unit has approved a plan to counter fake news in real-time. No deadline has been set so far for the project to take off, but it is expected to pick pace over the coming weeks. Details of how the tracking will be done, and the kind of accounts that will be tracked, weren’t immediately available.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The fact check unit will have officials from the PIB as well as employees hired on contract to monitor platforms such as Twitter, Facebook and Youtube to flag news that is fake and has the potential for creating social unrest.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“We will monitor and detect anything related to the government that is blatantly wrong, and put out correct information to ensure that people do not fall for wrong news,” the functionary quoted above said on condition of anonymity. He added that the possibility of penal action against those accounts found circulating fake news has not been discussed.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“We had a training session with a Hyderabad-based organisation, which does work in fact checking and putting out data that is meant for the public. Their experts helped us brainstorm on how to proceed with it,” the official said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Countering fake news has been high on the government’s agenda; in 2016; the MIB suggested expanding its analytics wing to monitor social media and set up an early warning system for possible flashpoints that the government may be unprepared for. The social media analytics wing of the ministry, which is now defunct, scrutinized posts on social media platforms to generate reports for the Prime Minister’s Office, the National Security Advisor’ s Office and various intelligence bureaus, aside from ministries including home affairs, external affairs and defence.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In 2018, the ministry constituted a committee to frame rules to regulate news portals and media websites. During the recently concluded Lok Sabha election, the election commission also worked with social media platforms to identify and pull down posts that were fake and could lead to vitiating the elections.&lt;br /&gt;As per EC’s data, 650 posts were taken down by Facebook for voter misinformation, hate speech, violation of the model code of conduct and public morality and decency. Similarly, Twitter took down 220 posts, Sharechat 31, Google 5 and Whatsapp, three.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Commenting on the government’s move to set up the fact check unit, Sunil Abraham, of the Centre for Internet and Society, a research organisation said, “It is a good move; but what the government also needs to do is to have a policy in place which makes it necessary for social media companies to pay for the negative externalities being circulated. If they make a certain amount in revenue from advertising then on a similar scale they need to fund the fact checking ecosystem.”&lt;br /&gt;On Monday Congress leader Digvijay Singh also demanded a policy to check fake news. Speaking during Zero Hour in the Rajya Sabha, Singh said fake news is more dangerous than terrorism.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He said fake news and unparliamentary language used on social media platforms trigger communal riots and create societal divide. “Many people (tweeting fake news) are followed by big people,” he said without naming anyone.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/pibplans-a-fact-checking-unit-to-counter-fake-news'&gt;https://cis-india.org/internet-governance/news/pibplans-a-fact-checking-unit-to-counter-fake-news&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Smriti Kak Ramachandran</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-07-05T02:31:49Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/setting-the-agenda-a-behavioural-science-approach-to-data-privacy">
    <title>Setting the Agenda: A Behavioural Science approach to Data Privacy</title>
    <link>https://cis-india.org/internet-governance/news/setting-the-agenda-a-behavioural-science-approach-to-data-privacy</link>
    <description>
        &lt;b&gt;Amber Sinha attended a meeting organised by the Centre for Social Behaviour Change (CSBC) at Ashoka University and the Busara Center for Behavioral Economics on 26 June 2019 at CSBC office, Vasant Vihar in New Delhi.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The session brought together a small group (8-12) of critical players from industry, academia, and the public sector to solicit inputs on the structure and content of India’s first experiment-based behavioural research on data privacy. This body of research, set to launch in the next few months,         will use a behavioural science approach to answer 4 main topics         facing data privacy: (1) consent practices, (2) business         advantages for enhanced privacy, (3) willingness to pay, and (4)         nudges to improve engagement in privacy. Equipped with a         behavioural science toolkit, we aim to produce new evidence         through lab and field experiments that help define best         practices in data privacy across these topics. More info &lt;a class="external-link" href="http://https//docs.google.com/forms/d/e/1FAIpQLSdeO82nsXJLR09P5BJBvxxfPEF7rn4t3RG5W7CvMXbFM3MGKg/viewform"&gt;here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/setting-the-agenda-a-behavioural-science-approach-to-data-privacy'&gt;https://cis-india.org/internet-governance/news/setting-the-agenda-a-behavioural-science-approach-to-data-privacy&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2019-07-04T16:47:31Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/digital-id-forum-2019">
    <title>Digital ID Forum 2019</title>
    <link>https://cis-india.org/internet-governance/news/digital-id-forum-2019</link>
    <description>
        &lt;b&gt;Sunil Abraham was one of the panelists at this event at Chulalongkorn University on July 3, 2019.&lt;/b&gt;
        &lt;p&gt;&lt;img src="https://cis-india.org/home-images/DigitalID.png" alt="Digital ID" class="image-inline" title="Digital ID" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;Click to &lt;/span&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/digital-id-forum"&gt;view the agenda&lt;/a&gt;&lt;span&gt;. Also see &lt;/span&gt;&lt;a class="external-link" href="https://en.wikipedia.org/wiki/Asia_Source"&gt;Wikipedia page&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/digital-id-forum-2019'&gt;https://cis-india.org/internet-governance/news/digital-id-forum-2019&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Digital ID</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Appropriate Use of Digital ID</dc:subject>
    
    
        <dc:subject>Digital Identity</dc:subject>
    

   <dc:date>2019-08-07T14:09:16Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gurshabad-grover-july-3-2019-impact-of-consolidation-in-the-internet-economy-on-the-evolution-of-the-internet">
    <title>The Impact of Consolidation in the Internet Economy on the Evolution of the Internet </title>
    <link>https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gurshabad-grover-july-3-2019-impact-of-consolidation-in-the-internet-economy-on-the-evolution-of-the-internet</link>
    <description>
        &lt;b&gt;The Centre for Internet and Society in partnership with the Internet Society organized an event on the impact of consolidation in the Internet economy. It was divided into two roundtable discussions, the first one focusing on the policies and regulation while the latter dealt with the technical evolution of the Internet. This report contributed to the Internet Society’s 2019 Global Internet Report on Consolidation in the Internet Economy.&lt;/b&gt;
        &lt;p&gt;Edited by Swaraj Barooah, Elonnai Hickok and Vishnu Ramachandran. Inputs by Swagam Dasgupta&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;This report is a summary of the proceedings of the roundtables organized by the Centre for Internet and Society in partnership with the Internet Society on the impact of consolidation in the Internet economy. It was conducted under the Chatham House Rule, at The Energy and Resource Institute, Bangalore on the 29 June 2018 from 11AM to 4PM. This report was authored on 29 June 2018, and subsequently edited for readability on 25 June 2019. This report contributed to the Internet Society’s 2019 Global Internet Report on Consolidation in the Internet Economy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The roundtables aimed to analyze how growing forces of consolidation, including concentration, vertical and horizontal integration, and barriers to market entry and competition would influence the Internet in the next 3 to 5 years.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;To provide for sufficient investigation, the discussions were divided across two sessions. The focus of the first group was the impact of consolidation on applicable regulatory andpolicy norms including regulation of internet services, the potential to secure or undermine people’s ability to choose services, and the overall impact on the political economy. Thesecond discussion delved into the effect of consolidation on the technical evolution of the internet (in terms of standards, tools and software practices) and consumer choices (interms of standards of privacy, security, other human rights).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The sessions had participants from the private sector (2), research (4), government (1), technical community (3) and civil society organizations (6). Five women and eleven men constituted the participant list.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/files/isoc-report.pdf"&gt;&lt;strong&gt;Click to download and read the full report&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gurshabad-grover-july-3-2019-impact-of-consolidation-in-the-internet-economy-on-the-evolution-of-the-internet'&gt;https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gurshabad-grover-july-3-2019-impact-of-consolidation-in-the-internet-economy-on-the-evolution-of-the-internet&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Akriti Bopanna and Gurshabad Grover</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2019-07-03T12:53:53Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/deccan-herald-june-30-2019-rajmohan-sudhakar-facebook-s-libra-a-bit-too-ambitious">
    <title>Facebook’s Libra: A bit too ambitious?</title>
    <link>https://cis-india.org/internet-governance/news/deccan-herald-june-30-2019-rajmohan-sudhakar-facebook-s-libra-a-bit-too-ambitious</link>
    <description>
        &lt;b&gt;Power desperately finds ways to propagate at the slightest hint of losing lustre, which the social network is beginning to experience at the moment.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Rajmohan Sudhakar was published &lt;a class="external-link" href="https://www.deccanherald.com/metrolife/metrolife-lifestyle/facebook-s-libra-a-bit-too-ambitious-743890.html"&gt;in the Deccan Herald&lt;/a&gt; on June 30, 2019. Sunil Abraham was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;A social network must remain a social network and first prove it is indeed one. Not a bank. Damning revelations concerning ethics, privacy and transparency notwithstanding.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;For a tech giant with an exclusive window to our private lives, launching its own currency is of course the natural course, to coax us back into compliance, away from brewing skepticism.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Facebook’s sketchy white paper on Libra, its proposed virtual currency, probably drawing much from the libertarian Bitcoin, is no doubt an attempt to capitalise on the 2.4 billion users it has collected in such a short span, eventually to wield a grip on the still vulnerable global financial system gradually recovering from a crisis.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, the brute power of big tech looms large over sovereign nation states.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In his piece in The Atlantic, professor Eric Posner of the University of Chicago Law School writes Libra will replicate all the current problems generated by Facebook. “In the name of eliminating inefficiency and injustice in the financial system around the globe, Facebook’s new cryptocurrency threatens to replay what’s become a familiar story—of tech companies blithely reshaping the world around them, and significantly increasing their power over people’s lives, while being accountable to no one,” the professor goes on to say.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Power finds ways to propagate at the slightest hint of losing lustre, which the social network is beginning to experience at the moment.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;To salvage that lost credibility, Facebook has teamed up with a group of corporates including Visa, Uber, Spotify, PayPal and the like to form the Libra Association, a Geneva-based not-for-profit, just to disrupt and dominate the global economic future.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Given the social network’s massive reach, the proposal, if realised, could transform international payments. That said, the intent is suspect.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“The main concern with Facebook entering the cryptocurrency market is a question of competition law. Facebook can use its market dominance in online advertising, end-to-end encrypted messaging and social media to engage in anti-competitive behaviour,” warns Sunil Abraham of The Centre for Internet and Society, a Bengaluru-based not-for-profit.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Facebook stands 90th in the world in terms of GDP, along with many other firms of such standing that would want to influence and dictate global monetary policy if the project goes ahead and realises its commercial goal, which in all likelihood is very much possible: Facebook has already built its applications and networks. It doesn’t need to sign people up. Lambs for the slaughter indeed.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The argument that the network puts forward, of uplifting the billions excluded from the financial ecosystem in the emerging world, is in earnest a genuine cause. But one doesn’t need a brand new private currency pepped up by a big tech consortium for that. For instance, in India, where the network has at least 300 million users, Libra cannot operate within the existing norms. Moreover, a foreign private entity controlling the sovereign is rather dystopian.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“So far, India has taken quite a conservative policy towards cryptocurrencies because of concerns from a counter-terrorism and taxation perspective. It is perhaps good that policymakers were waiting and watching. Now the policy will have to address the challenge posed by Facebook entering this market,” notes Abraham.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Cryptocurrency is ‘virtual money’ which should only be used for non-financial purposes along the lines of loyalty points, airline miles, etc. which could be exchanged for real cash transactions such as discounts. It poses the danger of fraud. If Facebook uses Libra to clock up loyalty points, that’s fine. But can’t link it to your bank account,” cautions Kiran Mazumdar-Shaw.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;To digress, Bitcoin works because people trust it over government-regulated money. It has no reserve. It appreciates as more people use it. Now, Facebook is offering a tweak: unlike Bitcoin, Libra will have a reserve set up by the consortium. In short, trust neither the government nor the libertarian Bitcoin, but a private entity with a reserve.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Facebook’s argument may be convincing given its clout, security infrastructure and reach. But what is worrying is that Facebook-led big tech now wants to be a virtual country, with its own economy, not having to bow down to nation states and regulators, because it thinks it is powerful enough to do so.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This would definitely have banks and regulators in a fix. “While Facebook might have a surveillance friendly position on KYC requirements for users, it might be in greater consumer interest for the government to ensure there is an interoperable competitive oligopoly of cryptocurrency service providers that prevent the winner takes all phenomenon,” observes Abraham.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;What’s Libra anyway?&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Libra is a virtual currency proposed by a Facebook-led private consortium, with a reserve, unlike Bitcoin. That means Libra would have an initial capital funded by the consortium to help make it a stable currency, like the dollar. Users could trade globally, enjoying lower costs, higher speed and high security, on platforms run by Facebook (Calibra) and its associates. According to Facebook, Libra is intended to ensure financial inclusion. But how this will be achieved in countries like India, where the currency would not easily charm the people, government or regulators, is not yet quite clear.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/deccan-herald-june-30-2019-rajmohan-sudhakar-facebook-s-libra-a-bit-too-ambitious'&gt;https://cis-india.org/internet-governance/news/deccan-herald-june-30-2019-rajmohan-sudhakar-facebook-s-libra-a-bit-too-ambitious&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Rajmohan Sudhakar</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-07-02T05:14:56Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/about/newsletters/june-2019-newsletter">
    <title>June 2019 Newsletter</title>
    <link>https://cis-india.org/about/newsletters/june-2019-newsletter</link>
    <description>
        &lt;b&gt;The Centre for Internet &amp; Society Newsletter for June 2019.&lt;/b&gt;
        &lt;h2&gt;&lt;span&gt;Highlights for June 2019&lt;/span&gt;&lt;/h2&gt;
&lt;table class="listing"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;&lt;span&gt;Rohini Lakshané and Shweta Mohandas have updated their paper '&lt;a class="external-link" href="https://cis-india.org/a2k/blogs/rohini-lakshane-and-shweta-mohandas-june-30-2019-joining-the-dots-in-india-s-big-ticket-mobile-phone-sep-litigation"&gt;Joining the Dots in India's Big-Ticket Mobile Phone SEP Litigation&lt;/a&gt;' which chronicles mobile device SEP litigation in India. All developments in the lawsuits filed in the Delhi High Court and complaints made to the CCI that were published in reliable sources till 20 September 2018 are mentioned in this paper.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Christ University in association with CIS-A2K organized the &lt;a class="external-link" href="https://cis-india.org/a2k/blogs/subodh-kulkarni-june-22-2019-wikimedia-education-saarc-conference-2019"&gt;Wikimedia Education SAARC Conference 2019&lt;/a&gt;, from 20 - 22 June 2019, at Christ University, Bengaluru. Forty-nine Wikimedians from four countries (Sri Lanka, Nepal, Bangladesh, and India) participated in the event. Subodh Kulkarni in his report reveals that the event discussed challenges pertaining to retention and quality and methodologies to evaluate and measure the work being done on Wikipedia.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;CIS and the Internet Society organized an event on the impact of consolidation in the Internet economy. The round-tables focused on policy and regulation, and on the technical evolution of the Internet. &lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gurshabad-grover-july-3-2019-impact-of-consolidation-in-the-internet-economy-on-the-evolution-of-the-internet"&gt;The report&lt;/a&gt; by Akriti Bopanna and Gurshabad Grover contributed to the Internet Society’s 2019 Global Internet Report on Consolidation in the Internet Economy.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;An &lt;a class="external-link" href="https://cis-india.org/raw/sadaf-khan-data-bleeding-everywhere-a-story-of-period-trackers"&gt;excerpt from an essay&lt;/a&gt; by Sadaf Khan, written for and published as part of the Bodies of Evidence collection of Deep Dives was mirrored on the website. The Bodies of Evidence collection, edited by Bishakha Datta and Richa Kaul Padte, is a collaboration between Point of View and the Centre for Internet and Society, undertaken as part of the Big Data for Development Network supported by International Development Research Centre, Canada.&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;&lt;span&gt;CIS and the News&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;The following news pieces were authored by CIS and published on its website in June:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/telecom/blog/shyam-ponappa-business-standard-june-6-2019-5g-aspirations-and-realities"&gt;5G Aspirations and Realities&lt;/a&gt; (Shyam Ponappa; Business Standard; June 6, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/raw/indian-express-nishant-shah-june-16-2019-staying-silent-about-cyberbullying-is-no-longer-an-option"&gt;Staying silent about cyberbullying is no longer an option&lt;/a&gt; (Nishant Shah; Indian Express; June 16, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/raw/indian-express-nishant-shah-june-30-2019-facebook-sees-its-salvation-with-its-cryptocurrency-libra"&gt;Facebook sees its salvation with its cryptocurrency Libra&lt;/a&gt; (Nishant Shah; Indian Express; June 30, 2019).&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;CIS in the News&lt;/h3&gt;
&lt;p&gt;CIS was quoted in these news articles published elsewhere:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/telecom/news/livemint-navadha-pandey-june-4-2019-plugging-into-indias-broadband-revolution"&gt;Plugging into India’s broadband  revolution&lt;/a&gt; (Navadha Pandey; Livemint; June 4, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/economic-times-anjali-venugopalan-june-4-2019-banking-on-artificial-intelligence"&gt;Banking on artificial intelligence: In hiring drive, Bots are calling the shots now&lt;/a&gt; (Anjali Venugopalan; Economic Times; June 4, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/columbia-journal-of-asia-law-june-6-2019-ricardo-vecellio-segate"&gt;Fragmenting Cybersecurity Norms through the Language(s) of Subalternity: India in the East and the Global Community&lt;/a&gt; (Riccardo Vecellio Segate; Columbia Journal of Asian Law; June 6, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/abigail-d-pershing-yale-journal-of-international-law-interpreting-the-outer-space-treaty-s-non-appropriation-principle"&gt;Interpreting the Outer Space Treaty's Non-Appropriation Principle: Customary International Law from 1967 to Today&lt;/a&gt; (Abigail D. Pershing; Yale Journal of International Law; June 6, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/the-news-minute-shilpa-s-ranipeta-june-10-2019-no-fintech-company-meets-every-single-privacy-requirement-under-it-act-cis-report"&gt;No Fintech company meets every single privacy requirement under IT Act: CIS report&lt;/a&gt; (Shilpa S. Ranipeta; The News Minute; June 10, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/geetika-mantri-june-14-2019-the-news-minute-facebook-to-pay-indians-to-give-up-privacy"&gt;Facebook to pay Indians to give up privacy: Experts raise questions&lt;/a&gt; (Geetika Mantri; The News Minute; June 14, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/indian-express-june-23-2019-chasing-fame-and-fun-15-seconds-at-a-time"&gt;Chasing fame and fun 15 seconds at a time: Why TikTok has India hooked&lt;/a&gt; (Tora Agarwala, Surbhi Gupta, and Karishma Mehrotra; Indian Express; June 23, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/deccan-herald-june-30-2019-rajmohan-sudhakar-facebook-s-libra-a-bit-too-ambitious"&gt;Facebook’s Libra: A bit too ambitious?&lt;/a&gt; (Rajmohan Sudhakar; Deccan Herald; June 30, 2019).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;a class="external-link" href="https://cis-india.org/a2k"&gt;Access to Knowledge&lt;/a&gt;&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Our Access to Knowledge programme currently consists of two projects. The Pervasive Technologies project, conducted under a grant from the International Development Research Centre (IDRC), aims to conduct research on the complex interplay between low-cost pervasive technologies and intellectual property, in order to encourage the proliferation and development of such technologies as a social good. The Wikipedia project, which is under a grant from the Wikimedia Foundation, is for the growth of Indic language communities and projects by designing community collaborations and partnerships that recruit and cultivate new editors and explore innovative approaches to building projects.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Copyright &amp;amp; Patent&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Research Paper&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/blogs/rohini-lakshane-and-shweta-mohandas-june-30-2019-joining-the-dots-in-india-s-big-ticket-mobile-phone-sep-litigation"&gt;Joining the Dots in India's Big-Ticket Mobile Phone SEP Litigation&lt;/a&gt; (Rohini Lakshané and Shweta Mohandas; June 30, 2019).&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 style="text-align: justify; "&gt;Wikipedia&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;As part of the &lt;a href="http://cis-india.org/a2k/access-to-knowledge-program-plan"&gt;project grant from the Wikimedia Foundation&lt;/a&gt; we have reached out to more than 3500 people across India by organizing more than 100 outreach events and catalysed the release of encyclopaedic and other content under the Creative Commons (CC-BY-3.0) license in four Indian languages (21 books in Telugu, 13 in Odia, 4 volumes of encyclopaedia in Konkani and 6 volumes in Kannada, and 1 book on Odia language history in English).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Blog Entries&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/blogs/what-is-wikimedia-education-saarc-conference-1"&gt;What is Wikimedia Education SAARC Conference?&lt;/a&gt; (Sailesh Patnaik; June 5, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/blogs/wikimedians-at-all-india-radio-mangaluru"&gt;Karavali Wikimedians at All India Radio, Mangaluru&lt;/a&gt; (Bharathesha Alasandemajalu; June 13, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/blogs/subodh-kulkarni-june-22-2019-wikimedia-education-saarc-conference-2019"&gt;Wikimedia Education SAARC Conference 2019&lt;/a&gt; (Subodh Kulkarni; June 22, 2019). &lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/blogs/svg-translation-workshop-kannada-2"&gt;SVG translation workshop Kannada&lt;/a&gt; (Gopala Krishna; June 29, 2019).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Jobs&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;CIS-A2K team is seeking applications for the following posts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/cis-a2k-communication-officer-position"&gt;Communication Officer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/project-tiger-2019-coordinator-position-open"&gt;Project Tiger 2019 Coordinator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/wikidata-advisor-consultant-position-open"&gt;Wikidata Advisor&lt;/a&gt; (Consultant)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Event Organized&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/a2k/events/wikimedia-education-saarc-conference"&gt;Wikimedia Education SAARC conference&lt;/a&gt; (Christ University; Bangalore; &lt;span&gt;June 20 - 22, 2019&lt;/span&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Openness&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Innovation and creativity are fostered through openness and collaboration. The advent of the Internet radically defined what it means to be open and collaborative. The Internet itself is built upon open standards and free/libre/open source software. Our work in the Openness programme focuses on open data, especially open government data, open access, open education resources, open knowledge in Indic languages, open media, and open technologies and standards - hardware and software.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Event Hosted&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/openness/events/discussion-on-open-standards-with-bernd-erk-and-jiten-vaidya"&gt;Discussion on Open Standards with Bernd Erk and Jiten Vaidya&lt;/a&gt; (CIS, Bangalore; June 20, 2019).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Participation in Event&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/openness/news/rootconf-2019"&gt;Rootconf 2019&lt;/a&gt; (Organized by HasGeek; NIMHANS Convention Centre, Bangalore; June 21 - 22, 2019). Karan Saini participated in the event.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance"&gt;Internet Governance&lt;/a&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;As part of its research on privacy and free speech, CIS is engaged with two different projects. The first one (under a grant from Privacy International and IDRC) is on surveillance and freedom of expression (SAFEGUARDS). The second one (under a grant from MacArthur Foundation) is on restrictions that the Indian government has placed on freedom of expression online.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Privacy&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Blog Entry&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/blog/akriti-bopanna-and-gurshabad-grover-july-3-2019-impact-of-consolidation-in-the-internet-economy-on-the-evolution-of-the-internet"&gt;The Impact of Consolidation in the Internet Economy on the Evolution of the Internet&lt;/a&gt; (Akriti Bopanna and Gurshabad Grover; July 3, 2019).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Event Organized&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/setting-the-agenda-a-behavioural-science-approach-to-data-privacy"&gt;Setting the Agenda: A Behavioural Science approach to Data Privacy&lt;/a&gt; (Organized by the Centre for Social Behaviour Change, Ashoka University, and the Busara Center for Behavioral Economics; Vasant Vihar, New Delhi). Amber Sinha attended the meeting.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Participation in Events&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/pranesh-prakash-as-resource-person-for-itd-seminar-on-competition"&gt;Pranesh Prakash as Resource Person for ITD seminar on Competition&lt;/a&gt;&lt;span&gt; (Organized by the International Institute for Trade and Development; Bangkok; June 24 - 26, 2019). Pranesh Prakash was also a speaker in the session on Consumer Protection and Digital Rights: Defining Welfare and Fair Competition.&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;span&gt;Cyber Security&lt;/span&gt;&lt;/h3&gt;
&lt;div&gt;
&lt;p&gt;&lt;strong&gt;Participation in Events&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/telecom/news/the-global-nature-of-cybersecurity-in-a-changing-world"&gt;The Global Nature of Cybersecurity in a Changing World&lt;/a&gt; (Organized by Hewlett Foundation; San Diego; June 20 - 22, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/workshop-on-cyber-ethics-values-driven-innovative-solutions"&gt;Workshop on Cyber-Ethics: Values-driven Innovative Solutions&lt;/a&gt; (Organized by Embassy of Switzerland; Bangalore; June 28, 2019). Arindrajit Basu moderated a discussion on Cyber-Ethics.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h3&gt;Free Speech&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Participation in Events&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/icann-masterclass"&gt;ICANN Masterclass&lt;/a&gt; (Organized by ICANN, Bangalore; June 19, 2019). Akriti Bopanna attended the event.&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/news/icann-65"&gt;ICANN 65&lt;/a&gt; (Organized by ICANN; Morocco; June 24 - 27, 2019). Akriti Bopanna attended the event.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;a class="external-link" href="https://cis-india.org/telecom"&gt;Telecom&lt;/a&gt;&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;The growth in telecommunications in India has been impressive. While the potential for growth and returns exists, a range of issues needs to be addressed for this potential to be realized. One aspect is more extensive rural coverage; the second is countrywide access to broadband, which is low at about eight million subscriptions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Article&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/telecom/blog/shyam-ponappa-business-standard-june-6-2019-5g-aspirations-and-realities"&gt;5G Aspirations and Realities&lt;/a&gt; (Shyam Ponappa; Organizing India Blogspot; June 6, 2019).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a class="external-link" href="https://cis-india.org/raw"&gt;Researchers at Work (RAW)&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Researchers at Work (RAW) programme is an interdisciplinary research initiative driven by an emerging need to understand the reconfigurations of social practices and structures through the Internet and digital media technologies, and vice versa. It aims to produce local and contextual accounts of interactions, negotiations, and resolutions between the Internet, and socio-material and geo-political processes:&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Essay&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://cis-india.org/raw/sadaf-khan-data-bleeding-everywhere-a-story-of-period-trackers"&gt;Data bleeding everywhere: a story of period trackers&lt;/a&gt; (Sadaf Khan; June 11, 2019). This was written for and published as part of the Bodies of Evidence collection of Deep Dives.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Participation in Event&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;&lt;a class="external-link" href="https://cis-india.org/raw/unpacking-video-based-surveillance-in-new-delhi-urban-data-justice"&gt;Unpacking video-based surveillance in New Delhi&lt;/a&gt; (Organized by the University of Manchester, England; June 14, 2019). Aayush Rathi and Ambika Tandon presented their research on 'Urban Data, Inequality and Justice in the Global South'.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Blog Entries&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="external-link" href="https://medium.com/rawblog/patreon-understanding-the-intersection-of-art-artiste-and-labour-baaf79eb9b04"&gt;Patreon: Understanding the Intersection of Art, Artiste, and Labour&lt;/a&gt; (Upasana Bhattacharjee; June 1, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://medium.com/rawblog/archivesforstorytelling-38af721b6d7e"&gt;#ArchivesForStorytelling&lt;/a&gt; (Aliyeh Rizvi, Bhanu Prakash, Dinesh, Malini Ghanathe, and Venkat Srinivasan; June 1, 2019).&lt;/li&gt;
&lt;li&gt;&lt;a class="external-link" href="https://medium.com/rawblog/digitalidentities-7914ea5a7ea0"&gt;#DigitalIdentities&lt;/a&gt; (Anjali K Mohan, Harish Boya, Janaki Srinivasan, Khetrimayum Monish Singh, and Sarita Seshagiri; June 11, 2019).&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;&lt;a class="external-link" href="http://cis-india.org/"&gt;About CIS&lt;/a&gt;&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;The Centre for Internet and Society (CIS) is a non-profit organisation that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with disabilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, open access, open educational resources, and open video), internet governance, telecommunication reform, digital privacy, and cyber-security. The academic research at CIS seeks to understand the reconfigurations of social and cultural processes and structures as mediated through the internet and digital media technologies.&lt;/p&gt;
&lt;p&gt;► Follow us elsewhere&lt;/p&gt;
&lt;div&gt;
&lt;ul&gt;
&lt;li&gt;Twitter:&lt;a href="http://twitter.com/cis_india"&gt; http://twitter.com/cis_india&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Twitter - Access to Knowledge: &lt;a href="https://twitter.com/CISA2K"&gt;https://twitter.com/CISA2K&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Twitter - Information Policy: &lt;a href="https://twitter.com/CIS_InfoPolicy"&gt;https://twitter.com/CIS_InfoPolicy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Facebook - Access to Knowledge:&lt;a href="https://www.facebook.com/cisa2k"&gt; https://www.facebook.com/cisa2k&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;E-Mail - Access to Knowledge: &lt;a&gt;a2k@cis-india.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;E-Mail - Researchers at Work: &lt;a&gt;raw@cis-india.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;List - Researchers at Work: &lt;a href="https://lists.ghserv.net/mailman/listinfo/researchers"&gt;https://lists.ghserv.net/mailman/listinfo/researchers&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;p&gt;► Support Us&lt;/p&gt;
&lt;div&gt;Please help us defend consumer and citizen rights on the Internet! Write a cheque in favour of 'The Centre for Internet and Society' and mail it to us at No. 194, 2nd 'C' Cross, Domlur, 2nd Stage, Bengaluru - 560 071.&lt;/div&gt;
&lt;p&gt;► Request for Collaboration&lt;/p&gt;
&lt;div&gt;
&lt;p style="text-align: justify; "&gt;We invite researchers, practitioners, artists, and theoreticians, both organisationally and as individuals, to engage with us on topics related to internet and society, and improve our collective understanding of this field. To discuss such possibilities, please write to Sunil Abraham, Executive Director, at sunil@cis-india.org (for policy research), or Sumandro Chattapadhyay, Research Director, at sumandro@cis-india.org (for academic research), with an indication of the form and the content of the collaboration you might be interested in. To discuss collaborations on Indic language Wikipedia projects, write to Tanveer Hasan, Programme Officer, at &lt;a&gt;tanveer@cis-india.org&lt;/a&gt;.&lt;/p&gt;
&lt;div style="text-align: justify; "&gt;&lt;i&gt;CIS is grateful to its primary donor, the Kusuma Trust, founded by Anurag Dikshit and Soma Pujari, philanthropists of Indian origin, for its core funding and support for most of its projects. CIS is also grateful to its other donors, the Wikimedia Foundation, Ford Foundation, Privacy International (UK), Hans Foundation, MacArthur Foundation, and IDRC, for funding its various projects.&lt;/i&gt;&lt;/div&gt;
&lt;/div&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/about/newsletters/june-2019-newsletter'&gt;https://cis-india.org/about/newsletters/june-2019-newsletter&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Access to Knowledge</dc:subject>
    

   <dc:date>2019-07-16T16:00:11Z</dc:date>
   <dc:type>Page</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/workshop-on-cyber-ethics-values-driven-innovative-solutions">
    <title>Workshop on Cyber-Ethics: Values-driven Innovative Solutions</title>
    <link>https://cis-india.org/internet-governance/news/workshop-on-cyber-ethics-values-driven-innovative-solutions</link>
    <description>
        &lt;b&gt;Arindrajit Basu moderated a discussion on Cyber-Ethics at swissnex India (Consulate General of Switzerland) in Bangalore on 28 June 2019. The event was organized by the Embassy of Switzerland.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Cyber-space – the virtual reality – influences all countries in the world and all sectors of society. The cyber-world of e-mails, e-commerce, e-government, e-education, e-music, e-prosecutors, artificial intelligence, crypto-currencies are daily reality, with new opportunities. On the other hand, cyber-bullying, cyber-criminality, cyber-security, cyber-war etc. are great challenges.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Cyber-ethics looks for values-driven innovative solutions to these challenges and opportunities between freedom and privacy, security and peace. Switzerland is a world leader in innovation, India is a world leader in information technologies. How can both countries strengthen ethical, values-driven solutions for the cyber-world? Indian and Swiss Experts present challenges and solutions.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Programme&lt;/h3&gt;
&lt;p class="Standard"&gt;10.00     Registration &amp;amp; welcome tea n coffee&lt;/p&gt;
&lt;p class="Standard"&gt;10:30     &lt;b&gt;Welcome remarks&lt;/b&gt;&lt;/p&gt;
&lt;p class="Standard"&gt;&lt;b&gt;Mr.Sebastien Hug&lt;/b&gt;, CEO, swissnex India and Consul General of Switzerland&lt;/p&gt;
&lt;p class="Standard"&gt;10:35     &lt;b&gt;Keynote address: Cyber-Ethics between Global Values and Contextual Interests&lt;/b&gt;&lt;/p&gt;
&lt;p class="Standard"&gt;&lt;b&gt;Prof. Dr. H.C. Christoph Stückelberger&lt;/b&gt;, Founder and President of Globethics.net, Visiting Professor of Ethics in Nigeria, Russia, China&lt;/p&gt;
&lt;p class="Standard"&gt;11:05      &lt;b&gt;Moderated panel discussion&lt;/b&gt;&lt;/p&gt;
&lt;p class="Standard"&gt;&lt;i&gt;Moderator&lt;/i&gt;: &lt;b&gt;Arindrajit Basu, &lt;/b&gt;Senior Policy Officer, Center for Internet and Society,&lt;/p&gt;
&lt;p class="Standard"&gt;&lt;i&gt;Panelists&lt;/i&gt;:&lt;/p&gt;
&lt;p class="Standard"&gt;&lt;b&gt;Dr. Pavan Duggal&lt;/b&gt;, Founder and President of the International Commission on Cyber Security Law, Advocate at Supreme Court of India&lt;/p&gt;
&lt;p class="Standard"&gt;&lt;b&gt;Dr Siobhán Martin&lt;/b&gt;, Deputy Head, Leadership, Crisis and Conflict Management, Geneva Centre for Security Policy&lt;/p&gt;
&lt;p class="Standard"&gt;&lt;b&gt;Mr Sameer Chothani&lt;/b&gt;, Managing Director - Group Technology, India, UBS&lt;/p&gt;
&lt;p class="Standard"&gt;12:15     Q&amp;amp;A&lt;/p&gt;
&lt;p class="Standard"&gt;12:45     Networking lunch&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/workshop-on-cyber-ethics-values-driven-innovative-solutions'&gt;https://cis-india.org/internet-governance/news/workshop-on-cyber-ethics-values-driven-innovative-solutions&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Cyber Security</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-07-06T00:51:20Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/icann-65">
    <title>ICANN 65</title>
    <link>https://cis-india.org/internet-governance/news/icann-65</link>
    <description>
        &lt;b&gt;Akriti Bopanna attended ICANN 65 in Marrakech, Morocco, from 24 to 27 June 2019.&lt;/b&gt;
        &lt;div id="_mcePaste"&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;Akriti spoke on ICANN and Human Rights at a session organized by the At-Large and Non-Commercial Users Constituency.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;The Government Advisory Council discussed how government representatives can get involved in the Human Rights Impact Assessment work which the working party that she co-chairs on Human Rights at ICANN has been conducting. Akriti spoke on the feasibility of organizing a High Interest Session on Human Rights at ICANN66.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;Akriti participated in a public meeting of ICANN's Board on their Anti-Harassment Policy and my suggestions/remarks on improving the samte were received well.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/icann-65'&gt;https://cis-india.org/internet-governance/news/icann-65&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>ICANN</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-07-06T01:08:36Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
