<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">
<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  <description>These are the search results for the query, showing results 31 to 45.</description>
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
      <rdf:li rdf:resource="https://cis-india.org/news/tech-dirt-june-8-2013-indian-govt-quietly-brings-central-monitoring-system"/>
      <rdf:li rdf:resource="https://cis-india.org/news/livemint-anirban-sen-june-29-2013-issue-of-duplication-of-identities-of-users-under-control"/>
      <rdf:li rdf:resource="https://cis-india.org/news/times-of-india-javed-anwer-june-9-2013-facebook-google-deny-spying-access"/>
      <rdf:li rdf:resource="https://cis-india.org/news/times-of-india-june-22-2013-kim-arora-cyber-experts-suggest-open-source-software-to-protect-privacy"/>
      <rdf:li rdf:resource="https://cis-india.org/news/time-world-anjan-trivedi-june-30-2013-in-india-prison-like-surveillance-slips-under-the-radar"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/report-on-cis-workshop-at-igf"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/regulation-of-cross-border-transfers-of-personal-data-in-asia"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/economic-times-june-10-2018-sunil-abraham-why-npci-and-facebook-need-urgent-regulatory-attention"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework"/>
      <rdf:li rdf:resource="https://cis-india.org/a2k/blogs/ace-7-french-charter-cis-comment"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/medianama-february-18-2021-amber-sinha-data-protection-bill-consent-clause-state-function"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/medianama-february-21-2022-amber-sinha-data-protection-bill-digital-healthcare-case-study"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/hindustan-times-pranesh-prakash-april-3-2017-aadhaar-marks-a-fundamental-shift-in-citizen-state-relations"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/the-times-of-india-april-6-2017-umesh-yadav-bengaluru-cops-twitter-handle-in-ethical-storm"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/india-today-neha-vashishth-april-6-2017-privacy-what-bengaluru-police-leaks-phone-numbers-on-twitter"/>
    </rdf:Seq>
  </items>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/news/tech-dirt-june-8-2013-indian-govt-quietly-brings-central-monitoring-system">
    <title>Indian Government Quietly Brings In Its 'Central Monitoring System': Total Surveillance Of All Telecommunications</title>
    <link>https://cis-india.org/news/tech-dirt-june-8-2013-indian-govt-quietly-brings-central-monitoring-system</link>
    <description>
        &lt;b&gt;There's a worrying trend around the world for governments to extend online surveillance capabilities to encompass all citizens -- often justified with the usual excuse of combatting terrorism and/or child pornography.&lt;/b&gt;
        &lt;hr /&gt;
&lt;p&gt;The blog post was &lt;a class="external-link" href="https://www.techdirt.com/articles/20130508/09302923002/indian-government-quietly-brings-its-central-monitoring-system-total-surveillance-all-communications.shtml"&gt;published in &lt;b&gt;Techdirt&lt;/b&gt;&lt;/a&gt; on June 8, 2013. Pranesh Prakash is quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;The latest to join this unhappy club is India, which has put in place what sounds like &lt;a href="http://timesofindia.indiatimes.com/tech/tech-news/internet/Government-can-now-snoop-on-your-SMSs-online-chats/articleshow/19932484.cms"&gt;a massively intrusive system&lt;/a&gt;, as this article from The Times of India makes clear:&lt;/p&gt;
&lt;blockquote style="text-align: justify; "&gt;&lt;i&gt;The government last month quietly began rolling out a  project that gives it access to everything that happens over India's  telecommunications network -- online activities, phone calls, text  messages and even social media conversations. Called the Central  Monitoring System, it will be the single window from where government  arms such as the National Investigation Agency or the tax authorities  will be able to monitor every byte of communication.&lt;/i&gt;&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;This project has been under development for two years, but in almost total secrecy:&lt;/p&gt;
&lt;blockquote style="text-align: justify; "&gt;&lt;i&gt;"In the absence of a strong privacy law that promotes  transparency about surveillance and thus allows us to judge the utility  of the surveillance, this kind of development is very worrisome," warned  Pranesh Prakash, director of policy at the Centre for Internet and  Society. "Further, this has been done with neither public nor  parliamentary dialogue, making the government unaccountable to its  citizens."&lt;/i&gt;&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;That combination of total surveillance and zero transparency is a dangerous one, providing the perfect tool for monitoring and controlling political and social dissent. If India wishes to maintain its claim to be "the world's largest democracy", its government would do well to introduce some safeguards against abuse of the new system, such as strong privacy laws, as well as engaging the Indian public in an open debate about &lt;a href="https://cis-india.org/internet-governance/blog/indias-big-brother-the-central-monitoring-system"&gt;what exactly such extraordinary surveillance powers might be used for&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/news/tech-dirt-june-8-2013-indian-govt-quietly-brings-central-monitoring-system'&gt;https://cis-india.org/news/tech-dirt-june-8-2013-indian-govt-quietly-brings-central-monitoring-system&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
    <dc:subject>Internet Governance</dc:subject>
    <dc:subject>Privacy</dc:subject>
    

   <dc:date>2013-07-02T09:12:49Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/news/livemint-anirban-sen-june-29-2013-issue-of-duplication-of-identities-of-users-under-control">
    <title>Issue of duplication of identities of users under control: Nilekani</title>
    <link>https://cis-india.org/news/livemint-anirban-sen-june-29-2013-issue-of-duplication-of-identities-of-users-under-control</link>
    <description>
        &lt;b&gt;Nandan Nilekani says UIDAI system almost completely accurate, duplication of identities virtually negligible.&lt;/b&gt;
        &lt;hr /&gt;
&lt;p&gt;The article by Anirban Sen was &lt;a class="external-link" href="http://www.livemint.com/Politics/jgihdb9IkoT0ui0sC2viIM/Issue-of-duplication-of-identities-of-users-under-control-N.html"&gt;published in Livemint&lt;/a&gt; on June 29, 2013. Sunil Abraham is quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The Unique Identification Authority of India (UIDAI) chief &lt;span class="person"&gt;&lt;a href="http://www.livemint.com/Search/Link/Keyword/Nandan%20Nilekani"&gt;Nandan Nilekani&lt;/a&gt;&lt;/span&gt; said the government agency was in preliminary discussions with some  embassies to use the Aadhaar project to simplify visa application  procedures and that the issue of duplication of identities of users was  well under control.&lt;/p&gt;
&lt;p class="mceContentBody documentContent" style="text-align: justify; "&gt;In March, a UIDAI spokesperson told &lt;i&gt;Mint&lt;/i&gt; that it  had detected 34,015 cases where one person had been issued two Aadhaar  numbers. The figures represented a little over 0.01% of the 290 million  people who had been enrolled at the time.&lt;/p&gt;
&lt;p class="mceContentBody documentContent" style="text-align: justify; "&gt;Nilekani, who was delivering a keynote address at a  three-day conference on the success and failures of information  technology (IT) in the public and private sector at the Indian Institute  of Management in Bangalore, said the UIDAI system was almost completely  accurate and duplication of identities was virtually negligible.&lt;/p&gt;
&lt;p class="mceContentBody documentContent" style="text-align: justify; "&gt;“Knowing what we know now, we believe we have accuracy of up to 99.99%,” said Nilekani, chairman of the Unique Identification Authority of India (UIDAI).&lt;/p&gt;
&lt;p class="mceContentBody documentContent" style="text-align: justify; "&gt;Nilekani, on Saturday, assured that the project was  completely secure and user data and biometrics were safe in the hands of  the agencies it works with and brushed aside any concerns on security  of user data that have been widely raised by Internet security groups  and activists.&lt;/p&gt;
&lt;p class="mceContentBody documentContent" style="text-align: justify; "&gt;“We’re not giving any access to data, except when it is  resident authorized. It is shared only when a resident participates in a  transaction and authorizes the data which is shared,” said Nilekani,  who was one of the seven co-founders of India’s second largest software  exporter &lt;span class="company"&gt;&lt;a href="http://www.livemint.com/Search/Link/Keyword/Infosys%20Ltd"&gt;Infosys Ltd&lt;/a&gt;&lt;/span&gt;. He served as CEO of Infosys from 2002 to 2007.&lt;/p&gt;
&lt;p class="mceContentBody documentContent" style="text-align: justify; "&gt;“The system is also not open to the internet—the system has rings of authentications of service agencies. There are lots of concentric rings of security,” he added. “The biometric data is not used except for enrolment, de-duplication and authentication.”&lt;/p&gt;
&lt;p class="mceContentBody documentContent" style="text-align: justify; "&gt;Internet rights groups and activists such as &lt;span class="person"&gt;&lt;a href="http://www.livemint.com/Search/Link/Keyword/Sunil%20Abraham"&gt;Sunil Abraham&lt;/a&gt;&lt;/span&gt; of the Centre for Internet and Society (CIS), a research think tank that focuses on issues of Internet governance, have often raised concerns over UID’s overly broad scope and privacy issues in the project.&lt;/p&gt;
&lt;p class="mceContentBody documentContent" style="text-align: justify; "&gt;“We don’t need Aadhaar because we already have a much  more robust identity management and authentication system based on  digital signatures that has a proven track record of working at a  “billions-of-users” scale on the Internet with reasonable security. The  Unique Identification (UID) project based on the so-called  “infallibility of biometrics” is deeply flawed in design. These design  disasters waiting to happen cannot be permanently thwarted by band-aid  policies,” Abraham wrote in a blog post on the CIS website last year.&lt;/p&gt;
&lt;p class="mceContentBody documentContent" style="text-align: justify; "&gt;Nilekani also acknowledged that the department had faced  several challenges, due to the sheer scale of the project that aims to  cover the country’s entire population of 1.2 billion.&lt;/p&gt;
&lt;p class="mceContentBody documentContent" style="text-align: justify; "&gt;“We have had lots of challenges on this project—we have backlogs of enrolment because we have more packets than we can process, we have backlogs of letter deliveries because we cannot handle so many letters…but fundamentally, notwithstanding those challenges, we believe we are on the right track,” said Nilekani.&lt;/p&gt;
&lt;p class="mceContentBody documentContent" style="text-align: justify; "&gt;Both UIDAI and the census department under the National  Population Register project are recording biometric data, which includes  fingerprint and iris data. Even though both the agencies reached a  truce after a cabinet decision in January 2012 and were allowed to  co-exist, there have been several reports of duplication between the two  agencies in biometric collection.&lt;/p&gt;
&lt;p class="mceContentBody documentContent" style="text-align: justify; "&gt;UIDAI is not just being used as the main platform for  rolling out the government’s direct cash transfer scheme, but is also  being regarded as an important authentication scheme for financial  transactions and other security measures.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/news/livemint-anirban-sen-june-29-2013-issue-of-duplication-of-identities-of-users-under-control'&gt;https://cis-india.org/news/livemint-anirban-sen-june-29-2013-issue-of-duplication-of-identities-of-users-under-control&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
    <dc:subject>Internet Governance</dc:subject>
    <dc:subject>Privacy</dc:subject>
    

   <dc:date>2013-07-02T10:13:10Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/news/times-of-india-javed-anwer-june-9-2013-facebook-google-deny-spying-access">
    <title>Facebook, Google deny spying access</title>
    <link>https://cis-india.org/news/times-of-india-javed-anwer-june-9-2013-facebook-google-deny-spying-access</link>
    <description>
        &lt;b&gt;The CEOs of Facebook and Google on Saturday categorically denied that the US National Security Agency had "direct access" to their company servers for snooping on Gmail and Facebook users. But both acknowledged that the companies complied with the 'lawful' requests made by the US government and shared user data with sleuths.&lt;/b&gt;
        &lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The article by Javed Anwer was &lt;a class="external-link" href="http://articles.timesofindia.indiatimes.com/2013-06-09/internet/39849496_1_facebook-ceo-mark-zuckerberg-user-data-ceo-larry-page"&gt;published in the Times of India&lt;/a&gt; on June 9, 2013. Pranesh Prakash is quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;In a post titled "What the ...?" on Google's official blog, CEO &lt;a href="http://timesofindia.indiatimes.com/topic/Larry-Page"&gt;Larry Page&lt;/a&gt; wrote, "We have not joined any program that would give the US government—or any other government—direct access to our servers. We had not heard of a program called PRISM until yesterday."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A few hours later, Facebook CEO &lt;a href="http://timesofindia.indiatimes.com/topic/Mark-Zuckerberg"&gt;Mark Zuckerberg&lt;/a&gt; responded. "Facebook is not and has never been part of any program to  give the US or any other government direct access to our servers... We  hadn't even heard of PRISM before yesterday," he wrote on his page at  the social media site.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to a few PowerPoint slides allegedly leaked by an NSA official, nine technology companies - Google, AOL, Apple, Yahoo, Microsoft, Skype, Facebook, YouTube and PalTalk - are providing the US government easy access to user data. While all the companies have denied being part of anything called PRISM, Facebook and Google have been the most vocal about it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A few hours after Facebook  and Google statements, the New York Times said in a report that  technology companies had "opened discussions with national security  officials about developing technical methods to more efficiently and  securely share the personal data of foreign users".&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"In some cases, they (companies) changed their computer systems to do so," noted the NYT report.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The statements by the CEOs have done little to allay privacy fears.  "The denials from the companies look highly coordinated, including  similar phrases in all their responses. I don't think they are lying  outright, though the NYT report suggests that they are telling a  half-truth. They may not provide the US government 'direct access' to  all their servers, but may be providing indirect access, or may just be  responding to very broad FISA orders," said Pranesh Prakash, a policy  director with Centre for Internet and Society in India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On Friday US president &lt;a href="http://timesofindia.indiatimes.com/topic/Barack-Obama"&gt;Barack Obama&lt;/a&gt; had tacitly acknowledged NSA surveillance programmes aimed at non-US  citizens. "You can't have a hundred per cent security and also then have  a hundred per cent privacy and zero inconvenience. You know, we're  going to have to make some choices as a society," he told reporters in  the US.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Page and Zuckerberg also called on the governments to be  more open about surveillance programmes. "The level of secrecy around  the current legal procedures undermines the freedoms we all cherish,"  wrote Page.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Added Zuckerberg, "We strongly encourage all  governments to be much more transparent about all programs aimed at  keeping the public safe. It's the only way to protect everyone's civil  liberties and create the safe and free society we all want over the long  term."&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/news/times-of-india-javed-anwer-june-9-2013-facebook-google-deny-spying-access'&gt;https://cis-india.org/news/times-of-india-javed-anwer-june-9-2013-facebook-google-deny-spying-access&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
    <dc:subject>Privacy</dc:subject>
    <dc:subject>Freedom of Speech and Expression</dc:subject>
    <dc:subject>Public Accountability</dc:subject>
    <dc:subject>Internet Governance</dc:subject>
    <dc:subject>Censorship</dc:subject>
    

   <dc:date>2013-07-02T10:18:48Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/news/times-of-india-june-22-2013-kim-arora-cyber-experts-suggest-open-source-software-to-protect-privacy">
    <title>Cyber experts suggest using open source software to protect privacy</title>
    <link>https://cis-india.org/news/times-of-india-june-22-2013-kim-arora-cyber-experts-suggest-open-source-software-to-protect-privacy</link>
    <description>
        &lt;b&gt;Big Brother is watching. With the Central Monitoring System (CMS) at home and PRISM from the US, millions of users worldwide have become vulnerable to online surveillance by state agencies without even realizing it. No surprise, several cyber security experts feel that building one's own personal firewall is a good way of fortifying online privacy.&lt;/b&gt;
        &lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The article by Kim Arora was &lt;a class="external-link" href="http://articles.timesofindia.indiatimes.com/2013-06-22/internet/40133453_1_source-software-cyanogenmod-encryption"&gt;published in the Times of India&lt;/a&gt; on June 22, 2013. Sunil Abraham is quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;One enterprising netizen has compiled a list of services, from social &lt;a href="http://timesofindia.indiatimes.com/topic/Ne%28x%29tworks"&gt;networks&lt;/a&gt; to email clients, and even web browsers, that offer better protection  from surveillance. They are listed on a web page called prism-break.org.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;When asked about steps that a digital native can take to protect his  privacy and online data, Sunil Abraham, executive director of  Bangalore-based non-profit Center for Internet and Society said, "Stop  using proprietary software, shift to free/open source software for your  operating system and applications on your computer and phone. &lt;a href="http://timesofindia.indiatimes.com/topic/Android"&gt;Android&lt;/a&gt; is not sufficiently free; shift to CyanogenMod. Encrypt all sensitive  Internet traffic and email using software like TOR and GNU Privacy  Guard. Use community based infrastructure such as Open Street Maps and  Wikipedia. Opt for alternatives to mainstream services. For example,  replace Google Search with DuckDuckGo."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Use of licensed or proprietary software, which binds users legally when it comes to use and distribution, seems to be losing favour among an informed niche. While alternative software cannot offer absolute protection, it is being seen as a "better-than-nothing" option. Anonymisers like TOR, though also not entirely foolproof, are a popular option among those who wish to keep their web usage untraceable. Once installed on a browser, anonymisers can hide the route that digital traffic takes when sent from your computer over a network before emerging at an end node.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There is one caveat, though. Some websites can deny service to users  operating on certain anonymising networks. Also, anonymisers are known  to reduce browsing speeds. In India, where broadband speeds are already  abysmally low, anything that slows one down even further would find  popularity hard to come by.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Computer and network security expert Aseem Jakhar also recommends open source software, since it offers the convenience of customization to suit one's encryption needs and lets users verify the source code. For laypersons, there are other tools. "One can use anonymisers like TOR which encrypt your communication and hide your identity. With these it becomes very difficult to exactly locate the source. For email clients, it is best to use ones that offer end-to-end strong encryption," he says. Jakhar, co-founder of open security community "null", also recommends the use of customized &lt;a href="http://timesofindia.indiatimes.com/topic/Linux"&gt;Linux&lt;/a&gt; systems for more advanced users. Default Linux distributions, he points out, may bundle free online services which can again be analysed by governments.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The home-bred CMS programme seeks to directly procure data pertaining to call records and internet usage for intelligence purposes without going through telecom service providers. There were fears of abuse when information about the programme, kept under strict wraps by the government, trickled in. The Department of Telecom and the Ministry of IT and Communication have been reticent about the state of implementation of the Rs 400-crore programme.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;PRISM, a similar, international monitoring programme mounted by the US and revealed to the world by the US National Security Agency whistleblower Edward &lt;a href="http://timesofindia.indiatimes.com/topic/Snowden-%28musician%29"&gt;Snowden&lt;/a&gt;, has raised concerns about safeguarding digital information the world over.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/news/times-of-india-june-22-2013-kim-arora-cyber-experts-suggest-open-source-software-to-protect-privacy'&gt;https://cis-india.org/news/times-of-india-june-22-2013-kim-arora-cyber-experts-suggest-open-source-software-to-protect-privacy&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
    <dc:subject>Internet Governance</dc:subject>
    <dc:subject>Privacy</dc:subject>
    

   <dc:date>2013-07-03T04:32:48Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/news/time-world-anjan-trivedi-june-30-2013-in-india-prison-like-surveillance-slips-under-the-radar">
    <title>In India, Prism-like Surveillance Slips Under the Radar</title>
    <link>https://cis-india.org/news/time-world-anjan-trivedi-june-30-2013-in-india-prison-like-surveillance-slips-under-the-radar</link>
    <description>
        &lt;b&gt;Prism, the contentious U.S. data-collection surveillance program, has captured the world’s attention ever since whistle-blower Edward Snowden leaked details of global spying to the Guardian and Washington Post.&lt;/b&gt;
        &lt;p&gt;The article by Anjan Trivedi was &lt;a class="external-link" href="http://world.time.com/2013/06/30/in-india-prism-like-surveillance-slips-under-the-radar/#ixzz2XoCbrn00"&gt;published in Time World&lt;/a&gt; on June 30, 2013. Sunil Abraham is quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;However, it turns out &lt;a href="http://topics.time.com/india/"&gt;India&lt;/a&gt;,  the world’s largest democracy, is building its own version to monitor  internal communications in the name of national security. Yet India’s  Central Monitoring System, or CMS, was not shrouded in secrecy — New  Delhi &lt;a href="http://www.dot.gov.in/sites/default/files/AR%20Englsih%2011-12_0.pdf"&gt;announced&lt;/a&gt; its intentions to watch over its citizens, however mutedly, in &lt;a href="http://pib.nic.in/newsite/erelease.aspx?relid=70747"&gt;2011&lt;/a&gt;, and rollout is slated for August. And while reports that the American system collected 6.3 billion &lt;a href="http://www.guardian.co.uk/world/2013/jun/08/nsa-boundless-informant-global-datamining"&gt;intelligence reports&lt;/a&gt; in India led to a &lt;a href="http://m.indianexpress.com/news/supreme-court-agrees-to-hear-pil-on-us-surveillance-of-internet-data/1131011/"&gt;lawsuit&lt;/a&gt; at the nation’s &lt;a href="http://topics.time.com/supreme-court/"&gt;Supreme Court&lt;/a&gt;, comparable indignation has been conspicuously lacking with the domestic equivalent.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;CMS is an ambitious surveillance system that monitors text messages,  social-media engagement and phone calls on landlines and cell phones,  among other communications. That means 900 million landline and  cell-phone users and 125 million Internet users. The project, which is  being implemented by the government’s &lt;a href="http://www.cdot.in/about_us/berif_history.htm"&gt;Centre for Development of Telematics&lt;/a&gt; (&lt;a href="http://pib.nic.in/newsite/erelease.aspx?relid=78145"&gt;C-DOT&lt;/a&gt;),  is meant to help national law-enforcement agencies save time and avoid  manual intervention, according to the Department of Telecommunications’ &lt;a href="http://www.dot.gov.in/sites/default/files/Telecom%20Annual%20Report-2012-13%20%28English%29%20_For%20web%20%281%29.pdf"&gt;annual report&lt;/a&gt;.  This has been in the works since 2008, when C-DOT started working on a  proof-of-concept, according to an older report. The government &lt;a href="http://planningcommission.nic.in/aboutus/committee/wrkgrp12/cit/wgrep_telecom.pdf"&gt;set aside&lt;/a&gt; approximately $150 million for the system as part of its 12th five-year  plan, although the Cabinet ultimately approved a higher amount.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Within the internal-security ministry though, the surveillance system  remains a relatively “hush-hush” topic, a project official unauthorized  to speak to the press tells TIME. In April 2011, the Police  Modernisation Division of the Home Affairs Ministry put out a 90-page  tender to solicit bidders for communication-interception systems in  every state and union territory of India. The system requirements  included “live listening, recording, storage, playback, analysis,  postprocessing” and voice recognition.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Civil-liberties groups concede that states often need to undertake targeted-monitoring operations. However, the move toward extensive “surveillance capabilities enabled by digital communications,” suggests that governments are now “casting the net wide, enabling intrusions into private lives,” according to Meenakshi Ganguly, South Asia director for Human Rights Watch. This extensive communications surveillance through the likes of Prism and CMS is “out of the realm of judicial authorization and allow unregulated, secret surveillance, eliminating any transparency or accountability on the part of the state,” a recent U.N. &lt;a href="http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A.HRC.23.40_EN.pdf"&gt;report&lt;/a&gt; stated.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;India is no stranger to censorship and monitoring — tweets, blogs, books or songs are frequently blocked and banned. India ranked second only to the U.S. on Google’s list of user-data requests with 4,750 queries, up &lt;a href="http://www.google.com/transparencyreport/userdatarequests/IN/"&gt;52% from two years back&lt;/a&gt;, and removal requests from the government &lt;a href="http://www.google.com/transparencyreport/removals/government/IN/?metric=items&amp;amp;p=2012-12"&gt;increased by 90%&lt;/a&gt; over the previous reporting period. While these were largely made through police or court orders, the new system will not require such a legal process. In recent times, India’s democratically elected government has barred access to certain websites and Twitter handles, restricted the number of outgoing text messages to five per person per day and arrested citizens for liking Facebook posts and tweeting. Historically too, censorship has been India’s preferred means of policing social unrest. “Freedom of expression, while broadly available in theory,” Ganguly tells TIME, “is endangered by abuse of various Indian laws.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There is a growing discrepancy and power imbalance between citizens  and the state, says Anja Kovacs of the Internet Democracy Project. And,  in an environment like India where “no checks and balances [are] in  place,” that is troubling. The potential for misuse and  misunderstanding, Kovacs believes, is increasing enormously. Currently,  India’s laws relevant to interception “disempower citizens by relying  heavily on the executive to safeguard individuals’ constitutional  rights,” a recent &lt;a href="http://www.indianexpress.com/news/way-to-watch/1133737/0"&gt;editorial&lt;/a&gt; noted. The power imbalance is often noticeable at public protests, as  in the case of the New Delhi gang-rape incident in December, when the  government shut down public transport near protest grounds and  unlawfully detained demonstrators.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;With an already sizeable and growing population of Internet users,  the government’s worries too are on the rise. Netizens in India are set  to triple to 330 million by 2016, &lt;a href="http://startupcatalyst.in/wp-content/uploads/2013/05/From_Buzz_to_Bucks_Apr_2013_tcm80-132875.pdf"&gt;according to a recent report&lt;/a&gt;.  “As [governments] around the world grapple with the power of social  media that can enable spontaneous street protests, there appears to be  increasing surveillance,” Ganguly explains.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;India’s junior minister for telecommunications attempted to explain the benefits of this system during a &lt;a href="http://www.youtube.com/watch?v=rwTsek5WUfE"&gt;recent Google+ Hangout&lt;/a&gt; session. He acknowledged that CMS is something that “most people may  not be aware of” because it’s “slightly technical.” A participant noted  that the idea of such an intrusive system was worrying and he did not  feel safe. The minister, though, insisted that it would “safeguard your  privacy” and national security. Given the high-tech nature of CMS, he  noted that telecom companies would no longer be part of the government’s  surveillance process. India currently does &lt;a href="http://www.hrw.org/news/2013/06/07/india-new-monitoring-system-threatens-rights"&gt;not&lt;/a&gt; have formal privacy legislation to prohibit arbitrary monitoring. The new system comes under the &lt;a href="http://pib.nic.in/newsite/erelease.aspx?relid=71791"&gt;jurisdiction&lt;/a&gt; of the Indian Telegraph Act of 1885, which allows for monitoring communication in the “interest of public safety.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The surveillance system is not only an “abuse of privacy rights and  security-agency overreach,” critics say, but also counterproductive in  terms of security. In the process of collecting data to monitor criminal  activity, the data itself may become a target for terrorists and  criminals — a “honeypot,” according to Sunil Abraham, executive director  of India’s Centre for Internet and Society. Additionally, the  wide-ranging tapping undermines financial markets, Abraham says, by  compromising confidentiality, trade secrets and intellectual property.  What’s more, vulnerabilities will have to be built into the existing  cyberinfrastructure to make way for such a system. Whether the nation’s  patchy infrastructure will be able to handle a complex web of  surveillance and networks, no one can say. That, Abraham contends, is  what attackers will target.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;National security has widely been cited as the reason for this  system, but no one can say whether it will actually help avert terrorist  activity. India’s own 9/11 is a case in point: the Indian government  was handed intelligence by foreign agencies about the possibility of the  2008 Mumbai terrorist attacks, but did not act. This is a “clear  indication that having access to massive amounts of data is not  necessarily going to make people safer,” Kovacs tells TIME. However,  officers familiar with the new system say it will not increase  surveillance or enhance intrusion beyond current levels; it will only  strengthen the policy framework of privacy and increase &lt;a href="http://pib.nic.in/newsite/erelease.aspx?relid=80829"&gt;operational efficiency&lt;/a&gt;.  Spokespersons and officials in the internal-security and telecom  departments did not respond to requests or declined to comment.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The government has been cagey about details on implementation and &lt;a href="http://pib.nic.in/newsite/PrintRelease.aspx?relid=70791"&gt;extent&lt;/a&gt;.  This ability to act however the authorities deems fit “just makes it  really easy to slide into authoritarianism, and that is not acceptable  for any democratic country,” Kovacs says. Indeed, India has seen that  before — almost four decades ago, Indira Gandhi declared a state of  emergency for 19 months, which suspended all civil liberties. Indians  complaining about Prism may want to look a little closer to home.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/news/time-world-anjan-trivedi-june-30-2013-in-india-prison-like-surveillance-slips-under-the-radar'&gt;https://cis-india.org/news/time-world-anjan-trivedi-june-30-2013-in-india-prison-like-surveillance-slips-under-the-radar&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Surveillance</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2013-07-03T09:31:18Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/report-on-cis-workshop-at-igf">
    <title>Report on CIS' Workshop at the IGF:'An Evidence Based Framework for Intermediary Liability'</title>
    <link>https://cis-india.org/internet-governance/report-on-cis-workshop-at-igf</link>
    <description>
        &lt;b&gt;'An evidence based framework for intermediary liability' was organised to present evidence and discuss ongoing research on the changing definition, function and responsibilities of intermediaries across jurisdictions.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The discussion from the workshop will contribute to a comprehensible framework for liability, consistent with the capacity of the intermediary and with international human-rights standards.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Electronic Frontier Foundation (USA), Article 19 (UK) and Centre for Internet and Society (India) have come together towards the development of best practices and principles related to the regulation of online content through intermediaries. The nine principles are: Transparency, Consistency, Clarity, Mindful Community Policy Making, Necessity and Proportionality in Content Restrictions, Privacy, Access to Remedy, Accountability, and Due Process in both Legal and Private Enforcement. The workshop discussion will contribute to a comprehensible framework for liability that is consistent with the capacity of the intermediary and with international human-rights standards. The session was hosted by Centre for Internet and Society (India) and Centre for Internet and Society, Stanford (USA) and attended by 7 speakers and 40 participants.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Jeremy Malcolm, Senior Global Policy Analyst EFF kicked off the workshop highlighting the need to develop a liability framework for intermediaries that is derived out of an understanding of their different functions, their role within the economy and their impact on human rights. He went on to structure the discussion which would follow to focus on ongoing projects and examples that highlight central issues related to gathering and presenting evidence to inform the policy space.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Martin Husovec from the International Max Planck Research School for Competition and Innovation, began his presentation, tracking the development of safe harbour frameworks within social contract theory. Opining that safe harbour was created as a balancing mechanism between a return of investments of the right holders and public interest for Internet as a public space, he introduced emerging claims that technological advancement have altered this equilibrium. Citing injunctions and private lawsuits as instruments, often used against law abiding intermediaries, he pointed to the problem within existing liability frameoworks, where even intermediaries, who diligently deal with illegitimate content on their services, can be still subject to a forced cooperation to the benefit of right holders. He added that for liability frameworks to be effective, they must keep pace with advances in technology and are fair to right holders and the public interest.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He also pointed that in any liability framework because the ‘law’ that prescribes an interference, must be always sufficiently clear and foreseeable, as to both the meaning and nature of the applicable measures, so it sufficiently outlines the scope and manner of exercise of the power of interference in the exercise of the rights guaranteed. He illustrated this with the example of the German Federal Supreme Court attempts with Wi-Fi policy-making in 2010. He also raised issues of costs of uncertainty in seeking courts as the only means to balance rights as they often, do not have the necessary information. Similarly, society also does not benefit from open ended accountability of intermediaries and called for a balanced approach to regulation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The need for consistency in liability regimes across jurisdictions, was raised by Giancarlo Frosio, Intermediary Liability Fellow at Stanford's Centre for Internet and Society. He introduced the World Intermediary Liability Map, a project mapping legislation and case law across 70 countries towards creating a repository of information that informs policymaking and helps create accountability. Highlighting key takeaways from his research, he stressed the necessity of having clear definitions in the field of intermediary liability and the need to develop taxonomy of issues to deepen our understanding of the issues at stake towards an understanding of type of liability appropriate for a particular jurisdiction.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Nicolo Zingales, Assistant Professor of Law at Tilburg University highlighted the need for due process and safeguards for human rights and called for more user involvement in systems that are in place in different countries to respond to requests of takedown. Presenting his research findings, he pointed to the imbalance in the way notice and takedown regimes are structured, where content is taken down presumptively, but the possibility of restoring user content is provided only at a subsequent stage or not at all in many cases. He cited several examples of enhancing user participation in liability mechanisms including notice and notice, strict litigation sanction inferring the knowledge that the content might have been legal and shifting the presumption in favor of the users and the reverse notice and takedown procedure. He also raised the important question, if multistakeholder cooperation is sufficient or adequate to enable the users to have a say and enter as part of the social construct in this space? Reminding the participants of the failure of the multistakeholder agreement process regarding the cost for the filters in the UK, that would be imposed according to judicial procedure, he called for strengthening our efforts to enable users to get more involved in protecting their rights online.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Gabrielle Guillemin from Article 19 presented her research on the types of intermediaries and models of liability in place across jurisdictions. Pointing to the problems associated with intermediaries having to monitor content and determine legality of content, she called for procedural safeguards and stressed the need to place the dispute back in the hands of users and content owners and the person who has written the content rather than the intermediary. She goes on to provide some useful and practically-grounded solutions to strengthen existing takedown mechanisms including, adding details to the notices, introducing fees in order to extend the number of claims that are made and defining procedure regards criminal content.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Elonnai Hickok introduced CIS' research to the UNESCO report Fostering Freedom Online: the Role of Internet Intermediaries, comparing a range of liability models in different stages of development and provisions across jurisdictions. She argued for a liability framework that tackles procedural and regulatory uncertainty, lack of due process, lack of remedy and varying content criteria.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Francisco Vera, Advocacy Director, Derechos Digitales from Chile raised issues related to mindful community policy-making expounding on Chile's implementation of intermediary liability obligation with the USA, the introduction of judicial oversight under Chilean legislation which led to US objection to Chile on grounds of not fulfilling their standards in terms of Internet property protection. He highlighted the tensions that arise in balancing the needs of the multiple communities and interests engaged over common resources and stressed the need for evidence in policy-making to balance the needs of rights holders and public interest. He stressed the need for evidence to inform policy-making and ensure it keeps pace with technological developments citing the example of the ongoing Transpacific Partnership Agreement negotiations that call for exporting provisions DMCA provisions to 11 countries even though there is no evidence of the success of the system for public interest. He concluded by cautioning against the development of frameworks that are or have the potential to be used as anti-competitive mechanisms that curtail innovation and therby do not serve public interest.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Malcolm Hutty associated with the European Internet Service Providers Association, Chair of the Intermediary Reliability Committee and London Internet Exchange brought in the intermediaries' perspective into the discussion. He argued for challenging the link between liability and forced cooperation, understated the problems arising from distinction without a difference and incentives built in within existing regimes. He raised issues arising from the expectancy on the part of those engaged in pre-emptive regulation of unwanted or undesirable content for intermediaries to automate content. Pointing to the increasing impact of intermediaries in our lives he underscored how exposing vast areas of people's lives to regulatory enforce, which enhances power of the state to implement public policy in the public interest and expect it to be executed, can have both positive and negative implications on issues such as privacy and freedom of expression.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He called out practices in regulatory regimes that focus on one size fits all solutions such as seeking automating filters on a massive scale and instead called for context and content specific solutions, that factor the commercial imperatives of intermediaries. He also addressed the economic consequences of liability frameworks to the industry including cost effectiveness of balancing rights, barriers to investments that arise in heavily regulated or new types of online services that are likely to be the targeted for specific enforcement measures and the long term costs of adapting old enforcement mechanisms that apply, while networks need to be updated to extend services to users.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The workshop presented evidence of a variety of approaches and the issues that arise in applying those approaches to impose liability on intermediaries. Two choices emerged towards developing frameworks for enforcing responsibility on intermediaries. We could either rely on a traditional approach, essentially court-based and off-line mechanisms for regulating behaviour and disputes. The downside of this is it will be slow and costly to the public purse. In particular, we will lose a great deal of the opportunity to extend regulation much more deeply into people's lives so as to implement the public interest.&lt;br /&gt;&lt;br /&gt;Alternatively, we could rely on intermediaries to develop and automate systems to control our online behaviour. While this approach does not suffer from efficiency problems of the earlier approach it does lack, both in terms of hindering the developments of the Information Society, and potentially yielding up many of the traditionally expected protections under a free and liberal society. The right approach lies somewhere in the middle and development of International Principles for Intermediary Liability, announced at the end of the workshop, is a step closer to the developing a balanced framework for liability.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;See the &lt;a class="external-link" href="http://www.intgovforum.org/cms/174-igf-2014/transcripts/1968-2014-09-03-ws206-an-evidence-based-liability-policy-framework-room-5"&gt;transcript on IGF website&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/report-on-cis-workshop-at-igf'&gt;https://cis-india.org/internet-governance/report-on-cis-workshop-at-igf&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>jyoti</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Privacy</dc:subject>
    
    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Internet Governance Forum</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Intermediary Liability</dc:subject>
    

   <dc:date>2014-09-24T10:47:30Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/regulation-of-cross-border-transfers-of-personal-data-in-asia">
    <title>CIS contributes to ABLI Compendium on Regulation of Cross-Border Transfers of Personal Data in Asia</title>
    <link>https://cis-india.org/internet-governance/blog/regulation-of-cross-border-transfers-of-personal-data-in-asia</link>
    <description>
        &lt;b&gt;The Asian Business Law Institute, based in Singapore, published a compendium on “Regulation of cross-border transfer of personal data in Asia”. This was part of an exercise to explore legal convergence in Asia around issues such as data protection, enforcement of foreign judgments and the principle of restructuring.&lt;/b&gt;
        
&lt;p style="text-align: justify;"&gt;The compendium contains 14 detailed reports written by legal practitioners, legal scholars and researchers in their respective jurisdictions, on the regulation of cross-border data transfers in the wider Asian region (Australia, China, Hong Kong SAR, India, Indonesia, Japan, South Korea, Macau SAR, Malaysia, New Zealand, Philippines, Singapore, Thailand, and Vietnam).&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;The compendium is intended to act as a springboard for the next phase of ABLI's project, which will be devoted to the in-depth study of the differences and commonalities between Asian legal systems on these issues and – where feasible – the drafting of recommendations and/or policy options to achieve convergence in this area of law in Asia.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify;"&gt;The chapter titled Jurisdictional Report India was authored by Amber Sinha and Elonnai Hickok. The compendium can be &lt;a class="external-link" href="http://abli.asia/PUBLICATIONS/Data-Privacy-Project"&gt;accessed here&lt;/a&gt;.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/regulation-of-cross-border-transfers-of-personal-data-in-asia'&gt;https://cis-india.org/internet-governance/blog/regulation-of-cross-border-transfers-of-personal-data-in-asia&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amber Sinha and Elonnai Hickok</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-06-03T15:10:11Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/economic-times-june-10-2018-sunil-abraham-why-npci-and-facebook-need-urgent-regulatory-attention">
    <title>Why NPCI and Facebook need urgent regulatory attention </title>
    <link>https://cis-india.org/internet-governance/blog/economic-times-june-10-2018-sunil-abraham-why-npci-and-facebook-need-urgent-regulatory-attention</link>
    <description>
        &lt;b&gt;The world’s oldest networked infrastructure, money, is increasingly dematerialising and fusing with the world’s latest networked infrastructure, the Internet. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article was published in the &lt;a class="external-link" href="https://economictimes.indiatimes.com/industry/banking/finance/banking/why-npci-and-facebook-need-urgent-regulatory-attention/articleshow/64522587.cms"&gt;Economic Times&lt;/a&gt; on June 10, 2018.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;As the network effects compound, disruptive acceleration hurtle us towards financial utopia, or dystopia. Our fate depends on what we get right and what we get wrong with the law, code and architecture, and the market.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Internet, unfortunately, has completely transformed from how it was first architected. From a federated, generative network based on free software and open standards, into a centralised, environment with an increasing dependency on proprietary technologies.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In countries like Myanmar, some citizens misconstrue a single social media website, Facebook, for the internet, according to LirneAsia research. India is another market where Facebook could still get its brand mistaken for access itself by some users coming online. This is Facebook put so many resources into the battle over Basics, in the run-up to India’s network neutrality regulation. an odd corporation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On hand, its business model is what some term surveillance capitalism. On the other hand, by acquiring WhatsApp and by keeping end-toend (E2E) encryption “on”, it has ensured that one and a half billion users can concretely exercise their right to privacy. At the time of the acquisition, WhatsApp founders believed Facebook’s promise that it would never compromise on their high standards of privacy and security. But 18 months later, Facebook started harvesting data and diluting E2E.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In April this year, my colleague Ayush Rathi and I wrote in Asia Times that WhatsApp no longer deletes multimedia on download but continues to store it on its servers. Theoretically, using the very same mechanism, Facebook could also be retaining encrypted text messages and comprehensive metadata from WhatsApp users indefinitely without making this obvious.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;My friend, Srikanth Lakshmanan, founder of the CashlessConsumer collective, is a keen observer of this space. He says in India, “we are seeing an increasing push towards a bank-led model, thanks to National Payments Corporation of India (NPCI) and its control over Unified Payments Interface (UPI), which is also known as the cashless layer of the India Stack.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;NPCI is best understood as a shape shifter. Arundhati Ramanathan puts it best when she says “depending on the time and context, NPCI is a competitor. It is a platform. It is a regulator. It is an industry association. It is a profitable non-profit. It is a rule maker. It is a judge. It is a bystander.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This results in UPI becoming, what Lakshmanan calls, a NPCI-club-good rather than a new generation digital public good. He also points out that NPCI has an additional challenge of opacity — “it doesn’t provide any metrics on transaction failures, and being a private body, is not subject to proactive or reactive disclosure requirements under the RTI.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Technically, he says, UPI increases fragility in our financial ecosystem since it “is a centralised data maximisation network where NPCI will always have the superset of data.” Given that NPCI has opted for a bank-led model in India, it is very unlikely that Facebook able to leverage its monopoly the social media market duopoly it shares with in the digital advertising market to become a digital payments monopoly.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, NCPI and Facebook both share the following traits — one, an insatiable appetite for personal information; two, a fetish for hypercentralisation; three, a marginal commitment to transparency, and four, poor track record as a custodian of consumer trust. The marriage between these like-minded entities has already had a dubious beginning.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Previously, every financial technology wanting direct access to the NPCI infrastructure had to have a tie-up with a bank. But for Facebook and Google, as they are large players, it was decided to introduce a multi-bank model. This was definitely the right thing to do from a competition perspective. But, unfortunately, the marriage between the banks and the internet giant was arranged by NPCI in an opaque process and WhatsApp was exempted from the full NPCI certification process for its beta launch.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Both NPCI and Facebook need urgent regulatory attention. A modern data protection law and a more proactive competition regulator is required for Facebook. The NPCI will hopefully also be subjected to the upcoming data protection law. But it also requires a range of design, policy and governance fixes to ensure greater privacy and security via data minimisation and decentralisation; greater accountability and transparency to the public; separation of powers for better governance and open access policies to prevent anti-competitive behaviour.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/economic-times-june-10-2018-sunil-abraham-why-npci-and-facebook-need-urgent-regulatory-attention'&gt;https://cis-india.org/internet-governance/blog/economic-times-june-10-2018-sunil-abraham-why-npci-and-facebook-need-urgent-regulatory-attention&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>sunil</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Social Media</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-06-12T02:07:42Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework">
    <title>The AI Task Force Report - The first steps towards India’s AI framework </title>
    <link>https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework</link>
    <description>
        &lt;b&gt;The Task Force on Artificial Intelligence was established by the Ministry of Commerce and Industry to leverage AI for economic benefits, and provide policy recommendations on the deployment of AI for India.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The blog post was edited by Swagam Dasgupta. &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/ai-task-force-report.pdf"&gt;Download &lt;strong&gt;PDF&lt;/strong&gt; here&lt;/a&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;span style="text-align: justify; "&gt;The Task Force’s Report, released on March 21st 2018, is a result of the combined expertise of members from different sectors&lt;/span&gt;&lt;a name="_ftnref1"&gt;&lt;/a&gt;&lt;span style="text-align: justify; "&gt; and examines how AI will benefit India. It sheds light on the Task Force’s perception of AI, the sectors in which AI can be leveraged in India, the challenges endemic to India and certain ethical considerations. It concludes with a set of policy recommendations for the government to leverage AI for the next five years. While acknowledging AI as a social and economic problem solver,&lt;/span&gt;&lt;a name="_ftnref2"&gt;&lt;/a&gt;&lt;span style="text-align: justify; "&gt; the Report attempts to answer three policy questions:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are the areas where government should play a role?&lt;/li&gt;
&lt;li&gt;How can AI improve quality of life and solve problems at scale for Indian citizens?&lt;/li&gt;
&lt;li&gt;What are the sectors that can generate employment and growth by the use of AI technology?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style="text-align: justify; "&gt;This blog will look at how the Task Force answered these three policy questions. In doing so, it gives an overview of salient aspects and reflects on the strengths and weaknesses of the Report.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Sectors of Relevance and Challenges&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;In order to navigate the outlined questions, the Report looks at ten sectors that it refers to as ‘domains of relevance to India’. Furthermore, it examines the use of AI along with its major challenges, and possible solutions for each sector. These sectors include: Manufacturing, FinTech, Agriculture, Healthcare, Technology for the Differently-abled, National Security, Environment, Public Utility Services, Retail and Customer Relationship, and Education.&lt;a name="_ftnref3"&gt;&lt;/a&gt; While these ten domains are part of the 16 domains of focus listed in the AITF’s web page,&lt;a name="_ftnref4"&gt;&lt;/a&gt; it would have been useful to know the basis on which these sectors were identified. A particular strength of the identified sectors is the consideration of technology for the differently abled as well as the recognition to the development of AI systems in spoken and sign languages in the Indian context.&lt;a name="_ftnref5"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Some of the problems endemic to India that were recognized include infrastructural barriers, managing scale and innovation, and the collection, validation and distribution of data.&lt;/span&gt;&lt;a name="_ftnref6"&gt;&lt;/a&gt;&lt;span&gt; The Task Force also noted the lack of consumer awareness, and inability of technology providers to explain benefits to end users as further challenges.&lt;/span&gt;&lt;a name="_ftnref7"&gt;&lt;/a&gt;&lt;span&gt; The Task Force — by putting the onus on the individual — seems to hint that the impediment to the uptake of technology is the inability of individuals to understand the benefits of the technology, rather than aspects such as poor design, opacity, or misuse of data and insights. Furthermore, although the Report recognizes the challenges associated to data in India and highlights the importance of quality and quantity of data; it overlooks the importance of data curation in creatinge reliable AI systems.&lt;/span&gt;&lt;a name="_ftnref8"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Although the Report examines challenges to AI in each sector, it fails to include all challenges that require addressal. For example, the report fails to acknowledge challenges such as the lack of appropriate certification systems for AI driven health systems and technologies.&lt;a name="_ftnref9"&gt;&lt;/a&gt; In the manufacturing sector, the Report fails to highlight contextual challenges associated with the use of AI. This includes the deployment of autonomous vehicles compared to the use of industrial robots.&lt;a name="_ftnref10"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On the use of AI in retail, the Report while examining consumer data and its respective regulatory policies, identified the issues to be related to the definition, discrimination, data breaches, digital products and safety awareness and reporting standards.&lt;a name="_ftnref11"&gt;&lt;/a&gt; In this, the Report is limited in its understanding of what categories of data can lead to discrimination and restricts mechanisms for transparency and accountability to data breaches. The Report could have also been more forward looking in its position on security — including security by design and security by default. Furthermore, these issues were noted only in the context of the retail sector and ideally should have been discussed across all sectors.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The challenges of utilizing AI for national security could have been examined beyond cost and capacity to include associated ethical and legal challenges, such as the need for legal backing. The use of AI in national security demands clear accountability and oversight, as it is a ground for legitimate state interference with fundamental rights such as privacy and freedom of expression. As such, there is a need for human rights impact assessments, as well as a need for such uses to be aligned with international human rights norms. Government initiatives that allow country-wide surveillance and AI decisions based on such data should ideally be implemented only after a comprehensive privacy law is in place and India’s surveillance regime has been revisited.&lt;a name="_ftnref12"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Recognizing the potential of AI to benefit the differently abled is one of the key takeaways from this section of the Report, which also highlights the need for AI inclusivity. AI in natural language generation and translation systems has the potential to help the large number of youth that are disabled or deprived.&lt;a name="_ftnref13"&gt;&lt;/a&gt; Therefore, AI could have a large positive impact through inclusive growth and empowerment.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Although the Report examines each of the ten domains in an attempt to provide an insight into the role the government can play, there is a lack of clarity about the role that each department is playing and will play with respect to AI. Even the section which lays down the relevant ministries for each of the ten domains fails to include key ministries and departments. For example, the Report does not identify the Ministry of Education, nor does it list the Ministry of Law for national security. The Report could also have identified the government departments that would be responsible for regulation and standardization, such as the Medical Council of India (healthcare), CII (manufacturing and retail), and RBI (fintech). The Report also does not recognize other developments around AI emerging out of the government. For example, the Draft National Digital Communications Policy (published on May 1, 2018) seeks to empower the Department of Telecommunications to provide a roadmap for AI and robotics.&lt;a name="_ftnref14"&gt;&lt;/a&gt; Along similar lines, the Department of Defence Production created a task force earlier this year to study the use of AI to accelerate military technology and economic growth.&lt;a name="_ftnref15"&gt;&lt;/a&gt; The government should look at building a cohesive AI government body, or clearly delineating the role of each ministry, in order to ensure harmonization going forward.&lt;/p&gt;
&lt;h3&gt;Areas in need of Government Intervention&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The Report also lists the grand challenges where government intervention is required. These include data collection and management, and the need for widespread expertise contributing to research, innovation, and response. However, while highlighting the need for AI experts from diverse backgrounds, it fails to include experts from law and policy in the discussion.&lt;a name="_ftnref16"&gt;&lt;/a&gt; And while identifying manufacturing, agriculture, healthcare and public utilities as areas where government intervention is needed, the Report fails to examine national security beyond noting it as an important domain for India, rather than treating it as a sector where government intervention is needed.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Participation in International Forums&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another relevant concern that the Report underscores is India’s scarce participation in global discussions around AI, whether as researchers, as AI developers, or through government engagement. The Report states that although Indian universities are making efforts to increase their presence at international AI conferences, they lag behind other nations. On participation by the government, it recommends a regular presence in international AI policy forums, emphasising the need for India’s active participation in global conversations around AI and international rulemaking.&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Key Enablers to AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The Report, while analysing the key enablers for AI deployment in India, states that positive societal attitudes will be the driving force behind the proliferation of AI.&lt;a name="_ftnref17"&gt;&lt;/a&gt; However, positive societal attitudes alone will not increase trust in AI; steps such as making the algorithms used by public bodies public and enacting a data protection law will be important in enabling trust beyond highlighting success stories.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Data and Data Marketplaces&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While the Report identifies data as a challenge where government intervention is needed, it also points to the Aadhaar ecosystem as an enabler. It states that Aadhaar will help in the proliferation of AI in three ways: first, as a creator of jobs related to the collection and digitization of data; second, as a collector of reliable data; and third, as a repository of Indian data. However, since the very constitutionality of Aadhaar is yet to be determined by the Supreme Court,&lt;a name="_ftnref18"&gt;&lt;/a&gt; the Task Force should have used caution in identifying Aadhaar as a definitive solution, especially while stating that Aadhaar, along with the Supreme Court judgement, has created adequate frameworks to protect consumer data. Additionally, the Task Force should have recognized the various concerns that have been voiced about Aadhaar, particularly in the context of the case before the Supreme Court.&lt;a name="_ftnref19"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;This section also proposes the creation of a Digital Data Marketplace. A data marketplace needs to be framed carefully so as to not create a situation where privacy becomes a right available to only those who can afford it.&lt;/span&gt;&lt;a name="_ftnref20"&gt;&lt;/a&gt;&lt;span&gt; It is concerning that the discussion on data protection and privacy in the Report is limited to policies and guidelines for businesses and not centered around the individual.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;&lt;strong&gt;Innovation and Patents&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Report states that Indian startups working in the field of AI must be encouraged, and that industry collaborations and funding must be taken up as a policy measure. One way to achieve this is by encouraging innovation, for instance by attaching a commercial incentive to it, such as IP rights. Although the Report calls for a stronger IP regime that protects and incentivises innovation, it remains ambiguous as to which aspect of IP rights — patents, trade secrets or copyrights — needs significant change.&lt;a name="_ftnref21"&gt;&lt;/a&gt; If the Report is specifically advocating for stronger patent rights in order to match those of China and the US, then it suggests that the Task Force fails to understand the finer aspects of Indian patent law and the history behind India’s stance on patenting, including the fact that Indian patent law excludes algorithms from being patented. Indian patent law, by setting a higher threshold for patenting computer related inventions (CRIs), ensures that only truly innovative inventions are granted patents.&lt;a name="_ftnref22"&gt;&lt;/a&gt; Given the controversies over CRIs that have dotted the Indian patent landscape,&lt;a name="_ftnref23"&gt;&lt;/a&gt; the Task Force would have done well to provide more clarity on the ‘how’ and ‘why’ of patenting in this sector, if that is the intent of this suggestion.&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Ethical AI framework&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Responsible AI&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In terms of establishing an ethical AI framework, the Task Force suggests measures such as making AI explainable, transparent, and auditable for biases. The Report notes that, with the increase in human and AI interaction, new standards need to be set for the deployment of AI, as well as industrial standards for robots. However, the Report does not go into the details of how AI could cause further bias based on identifiers such as gender and caste, or into the myriad concerns around privacy and security. This is especially a concern given that the Report envisions widespread use of AI in all major sectors. The Report looks at data as both a challenge and an enabler, but fails to explain the various ethical considerations behind the collection and use of data in the context of privacy, security and surveillance, or to account for unintended consequences. In laying out the ethical considerations associated with AI, the Report also does not distinguish between the use of AI by the public sector and the private sector. As the government is responsible for ensuring the rights of citizens and holds more power than the citizenry, the public sector needs to be more accountable in its use of AI. This is especially so where AI is proposed to be used for sovereign functions such as national security.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Privacy and Data&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Report also recognises the significance of the implementation of the Aadhaar Act&lt;a name="_ftnref24"&gt;&lt;/a&gt;, the privacy judgement&lt;a name="_ftnref25"&gt;&lt;/a&gt; and the proposed data protection laws&lt;a name="_ftnref26"&gt;&lt;/a&gt; for the development and use of AI in India. Yet the Report does not seem to recognize the importance of a robust and multi-faceted privacy framework, as it assumes that the Aadhaar Act, the Supreme Court judgement on privacy, and a potential privacy law have already created a basis for the safe and secure utilization and sharing of customer data.&lt;a name="_ftnref27"&gt;&lt;/a&gt; Although the Report tries to be an expansive examination of various aspects of AI for India, it unfortunately does not look in depth at the current issues and debates around AI, privacy and ethics, and it makes policy recommendations without appearing to fully reflect on their implementation and potential impact. Similar to the discussion paper by the Niti Aayog,&lt;a name="_ftnref28"&gt;&lt;/a&gt; this Report does not consider emerging principles of data protection, such as the right to explanation and the right to opt out of automated processing, which directly relate to AI.&lt;a name="_ftnref29"&gt;&lt;/a&gt; Furthermore, there is a lack of discussion of issues such as data minimisation and purpose limitation, which some big data and AI proponents argue against.&lt;a name="_ftnref30"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;&lt;strong&gt;Liability&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On the question of liability, the Report only states that specific liability mechanisms need to be worked out for certain categories of machines. It does not address which questions of liability should apply to all AI systems, or on whom the duty of care lies, not only in the case of robots but also in the case of automated decision making. Thus, there is a need for further thinking on mechanisms for determining liability and how these could apply to different types of AI (deep learning models and other machine learning models) and AI systems.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;AI and Employment &lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On the topic of jobs and employment, the Report states that AI will create more jobs than it displaces, as a result of the increase in the number of companies and avenues created by AI technologies. Additionally, the Report provides examples of jobs where AI could replace humans (autonomous driving, industrial robots, etc.) but does not go as far as envisioning what jobs could be created directly from this replacement. Though the Report recognizes emerging forms of work such as crowdsourcing platforms like Mturk&lt;a name="_ftnref31"&gt;&lt;/a&gt;, it fails to examine the impact of such models of work on workers and on traditional labour market structures and processes.&lt;a name="_ftnref32"&gt;&lt;/a&gt; Going forward, it will be important that the government and the private sector take the necessary steps to ensure that fair, protected, and fulfilling jobs are created alongside the adoption of AI. This will include revisiting national and organizational skilling programmes, labour laws, social benefit schemes, and relevant economic policies, and exploring best practices with respect to the adoption and integration of AI in work.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Education and Re-skilling&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Task Force emphasised the need for a change in the education curriculum as well as the need to reskill the labour force to ensure an AI-ready future. This level of reskilling will be a massive effort, and a thorough review and audit of existing skilling programmes in India is needed before new skilling programmes are established and financed. The Report also clarifies that the statistics used were based on a study of the IT component of the industry, and that a similar study is required to analyse AI’s effect on the automation component.&lt;a name="_ftnref33"&gt;&lt;/a&gt; Going forward, there is a need for a comprehensive study of labour intensive sectors, and of the formal and informal sectors, to develop evidence based policy responses.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Policy Recommendations &lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Task Force, in its policy recommendations, notes that the successful adoption of AI in India will depend on three factors: people, process and technology. However, it does not explain these three factors any further.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;National Artificial Intelligence Mission&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The most significant suggestion made in the Report is the establishment of the National Artificial Intelligence Mission (N-AIM) — a centralised nodal agency for coordinating and facilitating research and collaboration and providing economic impetus to AI startups.&lt;a name="_ftnref34"&gt;&lt;/a&gt; The mission, with a budget allocation of Rs 1,200 crore over five years, aims, among other things, to look at various ways to encourage AI research and deployment.&lt;a name="_ftnref35"&gt;&lt;/a&gt; Some of the suggestions include targeting and prototyping AI systems and setting up a generic AI test bed. These suggestions seem to draw inspiration from other countries, such as the US DARPA Challenge&lt;a name="_ftnref36"&gt;&lt;/a&gt; and Japan’s sandbox for self driving trucks.&lt;a name="_ftnref37"&gt;&lt;/a&gt; The establishment of N-AIM is a welcome step to encourage AI research and development on a national scale, and the availability of public funds will encourage more of both.&lt;a name="_ftnref38"&gt;&lt;/a&gt; Additionally, government engagement in AI projects has thus far been fragmented,&lt;a name="_ftnref39"&gt;&lt;/a&gt; and a centralised body will presumably bring about better coordination and harmonization. Initiatives such as the Capture the Flag competition,&lt;a name="_ftnref40"&gt;&lt;/a&gt; which centres on providing real datasets to catalyze innovation, will need to be implemented with appropriate safeguards in place.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Other recommendations&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There are other suggestions that are problematic — particularly that of funding “an inter-disciplinary large data integration center in pilot mode to develop an autonomous AI Machine that can work on multiple data streams in real time and provide relevant information and predictions to public across all domains.”&lt;a name="_ftnref41"&gt;&lt;/a&gt; Before such a project is developed and implemented, legal clarity is required on a number of factors, a few being data collection and use, and the accuracy and quality of the AI system. There is also a need to ensure that bias and discrimination have been accounted for, and that fairness, responsibility and liability have been defined, with consideration that this will be a government driven AI system. Additionally, such systems should be transparent by design and should include redress mechanisms for potential harms, for instance through the presence of a human in the loop or the existence of a kill switch. These should be addressed through ethical principles, standards, and regulatory frameworks.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The recommendations propose establishing operational standards for data storage and privacy, communication standards for autonomous systems, and standards to allow for interoperability between AI based systems. A significant lacuna in this list is the development of safety, accuracy, and quality standards for AI algorithms and systems.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Similarly, although the proposed public private partnership model for research and startups is a good idea, this initiative should be undertaken only after questions such as the implications of liability, ownership of IP and data, and the exclusion of critical sectors are thought through.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Furthermore, the suggestion to ‘fund a national level survey on identification of cluster of clean annotated data necessary for building effective AI systems’&lt;a name="_ftnref42"&gt;&lt;/a&gt; needs to recognize existing initiatives around open data, or use them as a starting point. The Report also does not clarify whether this survey would involve identifying data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The inconspicuous release of the Report, as well as the lack of a call for public comments&lt;a name="_ftnref43"&gt;&lt;/a&gt;, means that the Report does not incorporate or reflect the sentiments of the public, nor does it draw upon the expertise that exists in India on the topic of, and policies around, emerging technologies, which will have a pervasive and wide effect on society. The need for multi stakeholder engagement and input cannot be overstated. Nonetheless, the Report of the Task Force is a welcome step towards a definitive AI policy. The Task Force has attempted to answer the three policy questions keeping people, process and technology in mind; however, it could have provided greater detail about these factors. The Report, which is meant for a wider audience, would have done well to provide greater detail while also clarifying technical terms. On a definitional plane, a list of the technologies that the Task Force considered to be AI for this Report could also have helped keep its recommendations grounded in what is possible and plausible over five years.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Compared to the recent Niti Aayog Discussion Paper&lt;/span&gt;&lt;a name="_ftnref44"&gt;&lt;/a&gt;&lt;span&gt;, this Report misses out on a detailed explanation of AI and ethics; however, it does spend considerable time on education and the use of AI for the differently abled. Additionally, the Report’s statements on the democratization of development and equal access, as well as on assigning ownership and framing transparent rules for the usage of infrastructure, are a positive step towards making AI inclusive. Overall, the Report is a progressive step towards laying down India’s path forward in the field of Artificial Intelligence. The emphasis on India’s involvement in international rulemaking gives India an opportunity to be a leader of best practice in international forums by adopting forward looking and human rights respecting practices. Whether India will also become a strong contender in the AI race, with policies favouring the development of socio-economically beneficial and ethical-AI backed industries and services, is yet to be seen.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn1"&gt;&lt;/a&gt;&lt;span&gt; The Task Force consists of 18 members in total. Of these, 11 members are from the field of AI technology, spanning both research and industry, three are from the civil services, one is from healthcare research, one has an intellectual property law background, and two are from a finance background. The specializations of the members are not limited to one area, as the members have experience or education in various areas relevant to AI. &lt;/span&gt;&lt;a href="https://www.aitf.org.in/"&gt;https://www.aitf.org.in/&lt;/a&gt;&lt;span&gt; There is a notable lack of members from civil society. It may also be noted that only 2 of the 18 members are women.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn2"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 1, &lt;span&gt;http://dipp.nic.in/sites/default/files/Report_of_Task_Force_on_ArtificialIntelligence_20March2018_2.pdf&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn3"&gt;&lt;/a&gt; ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn4"&gt;&lt;/a&gt; The Artificial Intelligence Task Force https://www.aitf.org.in/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn5"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 8&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn6"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 9,10.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn7"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 9&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn8"&gt;&lt;/a&gt; ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn9"&gt;&lt;/a&gt; Artificial Intelligence in the Healthcare Industry in India https://cis-india.org/internet-governance/files/ai-and-healtchare-report&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn10"&gt;&lt;/a&gt; Artificial Intelligence in the Manufacturing and Services Sector https://cis-india.org/internet-governance/files/AIManufacturingandServices_Report_02.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn11"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 21.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn12"&gt;&lt;/a&gt; Submission to the Committee of Experts on a Data Protection Framework for India, Centre for Internet and Society https://cis-india.org/internet-governance/files/data-protection-submission&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn13"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 22&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn14"&gt;&lt;/a&gt; Draft National Digital Communications Policy-2018, http://www.dot.gov.in/relatedlinks/draft-national-digital-communications-policy-2018&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn15"&gt;&lt;/a&gt; Task force set up to study AI application in military, https://indianexpress.com/article/technology/tech-news-technology/task-force-set-up-to-study-ai-application-in-military-5049568/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn16"&gt;&lt;/a&gt; It is not just technical experts that are needed; ethical and legal experts, as well as domain experts, need to be part of the decision making process.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn17"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 31&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn18"&gt;&lt;/a&gt; Constitutional validity of Aadhaar: the arguments in Supreme Court so far, http://www.thehindu.com/news/national/constitutional-validity-of-aadhaar-the-arguments-in-supreme-court-so-far/article22752084.ece&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn19"&gt;&lt;/a&gt; ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn20"&gt;&lt;/a&gt; CIS Submission to TRAI Consultation on Free Data http://trai.gov.in/Comments_FreeData/Companies_n_Organizations/Center_For_Internet_and_Society.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn21"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 30&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn22"&gt;&lt;/a&gt; Section 3(k) of the Indian Patents Act provides that a mere mathematical or business method, or a computer programme or algorithm, cannot be patented.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn23"&gt;&lt;/a&gt; Patent Office Reboots CRI Guidelines Yet Again: Removes “novel hardware” Requirement, https://spicyip.com/2017/07/patent-office-reboots-cri-guidelines-yet-again-removes-novel-hardware-requirement.html&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn24"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 37&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn25"&gt;&lt;/a&gt;The Report on the Artificial Intelligence Task Force, Pg. 7&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn26"&gt;&lt;/a&gt; ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn27"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 8&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn28"&gt;&lt;/a&gt; National Strategy for Artificial Intelligence: &lt;a href="http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf"&gt;http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn29"&gt;&lt;/a&gt; Meaningful information and the right to explanation, Andrew D Selbst and Julia Powles, International Data Privacy Law, Volume 7, Issue 4, 1 November 2017, Pages 233–242&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn30"&gt;&lt;/a&gt; The Principle of Purpose Limitation and Big Data, https://www.researchgate.net/publication/319467399_The_Principle_of_Purpose_Limitation_and_Big_Data&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn31"&gt;&lt;/a&gt; M-Turk https://www.mturk.com/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn32"&gt;&lt;/a&gt; For example, lower minimum wage thresholds, no job security, etc. https://blogs.scientificamerican.com/guilty-planet/httpblogsscientificamericancomguilty-planet20110707the-pros-cons-of-amazon-mechanical-turk-for-scientific-surveys/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn33"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 41&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn34"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 46, 47&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn35"&gt;&lt;/a&gt; ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn36"&gt;&lt;/a&gt; The DARPA Robotics Challenge https://www.darpa.mil/program/darpa-robotics-challenge&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn37"&gt;&lt;/a&gt; Japan may set regulatory sandboxes to test drones and self driving vehicles http://techwireasia.com/2017/10/japan-may-set-regulatory-sandboxes-test-drones-self-driving-vehicles/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn38"&gt;&lt;/a&gt; Mariana Mazzucato, in her 2013 book The Entrepreneurial State, argued that it is the government that drives technological innovation. She stated that high-risk discovery and development were made possible by government spending, which private enterprises capitalised on once the difficult work was done.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn39"&gt;&lt;/a&gt;&lt;a href="https://tech.economictimes.indiatimes.com/news/technology/govt-of-karnataka-launches-centre-of-excellence-for-data-science-and-artificial-intelligence/61689977"&gt;https://tech.economictimes.indiatimes.com/news/technology/govt-of-karnataka-launches-centre-of-excellence-for-data-science-and-artificial-intelligence/61689977&lt;/a&gt;,https://analyticsindiamag.com/amaravati-world-centre-for-ai-data/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn40"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 47&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn41"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 49&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn42"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 47&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn43"&gt;&lt;/a&gt; The AI Task Force website has a provision for public comments, although it is only for the vision and mission and the domains mentioned on the website.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn44"&gt;&lt;/a&gt;National Strategy for Artificial Intelligence: &lt;a href="http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf"&gt;http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework'&gt;https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Elonnai Hickok, Shweta Mohandas and Swaraj Paul Barooah</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-06-27T14:32:56Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/a2k/blogs/ace-7-french-charter-cis-comment">
    <title>Comment by CIS at ACE on Presentation on French Charter on the Fight against Cyber-Counterfeiting</title>
    <link>https://cis-india.org/a2k/blogs/ace-7-french-charter-cis-comment</link>
    <description>
        &lt;b&gt;The seventh session of the World Intellectual Property Organization's Advisory Committee on Enforcement is being held in Geneva on November 30 and December 1, 2011. Pranesh Prakash responded to a presentation by Prof. Pierre Sirinelli of the École de droit de la Sorbonne, Université Paris 1 on 'The French Charter on the Fight against Cyber-Counterfeiting of December 16, 2009' with this comment.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Thank you, Chair.&amp;nbsp; I speak on behalf of the Centre for Internet and Society.&amp;nbsp; First, I would like to congratulate you on your re-election.&lt;br /&gt;&lt;br /&gt;And I would like to congratulate Prof. Sirinelli on his excellent presentation.&lt;br /&gt;&lt;br /&gt;I would like to flag a few points, though:&lt;/p&gt;
&lt;ol&gt;&lt;li&gt;One of the benefits of normal laws, as opposed to the soft/plastic laws, which he champions, is that normal laws are bound by procedures established by law, due process requirements, and principles of natural justice.&amp;nbsp; Unfortunately, the soft/plastic laws, which in essence are private agreements, are not.&lt;/li&gt;&lt;li&gt;The UN Special Rapporteur on the Freedom of Expression and Opinion made it clear in his report to the UN Human Rights Council that the Internet is now an integral part of citizens exercising their right to freedom of speech under national constitutions and under the Universal Declaration of Human Rights.&amp;nbsp; That report highlights that many initiatives on copyright infringement, including those of the French government with HADOPI and of the UK, actually contravene the Universal Declaration of Human Rights.&lt;/li&gt;&lt;li&gt;The right of privacy is also flagged by many as something that will have to be compromised if such private enforcement of copyright is encouraged.&lt;br /&gt;&lt;/li&gt;&lt;/ol&gt;
&lt;p&gt;I'd like to know Prof. Sirinelli's views on these three issues: due process, right of freedom of speech, and the right to privacy.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/a2k/blogs/ace-7-french-charter-cis-comment'&gt;https://cis-india.org/a2k/blogs/ace-7-french-charter-cis-comment&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>pranesh</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Access to Knowledge</dc:subject>
    
    
        <dc:subject>Copyright</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    
    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Intellectual Property Rights</dc:subject>
    
    
        <dc:subject>Piracy</dc:subject>
    
    
        <dc:subject>Censorship</dc:subject>
    
    
        <dc:subject>WIPO</dc:subject>
    

   <dc:date>2011-12-01T11:59:45Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/medianama-february-18-2021-amber-sinha-data-protection-bill-consent-clause-state-function">
    <title>How Function Of State May Limit Informed Consent: Examining Clause 12 Of The Data Protection Bill</title>
    <link>https://cis-india.org/internet-governance/blog/medianama-february-18-2021-amber-sinha-data-protection-bill-consent-clause-state-function</link>
    <description>
        &lt;b&gt;The collective implication of leaving out ‘proportionality’ from Clause 12 is to provide very wide discretionary powers to the state.&lt;/b&gt;
        &lt;p&gt;The blog post was &lt;a class="external-link" href="https://www.medianama.com/2022/02/223-data-protection-bill-consent-clause-state-function/"&gt;published in Medianama&lt;/a&gt; on February 18, 2022. This is the first of a two-part series by Amber Sinha.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;In 2018, hours after the Committee of Experts led by Justice Srikrishna Committee released their report and draft bill, I wrote &lt;a href="https://www.livemint.com/Opinion/zY8NPWoWWZw8AfI5JQhjmL/Draft-privacy-bill-and-its-loopholes.html"&gt;an opinion piece&lt;/a&gt; providing my quick take on what was good and bad about the bill. A section of my analysis focused on Clause 12 (then Clause 13) which provides for non-consensual processing of personal data for state functions. I called this provision a ‘carte-blanche’ which effectively allowed the state to process a citizen’s data for practically all interactions between them without having to deal with the inconvenience of seeking consent. My former colleague, Pranesh Prakash &lt;a href="https://twitter.com/pranesh/status/1023116679440621568"&gt;pointed out&lt;/a&gt; that this was not a correct interpretation of the provision as I had missed the significance of the word ‘necessary’ which was inserted to act as a check on the powers of the state. He also pointed out, correctly, that in its construction, this provision is equivalent to the position in European General Data Protection Regulation (Article 6 (i) (e)), and is perhaps even more restrictive.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While I agree with what Pranesh says above (his claims are largely factual, and there can be no basis for disagreement), my view of Clause 12 has not changed. While Clause 35 has been a focus of considerable discourse and analysis, for good reason, I continue to believe that Clause 12 remains among the most dangerous provisions of this bill, and I will try to unpack here, why.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Data Protection Bill 2021 has a chapter on the grounds for processing personal data, and one of those grounds is consent by the individual. The rest of the grounds deal with various situations in which personal data can be processed without seeking consent from the individual. Clause 12 lays down one of the grounds. It allows the state to process data without the consent of the individual in the following cases —&lt;/p&gt;
&lt;p&gt;a)  where it is necessary to respond to a medical emergency&lt;br /&gt;b)  where it is necessary for the state to provide a service or benefit to the individual&lt;br /&gt;c)  where it is necessary for the state to issue any certification, licence or permit&lt;br /&gt;d)  where it is necessary under any central or state legislation, or to comply with a judicial order&lt;br /&gt;e)  where it is necessary for any measures during an epidemic, outbreak or other threat to public health&lt;br /&gt;f)  where it is necessary for safety procedures during a disaster or breakdown of public order&lt;/p&gt;
&lt;p&gt;In order to carry out (b) and (c), there is also the added requirement that the state function must be authorised by law.&lt;/p&gt;
&lt;h2&gt;Twin restrictions in Clause 12&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;The use of the words ‘necessary’ and ‘authorised by law’ is intended to pose checks on the powers of the state. The first restriction seeks to limit actions to only those cases where the processing of personal data would be necessary for the exercise of the state function. This should mean that if the state function can be exercised without non-consensual processing of personal data, then it must be done so. Therefore, while acting under this provision, the state should only process my data if it needs to do so, to provide me with the service or benefit. The second restriction means that this would apply to only those state functions which are authorised by law, meaning only those functions which are supported by validly enacted legislation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;What we need to keep in mind regarding Clause 12 is that the requirement of ‘authorised by law’ does not mean that legislation must provide for that specific kind of data processing. It simply means that the larger state function must have legal backing. The danger is how these provisions may be used with broad mandates. If the activity in question is non-consensual collection and processing of, say, demographic data of citizens to create state resident hubs which will assist in the provision of services such as healthcare, housing, and other welfare functions; all that may be required is that the welfare functions are authorised by law.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Scope of privacy under Puttaswamy&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;It would be worthwhile, at this point, to delve into the nature of restrictions that the landmark Puttaswamy judgement discussed that the state can impose on privacy. The judgement clearly identifies the principles of informed consent and purpose limitation as central to informational privacy. As discussed repeatedly during the course of the hearings and in the judgement, privacy, like any other fundamental right, is not absolute. However, restrictions on the right must be reasonable in nature. In the case of Clause 12, the restrictions on privacy in the form of denial of informed consent need to be tested against a constitutional standard. In Puttaswamy, the bench ​was ​not ​required ​to ​provide ​a ​legal ​test ​to ​determine ​the ​extent ​and ​scope ​of the ​right ​to ​privacy, but they do provide sufficient ​guidance ​for ​us ​to ​contemplate ​how ​the ​limits ​and ​scope ​of ​the ​constitutional ​right ​to ​privacy ​could ​be ​determined ​in ​future ​cases.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Puttaswamy judgement clearly states that “the right to privacy is protected as an intrinsic part of the right to life and personal liberty under Article 21 and as a part of the freedoms guaranteed by Part III of the Constitution.” By locating the right not just in Article 21 but also in the entirety of Part III, the bench clearly requires that “the drill of various Articles to which the right relates must be scrupulously followed.” This means that where transgressions on privacy relate to different provisions in Part III, the different tests under those provisions will apply along with those in Article 21. For instance, where the restrictions relate to personal freedoms, the tests under both Article 19 (right to freedoms) and Article 21 (right to life and liberty) will apply.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the case of Clause 12, the three tests laid down by Justice Chandrachud are most operative —&lt;br /&gt;a) the existence of a “law”&lt;br /&gt;b) a “legitimate State interest”&lt;br /&gt;c) the requirement of “proportionality”.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The first test is already reflected in the use of the phrase ‘authorised by law’ in Clause 12. The test under Article 21 would imply that the function of the state should not merely be authorised by law, but that the law, in both its substance and procedure, must be ‘fair, just and reasonable.’ The next test is that of ‘legitimate state interest’. In its report, the Joint Parliamentary Committee places emphasis on Justice Chandrachud’s use of “allocation of resources for human development” in an illustrative list of legitimate state interests. The report claims that the ground, functions of the state, thus satisfies the legitimate state interest. We do not dispute this claim.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Proportionality and Clause 12&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;It is the final test of ‘proportionality’ articulated by the Puttaswamy judgement, which is most operative in this context. Unlike Clauses 42 and 43 which include the twin tests of necessity and proportionality, the committee has chosen to only employ one ground in Clause 12. Proportionality is a commonly employed ground in European jurisprudence and common law countries such as Canada and South Africa, and it is also an integral part of Indian jurisprudence. As commonly understood, the proportionality test consists of three parts —&lt;/p&gt;
&lt;p&gt;a)  the limiting measures must be carefully designed, or rationally connected, to the objective&lt;br /&gt;b)  they must impair the right as little as possible&lt;br /&gt;c)  the effects of the limiting measures must not be so severe on individual or group rights that the legitimate state interest, albeit important, is outweighed by the abridgement of rights.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The first test is similar to the test of proximity under Article 19. The test of ‘necessity’ in Clause 12 must be viewed in this context. It must be remembered that the test of necessity is not limited to only situations where it may not be possible to obtain consent while providing benefits. My reservations with the sufficiency of this standard stem from observations made in the report, as well as the relatively small amount of jurisprudence on this term in Indian law.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Srikrishna Report interestingly mentions three kinds of scenarios where consent should not be required — where it is not appropriate, necessary, or relevant for processing. The report goes on to give an example of inappropriateness. In cases where data is being gathered to provide welfare services, there is an imbalance in power between the citizen and the state. Having made that observation, the committee inexplicably arrives at a conclusion that the response to this problem is to further erode the power available to citizens by removing the need for consent altogether under Clause 12. There is limited jurisprudence on the standard of ‘necessity’ under Indian law. The Supreme Court has articulated this test as ‘having reasonable relation to the object the legislation has in view.’ If we look elsewhere for guidance on how to read ‘necessity’, the ECHR in Handyside v United Kingdom held it to be neither “synonymous with indispensable” nor does it have the “flexibility of such expressions as admissible, ordinary, useful, reasonable or desirable.” In short, there must be a pressing social need to satisfy this ground.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, the other two tests of proportionality do not find a mention in Clause 12 at all. There is no requirement of ‘narrow tailoring’, that the scope of non-consensual processing must impair the right as little as possible. It is doubly unfortunate that this test does not find a place, as unlike necessity, ‘narrow tailoring’ is a test well understood in Indian law. This means that while there is a requirement to show that processing personal data was necessary to provide a service or benefit, there is no requirement to process data in a way that there is minimal non-consensual processing. The fear is that as long as there is a reasonable relation between processing data and the object of the function of state, state authorities and other bodies authorised by it, do not need to bother with obtaining consent.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Similarly, the third test of proportionality is also not represented in this provision. It provides a test between the abridgement of individual rights and legitimate state interest in question, and it requires that the first must not outweigh the second. The absence of the proportionality test leaves Clause 12 devoid of any such consideration. Therefore, as long as the test of necessity is met under this law, it need not evaluate the denial of consent against the service or benefit that is being provided.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The collective implication of leaving out ‘proportionality’ from Clause 12 is to provide very wide discretionary powers to the state, by setting the threshold to circumvent informed consent extremely low. In the next post, I will demonstrate the ease with which Clause 12 can allow indiscriminate data sharing by focusing on the Indian government’s digital healthcare schemes.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/medianama-february-18-2021-amber-sinha-data-protection-bill-consent-clause-state-function'&gt;https://cis-india.org/internet-governance/blog/medianama-february-18-2021-amber-sinha-data-protection-bill-consent-clause-state-function&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Data Governance</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Data Protection</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2022-03-01T14:56:49Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/medianama-february-21-2022-amber-sinha-data-protection-bill-digital-healthcare-case-study">
    <title>Clause 12 Of The Data Protection Bill And Digital Healthcare: A Case Study</title>
    <link>https://cis-india.org/internet-governance/blog/medianama-february-21-2022-amber-sinha-data-protection-bill-digital-healthcare-case-study</link>
    <description>
        &lt;b&gt;In light of the state’s emerging digital healthcare apparatus, how does Clause 12 alter the consent and purpose limitation model?&lt;/b&gt;
        &lt;p&gt;The blog post was &lt;a class="external-link" href="https://www.medianama.com/2022/02/223-data-protection-bill-digital-healthcare-case-study/"&gt;published in Medianama&lt;/a&gt; on February 21, 2022. This is the second in a two-part series by Amber Sinha.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;In the &lt;a href="https://www.medianama.com/2022/02/223-data-protection-bill-consent-clause-state-function/"&gt;previous post&lt;/a&gt;, I looked at provisions on non-consensual data processing for state functions under the most recent version of recommendations by the Joint Parliamentary Committee on India’s Data Protection Bill (DPB). The true impact of these provisions can only be appreciated in light of ongoing policy developments and real-life implications.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;To appreciate the significance of the dilutions in Clause 12, let us consider the Indian state’s range of schemes promoting digital healthcare. In July 2018, NITI Aayog, a central government policy think tank in India released a strategy and approach paper (Strategy Paper) on the formulation of the National Health Stack which envisions the creation of a federated application programming interface (API)-enabled health information ecosystem. While the Ministry of Health and Family Welfare has focused on the creation of Electronic Health Records (EHR) Standards for India during the last few years and also identified a contractor for the creation of a centralised health information platform (IHIP), this Strategy Paper advocates a completely different approach, which is described as a Personal Health Records (PHR) framework. In 2021, the National Digital Health Mission (NDHM) was launched under which a citizen shall have the option to obtain a digital health ID. A digital health ID is a unique ID and will carry all health records of a person.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;A Stack Model for Big Data Ecosystem in Healthcare&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;A stack model as envisaged in the Strategy Paper, consists of several layers of open APIs connected to each other, often tied together by a unique health identifier. The open nature of APIs has the advantage that it allows public and private actors to build solutions on top of it, which are interoperable with all parts of the stack. It is however worth considering both the ‘openness’ and the role that the state plays in it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Even though the APIs are themselves open, they are a part of a pre-decided technological paradigm, built by private actors and blessed by the state. Even though innovators can build on it, the options available to them are limited by the information architecture created by the stack model. When such a technological paradigm is created for healthcare reform and health data, the stack model poses additional challenges. By tying the stack model to the unique identity, without appropriate processes in place for access control, siloed information, and encrypted communication, the stack model poses tremendous privacy and security concerns. The broad language under Clause 12 of the DPB needs to be looked at in this context.&lt;/p&gt;
&lt;p&gt;Clause 12 allows non-consensual processing of personal data where it is necessary “for the performance of any function of the state authorised by law” in order to provide a service or benefit from the State. In the previous post, I had highlighted the import of the use of only ‘necessity’ to the exclusion of ‘proportionality’. Now, we need to consider its significance in light of the emerging digital healthcare apparatus being created by the state.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The National Health Stack and National Digital Health Mission together envision an intricate system of data collection and exchange which in a regulatory vacuum would ensure unfettered access to sensitive healthcare data for both the state and private actors registered with the platforms. The Stack framework relies on repositories where data may be accessed from multiple nodes within the system. Importantly, the Strategy Paper also envisions health data fiduciaries to facilitate consent-driven interaction between entities that generate the health data and entities that want to consume the health records for delivering services to the individual. The cast of characters involve the National Health Authority, health care providers and insurers who access the National Health Electronic Registries, unified data from different programmes such as National Health Resource Repository (NHRR), NIN database, NIC and the Registry of Hospitals in Network of Insurance (ROHINI), private actors such as Swasth, iSpirt who assist the Mission as volunteers. The currency that government and private actors are interested in is data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The promised benefits of healthcare data in an anonymised and aggregate form range from Disease Surveillance to Pharmacovigilance as well as Health Schemes Management Systems and Nutrition Management, benefits which have only been more acutely emphasised during the pandemic. However, the pandemic has also normalised the sharing of sensitive healthcare data with a variety of actors, without much thinking on much-needed data minimisation practises.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The potential misuses of healthcare data include greater state surveillance and control, predatory and discriminatory practices by private actors which rely on Clause 12 to do away with even the pretense of informed consent so long as the processing of data is deemed necessary by the state and its private sector partners to provide any service or benefit.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Subclause (e) in Clause 12, which was added in the last version of the Bill drafted by MeitY and has been retained by the JPC, allows processing wherever it is necessary for ‘any measures’ to provide medical treatment or health services during an epidemic, outbreak or threat to public health. Yet again, the overly-broad language used here is designed to ensure that any annoyances of informed consent can be easily brushed aside wherever the state intends to take any measures under any scheme related to public health.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Effectively, how does the framework under Clause 12 alter the consent and purpose limitation model? Data protection laws introduce an element of control by tying purpose limitation to consent. Individuals provide consent to specified purposes, and data processors are required to respect that choice. Where there is no consent, the purposes of data processing are sought to be limited by the necessity principle in Clause 12. The state (or authorised parties) must be able to demonstrate necessity to the exercise of state function, and data must only be processed for those purposes which flow out of this necessity. However, unlike the consent model, this provides an opportunity to keep reinventing purposes for different state functions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the absence of a data protection law, data collected by one agency is shared indiscriminately with other agencies and used for multiple purposes beyond the purpose for which it was collected. The consent and purpose limitation model would have addressed this issue. But, by having a low threshold for non-consensual processing under Clause 12, this form of data processing is effectively being legitimised.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/medianama-february-21-2022-amber-sinha-data-protection-bill-digital-healthcare-case-study'&gt;https://cis-india.org/internet-governance/blog/medianama-february-21-2022-amber-sinha-data-protection-bill-digital-healthcare-case-study&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Data Governance</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Data Protection</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2022-03-01T15:07:44Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/hindustan-times-pranesh-prakash-april-3-2017-aadhaar-marks-a-fundamental-shift-in-citizen-state-relations">
    <title>Aadhaar marks a fundamental shift in citizen-state relations: From ‘We the People’ to ‘We the Government’</title>
    <link>https://cis-india.org/internet-governance/blog/hindustan-times-pranesh-prakash-april-3-2017-aadhaar-marks-a-fundamental-shift-in-citizen-state-relations</link>
    <description>
        &lt;b&gt;Your fingerprints, iris scans, details of where you shop. Compulsory Aadhaar means all this data is out there. And it’s still not clear who can view or use it.&lt;/b&gt;
        &lt;p&gt;The article was published in the &lt;a class="external-link" href="http://www.hindustantimes.com/india-news/what-s-really-happening-when-you-swipe-your-aadhaar-card-to-make-a-payment/story-2fLTO5oNPhq1wyvZrwgNgJ.html"&gt;Hindustan Times&lt;/a&gt; on April 3, 2017.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p style="text-align: center; "&gt;&lt;img src="https://cis-india.org/home-images/Aaadhaar.png" alt="Aadhaar" class="image-inline" title="Aadhaar" /&gt;&lt;br /&gt;Until recently, people were allowed to opt out of Aadhaar and withdraw consent to have their data stored. This is no longer going to be an option.&lt;br /&gt;(Siddhant Jumde / HT Illustration)&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;Imagine you’re walking down the street and you point the camera on your phone at a crowd of people in front of you. An app superimposes on each person’s face a partially-redacted name, date of birth, address, whether she’s undergone police verification, and, of course, an obscured Aadhaar number.&lt;br /&gt;&lt;br /&gt;OnGrid, a company that bills itself as a “trust platform” and offers “to deliver verifications and background checks”, used that very imagery in an advertisement last month. Its website notes that “As per Government regulations, it is mandatory to take consent of the individual while using OnGrid”, but that is a legal requirement, not a technical one.&lt;br /&gt;&lt;br /&gt;Since every instance of use of Aadhaar for authentication or for financial transactions leaves behind logs in the Unique Identification Authority of India’s (UIDAI) databases, the government can potentially have very detailed information about everything from the your medical purchases to your use of video-chatting software. The space for digital identities as divorced from legal identities gets removed. Clearly, Aadhaar has immense potential for profiling and surveillance. Our only defence: law that is weak at best and non-existent at worst.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Aadhaar Act and Rules don’t limit the information that can be gathered from you by the enrolling agency; it doesn’t limit how Aadhaar can be used by third parties (a process called ‘seeding’) if they haven’t gathered their data from UIDAI; it doesn’t require your consent before third parties use your Aadhaar number to collate records about you (eg, a drug manufacturer buying data from various pharmacies, and creating profiles using Aadhaar).&lt;br /&gt;&lt;br /&gt;It even allows your biometrics to be shared if it is “in the interest of national security”. The law offers provisions for UIDAI to file cases (eg, for multiple enrollments), but it doesn’t allow citizens to file a case against private parties or the government for misuse of Aadhaar or identity fraud, or data breach.&lt;br /&gt;&lt;br /&gt;It is also clear that the government opposes any privacy-related improvements to the law. After debating the Aadhaar Bill in March 2016, the Rajya Sabha passed an amendment by MP Jairam Ramesh that allowed people to opt out of Aadhaar, and withdraw their consent to UIDAI storing their data, if they had other means of proving their identity (thus allowing Aadhaar to remain an enabler).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;But that amendment, as with all amendments passed in the Rajya Sabha, was rejected by the Lok Sabha, allowing the government to make Aadhaar mandatory, and depriving citizens of consent. While the Aadhaar Act requires a person’s consent before collecting or using Aadhaar-provided details, it doesn’t allow for the revocation of that consent.&lt;br /&gt;&lt;br /&gt;In other countries, data security laws require that a person be notified if her data has been breached. In response to an RTI application asking whether UIDAI systems had ever been breached, the Authority responded that the information could not be disclosed for reasons of “national security”.&lt;br /&gt;&lt;br /&gt;The citizen must be transparent to the state, while the state will become more opaque to the citizen.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;How Did Aadhaar Change?&lt;/h2&gt;
&lt;table class="invisible"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: justify; "&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;How did Aadhaar become the behemoth it is today, with it being mandatory for hundreds of government programmes, and even software like Skype enabling support for it?&lt;/p&gt;
&lt;p&gt;The first detailed look one had at the UID project was through an internal UIDAI document marked ‘Confidential’ that was leaked through WikiLeaks in November 2009. That 41-page dossier is markedly different from the 170-page ‘Technology and Architecture’ document that UIDAI has on its website now, but also similar in some ways.&lt;/p&gt;
&lt;/td&gt;
&lt;td&gt;&lt;img src="http://www.hindustantimes.com/rf/image_size_960x540/HT/p2/2017/04/01/Pictures/_36723476-16e4-11e7-85c6-0f0e633c038c.jpg" /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p style="text-align: justify; "&gt;In neither of those is the need for Aadhaar properly established. Only in November 2012 — after scholars like Reetika Khera pointed out UIDAI’s fundamental misunderstanding of leakages in the welfare delivery system — was the first cost-benefit analysis commissioned, by which time UIDAI had already spent ₹28 billion. That same month, Justice KS Puttaswamy, a retired High Court judge, filed a PIL in the Supreme Court challenging Aadhaar’s constitutionality, wherein the government has argued privacy isn’t a fundamental right.&lt;/p&gt;
&lt;blockquote class="pullquote" style="text-align: justify; "&gt;Every time you use Aadhaar, you leave behind logs in the UIDAI databases. This means that the government can potentially have very detailed information about everything from your medical purchases to your use of video-chatting software.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;Even today, whether the ‘deduplication’ process — using biometrics to ensure the same person can’t register twice — works properly is a mystery, since UIDAI hasn’t published data on this since 2012. Instead of welcoming researchers to try to find flaws in the system, UIDAI recently filed an FIR against a journalist doing so.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;At least in 2009, UIDAI stated it sought to prevent anyone from “[e]ngaging in or facilitating profiling of any nature for anyone or providing information for profiling of any nature for anyone”, whereas the 2014 document doesn’t. As OnGrid’s services show, the very profiling that the UIDAI said it would prohibit is now seen as a feature that all, including private companies, may exploit.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;UID has changed in other ways too. In 2009, it was a system that never sent out any information other than ‘Yes’ or ‘No’, which it did in response to queries like ‘Is Pranesh Prakash the name attached to this UID number?’, ‘Is April 1, 1990 his date of birth?’, or ‘Does this fingerprint match this UID number?’.&lt;/p&gt;
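The 2009-era query-and-response model described above can be illustrated with a minimal sketch (all names, numbers and interfaces here are hypothetical, for illustration only — UIDAI's actual systems are not public in this form): the registry answers only a boolean match query, and never returns the stored record itself.

```python
# Hypothetical sketch of a 'Yes/No'-only verification model.
# The registry stores records internally but exposes only match queries,
# so a requester learns nothing beyond whether a claimed value matches.

REGISTRY = {
    "9999-1234-5678": {"name": "Pranesh Prakash", "dob": "1990-04-01"},
}

def verify(uid: str, field: str, claimed_value: str) -> bool:
    """Return True only if the claimed value matches the stored record.

    A wrong guess yields 'No' without revealing the correct value,
    and an unknown UID is indistinguishable from a mismatch.
    """
    record = REGISTRY.get(uid)
    if record is None:
        return False
    return record.get(field) == claimed_value

# e.g. 'Is Pranesh Prakash the name attached to this UID number?'
print(verify("9999-1234-5678", "name", "Pranesh Prakash"))  # True
print(verify("9999-1234-5678", "dob", "1991-01-01"))        # False
```

By contrast, the e-KYC model described next has the registry return the demographic record itself, which is a fundamentally different (and weaker) privacy posture.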
&lt;p style="text-align: justify; "&gt;With the addition of e-KYC (wherein UIDAI provides your demographic details to the requester) and Aadhaar-enabled payments to the plan in 2012, the fundamentals of Aadhaar changed. This has made Aadhaar less secure.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Security Concerns&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;With Aadhaar Pay, due to be launched on April 14, a merchant will ask you to enter your Aadhaar number into her device, and then to provide your biometrics — typically a fingerprint, which will serve as your ‘password’, resulting in a money transfer from your Aadhaar-linked bank account.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Basic information security theory requires that even if the identifier (username, Aadhaar number etc) is publicly known — millions of people’s names and Aadhaar numbers have been published on dozens of government portals — the password must be secret. That’s how most logins work, and that’s how debit and credit cards work. How are you or UIDAI going to keep your biometrics secret?&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In 2015, researchers at Carnegie Mellon captured the iris scans of a driver using a car’s side-view mirror from distances of up to 40 feet. In 2013, German hackers fooled Apple’s iPhone fingerprint sensor by replicating a fingerprint from a photo taken off a glass held by an individual. They even replicated the German Defence Minister’s fingerprints from photographs she herself had put online. Your biometrics can’t be kept secret.&lt;/p&gt;
&lt;blockquote class="pullquote" style="text-align: justify; "&gt;Typically, even if your username (in this case, Aadhaar number) is publicly known, your password must be secret. That’s how most logins work, and that’s how debit and credit cards work. How are you or UIDAI going to keep your biometrics secret?&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;In the US, in a security breach of 21.5 million government employees’ personnel records in 2015, 5.2 million employees’ fingerprints were copied. If that breach had happened in India, those fingerprints could be used in conjunction with Aadhaar numbers not only for large-scale identity fraud, but also to steal money from people’s bank accounts.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;All ‘passwords’ should be replaceable. If your credit card gets stolen, you can block it and get a new card. If your Aadhaar number and fingerprint are leaked, you can’t change them, and you can’t block them.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The answer for Aadhaar, too, is to stop relying on biometrics alone for authentication and authorisation, and to do away with the centralised biometrics database. This requires a fundamental overhaul of the UID project.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Aadhaar marks a fundamental shift in citizen-state relations: from ‘We the People’ to ‘We the Government’. If the rampant misuse of electronic surveillance powers and wilful ignorance of the law by the state is any precedent, the future looks bleak. The only way to protect against us devolving into a total surveillance state is to improve rule of law, to strengthen our democratic institutions, and to fundamentally alter Aadhaar. Sadly, the political currents are not only not favourable, but dragging us in the opposite direction.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/hindustan-times-pranesh-prakash-april-3-2017-aadhaar-marks-a-fundamental-shift-in-citizen-state-relations'&gt;https://cis-india.org/internet-governance/blog/hindustan-times-pranesh-prakash-april-3-2017-aadhaar-marks-a-fundamental-shift-in-citizen-state-relations&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>pranesh</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Biometrics</dc:subject>
    
    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-04-04T16:10:06Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/the-times-of-india-april-6-2017-umesh-yadav-bengaluru-cops-twitter-handle-in-ethical-storm">
    <title>Bengaluru cops' twitter handle in ethical storm</title>
    <link>https://cis-india.org/internet-governance/news/the-times-of-india-april-6-2017-umesh-yadav-bengaluru-cops-twitter-handle-in-ethical-storm</link>
    <description>
        &lt;b&gt;The city's privacy activists are among the most strident in trying to prevent the Union government from gaining unprecedented access to citizens' personal information through Aadhaar. But in their own backyard, Bengaluru police have been publishing on Twitter the phone numbers of thousands of citizens reporting various crimes such as gambling on the streets, random quarrels and harassment of women.&lt;/b&gt;
        &lt;p&gt;The article by Umesh Yadav was &lt;a class="external-link" href="http://economictimes.indiatimes.com/news/politics-and-nation/bengaluru-cops-twitter-handle-in-ethical-storm/articleshow/58042187.cms"&gt;published in the Times of India&lt;/a&gt; on April 6, 2017. Pranesh Prakash was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The police control room has put out more than 46,000 tweets since April 2015 containing the numbers of complainants calling the emergency number 100. The phone numbers of citizens reaching the control room through Bengaluru police's new emergency &lt;a href="http://economictimes.indiatimes.com/topic/mobile-application" target="_blank"&gt;mobile application&lt;/a&gt;, Suraksha, too are being published through this handle.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Thankfully, the Twitter handle, @BCPCR, had a mere 66 followers as of the evening of April 5, nearly 30 per cent of which were various police stations in the city. On Wednesday evening, the police closed the account from public view.&lt;br /&gt; &lt;br /&gt; ET has screenshots of tweets from the account. A senior police officer at Bengaluru police's Command Control was unapologetic about the breach of privacy. The tweets are generated automatically and meant to ‘show’ the number of calls received by the control room and the number of people using the new app, he said.&lt;br /&gt; &lt;br /&gt; On the matter of compromising the safety of the complainants, the officer said, "It is obvious that the accused will know who registered the complaint and privacy does not matter here."&lt;br /&gt; &lt;br /&gt; Expectedly, privacy and law experts are indignant.&lt;br /&gt; &lt;br /&gt; "This is horrible and unpardonable," said Supreme Court advocate KV Dhananjay. "The fact that the police did not consider it necessary to ask for permission before broadcasting someone's identity shows how insensitive the Police Commissioner's office has become to the privacy concerns of our society." Pranesh Prakash, Policy Director at the &lt;a href="http://economictimes.indiatimes.com/topic/Centre-for-Internet-and-Society" target="_blank"&gt;Centre for Internet and Society&lt;/a&gt;, who has been at the forefront of the campaign against any potential misuse of Aadhaar, too said the "police officer who ordered to create such an account should be held responsible if any harm comes to a complainant."&lt;br /&gt; &lt;br /&gt; Complainants ET spoke with were startled about the abuse of their privacy. Gowda, a complainant who had informed the police control room about the sale of cigarettes within 100 metres of a school, had specifically requested the police to not disclose his identity.
&lt;br /&gt; &lt;br /&gt; "(This is why) it is better to keep quiet when you see lawbreakers," he said on hearing that Bengaluru police had published his phone number on Twitter.&lt;br /&gt; &lt;br /&gt; "This is injustice and this is the reason why people are scared to inform the police of crimes. If the accused send people to beat me, what should I do?" said Dhanusha, who had called the control room about some teenagers who were teasing girls at a bus stop. The police arrived and took the boys in. She, too, is now worried. "If the accused get my number, they are going to harass me. The police do not have any right to display our phone numbers in public."&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/the-times-of-india-april-6-2017-umesh-yadav-bengaluru-cops-twitter-handle-in-ethical-storm'&gt;https://cis-india.org/internet-governance/news/the-times-of-india-april-6-2017-umesh-yadav-bengaluru-cops-twitter-handle-in-ethical-storm&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Social Media</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-04-07T02:38:24Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/india-today-neha-vashishth-april-6-2017-privacy-what-bengaluru-police-leaks-phone-numbers-on-twitter">
    <title>Privacy, what? Bengaluru police leaks 46,000 phone numbers on Twitter</title>
    <link>https://cis-india.org/internet-governance/news/india-today-neha-vashishth-april-6-2017-privacy-what-bengaluru-police-leaks-phone-numbers-on-twitter</link>
    <description>
&lt;b&gt;Bengaluru police made a massive goof-up by releasing, since April 2015, the private information of people who called 100 to complain, and was seemingly unapologetic about the breach of privacy.&lt;/b&gt;
&lt;p&gt;The article by Neha Vashishth was &lt;a class="external-link" href="http://indiatoday.intoday.in/story/bengaluru-police-twitter-breach-privacy-phone-numbers/1/922183.html"&gt;published by India Today&lt;/a&gt; on April 6, 2017. Pranesh Prakash was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;We all love our privacy, don't we?&lt;/p&gt;
&lt;p&gt;We use various locking apps and hide our private pictures on Facebook, Twitter etc, and only share what we want the world to see. But sometimes, even after countless efforts, we end up losing control of our information on the internet. After all, a breach of privacy is the greatest nightmare one can have.&lt;/p&gt;
&lt;p&gt;Bengaluru police goofed up too when it came to handling the privacy concerns of Bengaluru citizens. The police department posted on its Twitter handle (@BCPCR) the phone numbers of thousands of citizens who called 100 to complain about harassment, quarrels, gambling etc.&lt;/p&gt;
&lt;p&gt;The police had posted over 46,000 tweets since April 2015, sharing the information of people who called 100 or used the app known as 'Suraksha' to lodge complaints. The account was made private as soon as the matter escalated.&lt;/p&gt;
&lt;p&gt;The police were unapologetic regarding the matter and said that the tweets were auto-generated from their Twitter handle, @BCPCR.&lt;/p&gt;
&lt;p&gt;Pranesh  Prakash, Policy Director at the Centre for Internet and Society said  the "police officer who ordered to create such an account should be held  responsible if any harm comes to a complainant."&lt;/p&gt;
&lt;p&gt;This not only amounted to a major breach of the complainants' privacy but also put their lives at risk. The incident only shows how little regard remains today for privacy and the sensitivity of such matters.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/india-today-neha-vashishth-april-6-2017-privacy-what-bengaluru-police-leaks-phone-numbers-on-twitter'&gt;https://cis-india.org/internet-governance/news/india-today-neha-vashishth-april-6-2017-privacy-what-bengaluru-police-leaks-phone-numbers-on-twitter&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-04-07T02:57:49Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
