<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">
<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 931 to 945.</description>
  
  
  
  
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/deccan-herald-chetana-divya-vasudev-october-4-2016-an-appening-world"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/privacy/privacy-aba-conference"/>
      <rdf:li rdf:resource="https://cis-india.org/all-india-privacy-symposium-webcast"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/all-india-privacy-symposium"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/privacy-symposium"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/economic-times-may-29-mugdha-variyar-alexas-recording-leak-in-us-echoes-privacy-issues-here"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/ai-in-india-a-policy-agenda"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/first-post-october-12-2017-ahead-of-data-protection-law-roll-out-experts-caution-that-it-shouldnt-limit-collection-and-use-of-data"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/bloomberg-quint-nishant-sharma-september-27-2018-after-sc-setback-fintech-firms-await-clarity-on-aadhaar"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/biometric-update-june-26-2021-chris-burt-advanced-biometric-technologies-and-new-market-entries-tackle-fraud-chase-digital-id-billions"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/abli-privacy-workshop"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/economic-times-july-23-2017-amber-sinha-aadhar-privacy-is-not-a-unidimensional-concept"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/electronic-frontier-foundation-jyoti-panday-june-1-2017-aadhaar-ushering-in-a-commercialized-era-of-surveillance-in-india"/>
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/news/deccan-herald-chetana-divya-vasudev-october-4-2016-an-appening-world">
    <title>An 'app'ening world</title>
    <link>https://cis-india.org/internet-governance/news/deccan-herald-chetana-divya-vasudev-october-4-2016-an-appening-world</link>
    <description>
        &lt;b&gt;A ‘forward’ has been doing the rounds on WhatsApp about privacy concerns relating to the instant messaging app, which is asking for permission to share user data with Facebook.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Chetana Divya Vasudev was published in &lt;a class="external-link" href="http://www.deccanherald.com/content/573852/an-appening-world.html"&gt;Deccan Herald&lt;/a&gt; on October 4, 2016. Rohini was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;In the WhatsApp notification, asking users to agree to the terms and  conditions again, the option to share these user details to help improve  ads on Facebook is already selected. Those who are uncomfortable  parting with this information have to uncheck it before clicking on the  ‘I agree’ button.&lt;br /&gt;&lt;br /&gt;“Agreeing to this would mean Facebook can see  who you’re chatting with and what you’re talking about,” says tech  expert Chinmayi S K. “So if you’re talking about cat adoption, the ads  displayed on the side could be relevant to that.”&lt;br /&gt;&lt;br /&gt;When it comes  to other smartphone apps, she cites Zomato as an example. “It has been  asking for user history — previous orders and other such details — to  make recommendations,” she says. “This comes with the app update.  Tinder, too, is asking for your location using wifi, which is more  accurate than the GPRS location.”&lt;br /&gt;It’s alright to agree to these  permissions, she says, so long as you’re aware of what you’re signing up  for and how that data is going to be used.&lt;br /&gt;&lt;br /&gt;If you have qualms  about agreeing to this, there are usually alternatives you can find,  adds Rohini Lakshane, program officer, Centre for Internet and Society.  “If not, it’s usually a trade-off: you have to see how much you want the  app,” she points out.&lt;br /&gt;&lt;br /&gt;There are, however, other apps that might be duplicates asking for access to your device or files, cautions Chinmayi. &lt;br /&gt;&lt;br /&gt;“If a cooking app, a simple one that gives you recipes, asks for your call logs or other files, for example,” she says.&lt;br /&gt;&lt;br /&gt;A  discerning user, interjects Rohini, will check for permission to access  files or functions that are not strictly necessary for the features the  app supports. 
“I don’t want to name anything but some e-commerce and  travel apps ask to access your browsing history and the other apps or  networks you’re connect to. It could be to serve you contextual ads or  content, like Zomato, or to sell it to someone. You never know,” she  says. However, some devices or versions of the Android OS let you  control what permissions you enable, she informs.&lt;br /&gt;&lt;br /&gt;Aeronautical  engineer Pavan Raj P V says he takes care not to compromise on his  safety, whenever possible. “But there are a few apps that I have on my  phone no matter what — Facebook, WhatsApp, LinkedIn, Instagram. Most of  them auto-update and require no extra permissions.”&lt;br /&gt;&lt;br /&gt;However, he  has noticed that LinkedIn asks for access to Gmail contacts that you  could accidentally accept “if you’re logging in mechanically”.&lt;br /&gt;&lt;br /&gt;Varsha  C V, communications specialist at Karnataka State Highways Improvement  Project, says, “Last month, my husband asked me to download a Google app  for free calls that required all sorts of permissions, such as access  to your phone logs. When Skype offers the same features without asking  for all this, why should anyone use this app?”&lt;br /&gt;&lt;br /&gt;She believes  privacy in India is not taken as seriously as it should be. “You should  keep in mind that if you’re giving them access to your contacts, you’re  also compromising on others’ privacy,” she points out.&lt;br /&gt;&lt;br /&gt;Lokanand, a  sound engineer, admits to not paying attention to what he’s giving apps  access to. “I’m no expert but if you ask me, you download apps because  they are useful. So I don’t really bother about what I’m saying yes to.”&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/deccan-herald-chetana-divya-vasudev-october-4-2016-an-appening-world'&gt;https://cis-india.org/internet-governance/news/deccan-herald-chetana-divya-vasudev-october-4-2016-an-appening-world&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>WhatsApp</dc:subject>
        <dc:subject>Internet Governance</dc:subject>
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2016-10-05T00:24:19Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/privacy/privacy-aba-conference">
    <title>American Bar Association Online Privacy Conference: A Report</title>
    <link>https://cis-india.org/internet-governance/blog/privacy/privacy-aba-conference</link>
    <description>
        &lt;b&gt;On 10 November 2010, I attended an American Bar Association online conference on 'Regulating Privacy Across Borders in the Digital Age: An Emerging Global Consensus or Vive la Difference'. The panelists addressed many important global privacy challenges and spoke about the changes being considered for the EU directive.&lt;/b&gt;
        
&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;On 10 November, I attended an American Bar Association online conference on “Regulating Privacy Across Borders in the Digital Age: An Emerging Global Consensus or Vive la Difference.” The panel was made up of:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;Lisa Sotto, a private practitioner in the US&lt;/li&gt;&lt;li&gt;Billy Hawkes, Commissioner of Data Protection, Ireland&lt;/li&gt;&lt;li&gt;Bojana Bellamy, Director of Data Privacy, London, UK&lt;/li&gt;&lt;li&gt;Hugh Stevenson, Deputy Director of the Federal Trade Commission, US&lt;/li&gt;&lt;li&gt;Jennifer Stoddart, Privacy Commissioner, Canada.&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;The panelists shared their insight into many issues, including the challenges that cloud computing, behavioural advertising, and cross-border data transfer pose to privacy. The panel also spoke on the need to address concerns of enforcement, data breach, accountability, and harmonization of data protection policies. The conference was very informative, and brought up many points that should be considered as India moves forward with privacy legislation.&lt;/p&gt;
&lt;h3&gt;Technology Concerns: Cloud Computing, Behavioural Advertising, and Cross-Border Data Transfer&lt;/h3&gt;
&lt;p&gt;When speaking about the concerns of cloud computing, behavioural advertising, and cross-border data transfer, the panel was in agreement that privacy policies need to move beyond paper to practice. They questioned whether broad national law can actually address the privacy concerns associated with these issues, or whether internal, specific policies are more effective at protecting data being outsourced to the cloud, passed through the Internet, and sent across borders. With cloud computing specifically, internal policies have the potential to be more effective, because data in the cloud is essentially nowhere; it does not reside in one jurisdiction, and thus it is difficult to establish which countries’ laws apply to the data. Additionally, if there is a breach, the onus at the end of the day falls on the company that was in possession of the data. Though internal policies could also be used to address behavioural advertising, the lack of consumer awareness limits how effective a self-regulating program can be. Hugh Stevenson suggested another possibility: creating a system analogous to the “do not call registry” for websites – something like “do not track.” This would allow consumers to opt out of being tracked by cookies and similar technologies on websites, and force websites to be transparent about their collection and retention of data. Another solution discussed that could move policies beyond paper to practice was the emerging trend of “privacy by design”. “Privacy by design” is a mechanism applied by technology manufacturers and technology providers, whereby companies assess privacy risks before they offer a service, or before a product goes onto the market. This might mean a software company or service provider will need a seal before selling their products that indicates the product or service meets a certain privacy standard. If enforced effectively, the system of a seal could be especially effective, because it creates a visual indicator of privacy, allowing consumers to easily and quickly recognize which products pose greater privacy risks than others, and to easily find reliable and secure data processors. The ability of the privacy seal to be applied to all services and sectors would be particularly useful in a sectoral system like the US, where companies that collect data, but are not a part of the regulated sectors (financial, health, etc.), do not come within the purview of privacy-protecting laws.&lt;/p&gt;
&lt;h3&gt;Privacy Seals Globally?&amp;nbsp; Privacy Seals in India?&lt;/h3&gt;
&lt;p&gt;If this system of a privacy seal becomes widely used, it will be interesting to see the effect that it has on the international community and, subsequently, the Indian consumer. Even though India has neither privacy legislation nor a heightened concern over personal privacy, the Indian consumer does consume American-developed software, phones, computers, and other technologies. Perhaps as a “privacy seal” begins to be seen on foreign products used in India, it will create pressure on domestic manufacturers and service providers to meet similar standards with their products. Furthermore, perhaps foreign countries will not want to engage in trade with a company that does not use the “privacy seal”. Similar pressure is being placed on Chinese-made technologies: for example, the reputation that Chinese phones have of being dangerous and cheap has led some countries, like Australia, to place bans on the phones coming into their borders. Essentially, a privacy seal could provide sufficient economic incentives and pressures on companies globally to ensure that their products and practices adequately protect consumer privacy.&lt;/p&gt;
&lt;h3&gt;Accountability:&lt;/h3&gt;
&lt;p&gt;In addition to internal policies and seals as ways to push privacy protection beyond theory and into practice, the panel heavily emphasized the need for accountability. Accountability, according to Bojana Bellamy, is increasingly necessary because data is constantly being sent and processed in multiple countries and places across the globe. How to create a greater level of accountability amongst organizations has been a subject of much discussion. Currently the EU is looking at adding an “accountability principle” to the directive. The directive defines accountability as showing how responsibility is exercised and making this verifiable – or, in simpler terms, compliance with principles in the data protection field. The accountability principle being proposed would comprise two requirements. One would obligate data controllers to implement appropriate and effective measures to ensure that the principles and obligations of the Directive are put into effect by organizations. The second would require that data controllers demonstrate that these measures have been taken. In practice, this would translate into scalable programs such as privacy impact assessments, monitoring, sanctions, and internal and external audits. The legal architecture of the accountability mechanism would be two-tiered. One tier would consist of the basic statutory requirement binding on all data controllers; the second would include voluntary accountability systems. This would also mean that data controllers would need to strengthen their internal arrangements. Further accountability measures considered by the Directive working party include: establishment of internal procedures prior to the creation of new personal data processing operations; setting up written and binding data protection policies to be considered and applied to new data processing operations; mapping of procedures to ensure proper identification of all data processing operations and maintenance of an inventory of them; appointment of a data protection officer; and offering adequate data protection training and education to staff members.&lt;/p&gt;
&lt;h3&gt;Data Breaches:&lt;/h3&gt;
&lt;p&gt;The panel next discussed data breaches. As the example of the UK shows – where in 2007 the government lost 24 million records from the Child Benefit Database – data breaches are a continual, often very serious problem. Few people, though, realize the extent to which breaches happen (including on their own personal data) or their actual consequences, because most countries do not have well-defined data breach policies in place. A handful of European countries, like France and Germany, and some American states, like California, have included data breach requirements in their laws. Despite this, there are no broad statutes for data breach notification in the US or the EU. In 2009, the E-Privacy Directive, which applies to ISPs, telecommunication networks, and other electronic communications services, made it mandatory for certain data breaches to be reported. Whether data breach notification should be made a requirement through legislation is a question many countries are facing. Some countries, like Canada, rely on self-regulation for enforcement of data breaches. Jennifer Stoddart, the privacy commissioner from Canada, spoke about how self-regulation in Canada works. One of the mechanisms that makes self-regulation effective is the media. If a data breach occurs, bad press raises the social and monetary costs, giving companies an incentive to prevent breaches. The privacy commission of Canada works to help companies remedy breaches when they occur, but focuses mainly on working with companies to prevent a breach from taking place at all. Challenges and questions that self-regulation faces include:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;Will companies work to be less transparent and avoid notification, despite the severity of the breach, because of the repercussions?&lt;/li&gt;&lt;li&gt;How will the balance between over-reporting and under-reporting breaches be maintained?&lt;/li&gt;&lt;li&gt;Even if there is a social incentive to provide notification of a breach, is it adequate to ensure that the notification is comprehensive and that proactive steps are taken by the organization to prevent further breaches?&lt;/li&gt;&lt;li&gt;If bad media coverage is the main form of penalty for companies, is it enough of a penalty, and can it take into consideration the context of each privacy breach?&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;These questions, along with the growing number of breaches occurring, have pushed the EU and other countries to consider integrating data breach statutes into broad legislation.&lt;/p&gt;
&lt;h3&gt;E-Privacy Directive Breach Notification:&lt;/h3&gt;
&lt;p&gt;Under the E-Privacy Directive, a personal data breach is defined as a “breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted or otherwise processed in connection with provision of a publicly available electronic communications service in the Community.” Currently the system in the EU is two-tiered: the first level is a breach notification by the organization to the data protection authority. This level includes breaches that have occurred but do not necessarily harm an individual. At the second tier, if the breach impacts the subscriber or individual, then the individual must be notified of the nature of the breach, with recommendations of measures to mitigate its possible adverse effects. If the breach is so large that individual notice is impractical, notice of the breach must be posted in the media. Failure to notify, or incorrect notification, results in sanctions. In the UK, data breach notification must include:&lt;/p&gt;
&lt;p&gt;1. The type of information and the number of records compromised&lt;/p&gt;
&lt;p&gt;2. The circumstances of the loss, release, or corruption&lt;/p&gt;
&lt;p&gt;3. Actions taken to minimize or mitigate the effect on the individuals involved, including whether they have been informed&lt;/p&gt;
&lt;p&gt;4. Details of how the breach is being investigated&lt;/p&gt;
&lt;p&gt;5. Whether any other regulatory bodies have been informed and, if so, their responses&lt;/p&gt;
&lt;p&gt;6. Remedial actions taken to prevent future occurrences, and any other information that may assist the ICO in making an assessment&lt;/p&gt;
&lt;h3&gt;Accountability, breach notification: What material should India think about for a legal privacy structure?&lt;/h3&gt;
&lt;p&gt;Lawrence Friedman once explained that legal systems are living organisms – bills are constantly being amended, passed, and retracted in order to make the legal structure that governs a society reflect the ethos of that society. Thus, when conceptualizing a new piece of legislation it is important to look at what purpose that legislation is going to serve, and whether that purpose reflects the ideas, values, attitudes, and expectations of the society. India is a nation that has enacted statutes and regulations in response to cultural and economic changes, against a backdrop of widely dispersed population groups with deeply ingrained traditions of government and management. This has led to incongruities: for example, there are strong requirements for government transparency, but at the same time there is a common perception that bribery is necessary to prompt official action. There are laws to protect certain rights, but the average person who takes action will never be afforded redress. Thus, India faces privacy challenges both similar to and different from those the EU and Western countries face. One of the greatest privacy challenges in India today, despite the country having adopted technology, habits, and practices that put privacy at risk, is the common perception that India does not have any privacy issues. Because it is believed that privacy is not at risk, there is a lack of awareness and understanding of how to prevent privacy violations. Though the breach notification and accountability components discussed in the meeting are very detail-oriented mechanisms, they raise fundamental questions about legal architecture and context. When forming privacy legislation, a few broad questions that India needs to consider are:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;Does it want a broad legislation, one that could limit business and trade (unless potential trading partners demand such legislation), or sector-based legislations, which risk being too tailored and difficult to harmonize?&lt;/li&gt;&lt;li&gt;If India wants a broad privacy framework, how will this be set up?&lt;/li&gt;&lt;li&gt;What will be the tools used for civic education?&lt;/li&gt;&lt;li&gt;How will enforcement take place?&lt;/li&gt;&lt;li&gt;Is self-regulated accountability or statutory accountability better?&lt;/li&gt;&lt;li&gt;Will there be a privacy tribunal?&lt;/li&gt;&lt;li&gt;How will data be categorized?&lt;/li&gt;&lt;li&gt;Will breaches be notified?&lt;/li&gt;&lt;li&gt;Will standardized privacy policies be created?&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;As Hugh Stevenson, the Deputy Director of the FTC, described, one of the greatest benefits of breach notification is the awareness of privacy that it has brought. As individuals are notified that their information has been compromised, they become more aware of how technologies work, how their information is processed, what risks are involved, and what protective measures they should take. Looking at the prospect of enhanced awareness from making data breach notification mandatory, it seems that it can only be a positive step for India to take towards raising awareness and understanding of privacy. The notification of a breach could be required to specifically include a description of why the breach took place, and the steps that individuals could take to further protect their data. A concern that has been voiced is whether comprehensive legislation could actually be implemented, and whether India should enact such comprehensive and detailed legislation when there is no existing privacy legislation to build off of, and no deep culture of privacy. To these concerns I can only speculate that there is always a balance between being overly ambitious in legislation and too conservative. It seems that enforcement will in fact always be a challenge in India, and that part of policy-making needs to address this challenge, rather than avoid it.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/privacy/privacy-aba-conference'&gt;https://cis-india.org/internet-governance/blog/privacy/privacy-aba-conference&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>elonnai</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2012-03-21T10:08:36Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/all-india-privacy-symposium-webcast">
    <title>All India Privacy Symposium Webcast</title>
    <link>https://cis-india.org/all-india-privacy-symposium-webcast</link>
    <description>
        &lt;b&gt;Welcome to the Webcast of the All India Privacy Symposium at the India International Centre in New Delhi on 4 February 2012. &lt;/b&gt;
        &lt;img src="https://cis-india.org/home-images/top1.jpg/image_preview" title="All India Privacy Symposium" height="87" width="562" alt="All India Privacy Symposium" class="image-inline image-inline" /&gt;
&lt;table class="plain"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;h3&gt;Welcome &amp;amp; Introduction to Privacy India&lt;/h3&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Elonnai Hickok, (Policy Advocate, Privacy India)&amp;nbsp;&lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/elonnai.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline image-inline" title="video1" /&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;h3&gt;Panel I: Privacy and Transparency&lt;/h3&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Moderator:&lt;/strong&gt;&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;Sunil Abraham, (Executive Director, Centre for Internet &amp;amp; Society)&lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel1_mod.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;strong&gt;Poster:&lt;/strong&gt; Srishti Goyal, (Law Student)&lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel1_poster.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;&lt;br /&gt;&lt;strong&gt;Panelists:&lt;/strong&gt; &lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel1_panalist.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;
&lt;ul&gt;&lt;li&gt;Ponnurangam K, (Assistant Prof, IIIT New Delhi)&lt;/li&gt;&lt;/ul&gt;
&lt;ul&gt;&lt;li&gt;Chitra Ahanthem, (Journalist, Imphal)&lt;/li&gt;&lt;/ul&gt;
&lt;ul&gt;&lt;li&gt;Nikhil Dey, (Social &amp;amp; Political Activist)&lt;/li&gt;&lt;/ul&gt;
&lt;ul&gt;&lt;li&gt;Deepak Maheshwari, (Director Corporate Affairs, Microsoft)&lt;/li&gt;&lt;/ul&gt;
&lt;ul&gt;&lt;li&gt;Gus Hosein, (Executive Director, Privacy International, UK)&lt;/li&gt;&lt;/ul&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;h3&gt;Panel II: Privacy and E-Governance Initiatives&lt;/h3&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Moderator: &lt;/strong&gt;&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;Sudhir Krishnaswamy (Professor, Azim Premji University) &lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel2_mod.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;strong&gt;Poster:&lt;/strong&gt; Adrija Das, &lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel2_poster.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;&lt;br /&gt;
&lt;strong&gt;Panelists:&lt;br /&gt;&lt;/strong&gt;
&lt;ul&gt;&lt;li&gt;Anant Maringanti, (Independent Social Researcher)&lt;/li&gt;&lt;/ul&gt;
&lt;ul&gt;&lt;li&gt;Usha Ramanathan, (Advocate &amp;amp; Social Activist)&lt;/li&gt;&lt;/ul&gt;
&lt;ul&gt;&lt;li&gt;Gus Hosein, (Executive Director, Privacy International, UK)&lt;/li&gt;&lt;/ul&gt;
&lt;ul&gt;&lt;li&gt;Apar Gupta, (Advocate, Supreme Court of India)&lt;/li&gt;&lt;/ul&gt;
&lt;ul&gt;&lt;li&gt;Elida Kristine Undrum Jacobsen (Researcher at the International Peace Research Institute, Oslo)&lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;h3&gt;Panel III: Privacy and National Security&lt;/h3&gt;
&lt;strong&gt;Moderator: &lt;br /&gt;&lt;/strong&gt;
&lt;ul&gt;&lt;li&gt;Sunil Abraham, (Executive Director, Centre for Internet &amp;amp; Society) &lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel3_mod.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;strong&gt;Poster:&lt;/strong&gt;&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;Suchitra Menon, (Law Student)&lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel3_poster.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;strong&gt;Panelists: &lt;/strong&gt;&lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel3_panalist.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;
&lt;ul&gt;&lt;li&gt;Menaka Guruswamy, (Advocate, Supreme Court, New Delhi)&lt;/li&gt;&lt;li&gt;Prasanth Sugathan, (Legal Counsel, Software Freedom Law Center)&lt;/li&gt;&lt;li&gt;Oxblood Ruffin, (Cult of the Dead Cow Security and Publishing Collective) &lt;/li&gt;&lt;/ul&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;h3&gt;Panel IV: Privacy and Banking&lt;/h3&gt;
&lt;strong&gt;Moderator: &lt;/strong&gt;&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;Prashant Iyengar (Associate Professor, Jindal Law University) &lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel4_mod.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;strong&gt;Poster: &lt;/strong&gt;&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;Malavika Chandu &lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel4_poster.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;strong&gt;Panelists:&lt;/strong&gt;&lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel4_panalist.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;
&lt;ul&gt;&lt;li&gt;M R Umarji, (Chief Legal Advisor, IBA)&lt;/li&gt;&lt;li&gt;N A Vijayashankar, (Cyber Law Expert)&lt;/li&gt;&lt;li&gt;Malavika Jayaram, (Advocate, Bangalore)&lt;/li&gt;&lt;/ul&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;h3&gt;Panel V: Privacy and Health&lt;/h3&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Moderator:&lt;/strong&gt;&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;Ashok Row Kavi, (Journalist &amp;amp; LGBT Activist) &lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel5_mod.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;strong&gt;Poster:&lt;/strong&gt;&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;Danish Sheikh, (Alternative Law Forum) &lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel5_poster.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;strong&gt;Panelists:&lt;/strong&gt; &lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/panel5_panalist.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;
&lt;ul&gt;&lt;li&gt;K K Abraham, (President, Indian Network for People with HIV)&lt;/li&gt;&lt;li&gt;Dr. B S Bedi, (Advisor, CDAC &amp;amp; Media Lab Asia)&lt;/li&gt;&lt;li&gt;Raman Chawla, (Senior Advocacy Officer, Lawyers Collective) &lt;/li&gt;&lt;/ul&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;h3&gt;The Way Forward&lt;/h3&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Natasha Vaz, (Policy Advocate, Privacy India) &lt;a class="external-link" href="http://www.24framesdigital.com/cis/webcast/040212/natasha.html"&gt;&lt;img src="https://cis-india.org/home-images/vdolead.gif/image_preview" alt="video1" class="image-inline" title="video1" /&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/all-india-privacy-symposium-webcast'&gt;https://cis-india.org/all-india-privacy-symposium-webcast&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2012-02-08T08:20:08Z</dc:date>
   <dc:type>Page</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/all-india-privacy-symposium">
    <title>All India Privacy Symposium</title>
    <link>https://cis-india.org/internet-governance/all-india-privacy-symposium</link>
    <description>
&lt;b&gt;Are we citizens or subjects? Experts gather in Delhi for public symposium on privacy, transparency, e-governance and national security in India.&lt;/b&gt;
        
&lt;p&gt;Following 18 months of research by Privacy India, the Centre for Internet and Society and the Society in Action Group, with support from London-based Privacy International, the groups today held an All India Privacy Symposium at the India International Centre in New Delhi. Speakers included Supreme Court Advocate Menaka Guruswamy, Microsoft Director of Corporate Affairs Deepak Maheshwari, social researcher and activist Usha Ramanathan, journalist Saikat Datta and former Chief of RAW Hormis Tharakan.&lt;/p&gt;
&lt;p&gt;A few themes recurred across all five panels (Privacy and Transparency, Privacy and E-Governance Initiatives, Privacy and National Security, Privacy and Banking, and Privacy and Health). Perhaps the most prominent was the repeated allegation that the Indian government's technological illiteracy is putting its citizens at risk. One panelist described how an RTI request had recently revealed that the government had no idea how many of its own computers had been hacked or how much data had been stolen – even though this information has been in the public domain since the Wikileaks diplomatic cable releases.&lt;/p&gt;
&lt;p&gt;The increased use of public-private partnerships and outsourcing was also a major cause for concern. Public money is being funneled into privately-held commercial enterprises – which, unlike public bodies, are not subject to RTI requests – and spent on e-governance initiatives like UID. Social researcher Anant Maringanti spoke of a "hybrid world" in which government projects were fulfilled by completely unaccountable private actors. Advocate Malavika Jayaram remarked that, while private companies tend to have far greater technological expertise than government officials, they are ultimately motivated by profit rather than public benefit; we should therefore ask ourselves whether they can really be trusted with our information.&amp;nbsp;&lt;/p&gt;
&lt;table class="plain" align="center"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://cis-india.org/home-images/picture3.jpg/image_preview" alt="Privacy Symposium" class="image-inline image-inline" title="Privacy Symposium" /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Government surveillance for the purposes of crime prevention also came under scrutiny, when Saikat Datta described how he himself had been put under illegal surveillance by an unauthorized intelligence agency. He warned of the dangers of excessive wiretapping, a practice that currently generates such a “mountain” of information that anything with real intelligence value tends to be ignored until it is too late, as happened with the Mumbai bombings in 2008. It is clear that the Indian government’s surveillance and interception programmes far exceed what is necessary for legitimate law enforcement.&lt;/p&gt;
&lt;p&gt;Overall, panelists at the conference painted a vivid picture of India as a state that has made a habit of invading the privacy of individuals on a massive scale in the name of public benefit and law enforcement. Yet there is a clear sense that the benefits to society are not outweighing the costs to the individual. As Usha Ramanathan commented: “The question is, do we think of ourselves as citizens – or as subjects?”&lt;/p&gt;
&lt;p&gt;&lt;a href="https://cis-india.org/all-india-privacy-symposium-webcast" class="external-link"&gt;See the webcast of the event here&lt;/a&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/all-india-privacy-symposium'&gt;https://cis-india.org/internet-governance/all-india-privacy-symposium&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Natasha Vaz</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2012-03-01T06:16:53Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/privacy-symposium">
    <title>All India Privacy Symposium</title>
    <link>https://cis-india.org/internet-governance/privacy-symposium</link>
    <description>
        &lt;b&gt;Privacy India in partnership with the International Development Research Centre, Canada, Society in Action Group, Gurgaon, Privacy International, UK and Commonwealth Human Rights Initiative is organizing the All India Privacy Symposium at the India International Centre, New Delhi on Saturday, February 4, 2012.&lt;/b&gt;
        
&lt;p&gt;Since June 2010, Privacy India has been engaging in discussions with policy makers, the public and sectoral experts about privacy in India. The discussions have ranged from identity and privacy, to minority rights and privacy, to consumer privacy. The findings of our research show that privacy was a neglected area of study for India in the past; however, this is changing. Advancements in technology, the introduction of e-governance initiatives like the National Fibre Optic Network, the introduction of new legislation, and debates surrounding national security have brought privacy debates to the forefront in India. Although sectoral legislation currently deals with privacy issues, e.g., the Telegraph Act or RBI guidelines for banking, India has just begun to consider horizontal legislation that deals comprehensively with privacy across all contexts. This conference is an opportunity to look forward to what could be the future scope of privacy in India.&lt;/p&gt;
&lt;p&gt;Privacy India was set up in collaboration with the Centre for Internet and Society, Bangalore and Society in Action Group, Gurgaon, under the auspices of an international organization ‘Privacy International’. Privacy International is a non-profit group that provides assistance to civil society groups, governments, international and regional bodies, the media and the public in a number of countries. For more info, visit its &lt;a class="external-link" href="https://www.privacyinternational.org/"&gt;website&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This is a public meeting. For participation in the event, get in touch with Elonnai (&lt;a class="external-link" href="mailto:elonnai@cis-india.org"&gt;elonnai@cis-india.org&lt;/a&gt;)&lt;/p&gt;
&lt;h2&gt;Symposium Advisors&lt;/h2&gt;
&lt;p&gt;Sunil Abraham, Centre for Internet &amp;amp; Society (&lt;a href="https://cis-india.org/" class="external-link"&gt;www.cis-india.org&lt;/a&gt;)&lt;br /&gt;Rajan Gandhi, Society in Action Group&lt;br /&gt;Phet Sayo, IDRC (&lt;a class="external-link" href="http://www.idrc.org/"&gt;www.idrc.org&lt;/a&gt;)&lt;br /&gt;Gus Hosein, Privacy International (&lt;a class="external-link" href="http://www.privacyinternational.org/"&gt;www.privacyinternational.org&lt;/a&gt;)&lt;br /&gt;
Sudhir Krishnaswamy, Centre for Law and Policy Research, Bangalore (&lt;a class="external-link" href="http://www.clpr.org.in/"&gt;www.clpr.org.in&lt;/a&gt;)&lt;br /&gt;
Vickram Crishna, Privacy International (&lt;a class="external-link" href="http://www.privacyinternational.org/"&gt;www.privacyinternational.org&lt;/a&gt;)&lt;/p&gt;
&lt;h2&gt;Agenda&lt;/h2&gt;
&lt;table class="plain"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;09:30- &lt;br /&gt;10:00&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Registration&lt;/strong&gt;&lt;br /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:00- &lt;br /&gt;10:15&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Welcome &amp;amp; Introduction to Privacy India&lt;/strong&gt;&lt;br /&gt;Elonnai Hickok (Policy Advocate, Privacy India)&lt;br /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:15- &lt;br /&gt;10:30&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Tea Break&lt;/strong&gt;&lt;br /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:30-&lt;br /&gt;11:30 &lt;br /&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Panel I: Privacy and Transparency&lt;/strong&gt;&lt;br /&gt;Moderator: Sunil Abraham (Executive Director, Centre for Internet &amp;amp; Society)&lt;br /&gt;Panelists: Prashant Bhushan (Senior Advocate, New Delhi), Simon Davies (Director General, Privacy International, UK), Ponnurangam K (Assistant Prof, IIIT New Delhi), Chitra Ahanthem (Journalist, Imphal), Aruna Roy (Social &amp;amp; Political Activist), Deepak Maheshwari (Director Corporate Affairs, Microsoft)&lt;br /&gt;Poster: Srishti Goyal (Law Student)&lt;br /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:30- &lt;br /&gt;12:30&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Panel II: Privacy and E-Governance Initiatives&lt;/strong&gt;&lt;br /&gt;Moderator: Sudhir Krishnaswamy (Professor, Azim Premji University)&lt;br /&gt;Panelists: Anant Maringanti (Independent Social Researcher), Usha Ramanathan (Advocate &amp;amp; Social Activist), Ram Sewak Sharma (Director General, UIDAI)*, Gus Hosein (Executive Director, Privacy International, UK), R K Singh (Union Home Secretary, New Delhi)*, Apar Gupta (Advocate, Supreme Court of India)&lt;br /&gt;Poster: Adrija Das (Law Student)&lt;br /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:30- &lt;br /&gt;13:30&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Lunch&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13:30- &lt;br /&gt;14:30&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Panel III: Privacy and National Security&lt;/strong&gt;&lt;br /&gt;Moderator: Justice A P Shah (Former Chief Justice, Delhi High Court)*&lt;br /&gt;Panelists: Menaka Guruswamy (Advocate, Supreme Court, New Delhi), Amol Sharma (Journalist, Wall Street Journal)*, Saikat Datta (Journalist, DNA), Eric King (Human Rights and Technology Advisor, Privacy International, UK), Prasanth Sugathan (Legal Counsel, Software Freedom Law Center) and Oxblood Ruffin (Cult of the Dead Cow Security and Publishing Collective)&lt;br /&gt;Poster: Suchithra Menon (Law Student)&lt;br /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14:30- &lt;br /&gt;15:30&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Panel IV: Privacy and Banking&lt;/strong&gt;&lt;br /&gt;Moderator: Prashant Iyengar (Associate Professor, Jindal Law University)&lt;br /&gt;Panelists: M R Umarji (Chief Legal Advisor, IBA), N A Vijayashankar (Cyber Law Expert), Sucheta Dalal (Managing Editor, MoneyLife Magazine)*, Malavika Jayaram (Advocate, Bangalore)&lt;br /&gt;Poster: Malavika Chandu (Law Student)&lt;br /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15:30- &lt;br /&gt;15:45&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Tea Break&lt;/strong&gt;&lt;br /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15:45- &lt;br /&gt;16:45&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Panel V: Privacy and Health&lt;/strong&gt;&lt;br /&gt;Moderator: Ashok Row Kavi (Journalist &amp;amp; LGBT Activist)&lt;br /&gt;Panelists: K K Abraham (President, Indian Network for People with HIV), Shri Sayan Chatterjee (Secretary, National Aids Control Organization)*, Dr V M Katoch (Secretary, Department of Health Research)*, Dr B S Bedi (Advisor, CDAC &amp;amp; Media Lab Asia)&lt;br /&gt;Poster: Danish Sheikh (Alternative Law Forum)&lt;br /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16:45- &lt;br /&gt;17:00&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;The Way Forward&lt;/strong&gt;&lt;br /&gt;Elonnai Hickok (Policy Advocate, Privacy India)&lt;br /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Bios of Speakers&lt;/h2&gt;
&lt;h3&gt;Usha Ramanathan&lt;/h3&gt;
&lt;p&gt;Usha Ramanathan is an internationally recognized expert on the jurisprudence of law, poverty and rights. She writes and speaks on leading issues like the Bhopal gas leak tragedy, mass displacement, civil liberties, criminal law, the environment and the judicial process. She is involved in the UID project and has written and debated extensively on it. She is a member of Amnesty International's Advisory Panel on Economic, Social and Cultural Rights and has been called upon by the World Health Organisation as an expert on mental health on various occasions. Her writings can be found at &lt;a class="external-link" href="http://www.ielrc.org/"&gt;http://www.ielrc.org/&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;N A Vijayashankar&lt;/h3&gt;
&lt;p&gt;N A Vijayashankar, more popularly known as Naavi, is a techno-legal information security consultant based in Bangalore, India. Naavi is a pioneer in the field of Cyber Law in India. He is the author of the first book (1999) and the first e-book (2003) on Cyber Laws in India. He has also authored the books “Cyber Laws, Corporate Mantra for the Digital Era”, “Cyber Laws Demystified” and “Cyber Laws for Engineers”, as well as a book on cyber crimes in Kannada.&lt;br /&gt;&lt;br /&gt;Naavi is the founder of &lt;a href="http://www.cyberlawcollege.com" class="external-link"&gt;www.cyberlawcollege.com&lt;/a&gt;, the pioneering virtual educational institution in India dedicated to Cyber Law education. Cyber Law College presently conducts offline and virtual courses on Cyber Laws. It has conducted several courses in association with law colleges in Karnataka such as KLE Law College, Bangalore; JSS Law College, Mysore; SDM Law College, Mangalore; and KLE Law College, Hubli.&lt;br /&gt;&lt;br /&gt;Naavi is also the founder of &lt;a href="http://www.naavi.org" class="external-link"&gt;www.naavi.org&lt;/a&gt;, the premier Cyber Law portal in India. Naavi has been engaged in the training of police in Tamil Nadu and Karnataka and conducts several courses in Cyber Laws for different audiences. He has been guest faculty at a number of institutions including NPA, IDRBT, DTRI, ISACA, NADT, LBS National Academy, Judicial Academies, NALSAR, etc., as well as several law, engineering and management institutions.&lt;br /&gt;&lt;br /&gt;Naavi has over three decades of senior corporate executive experience. A former banker, he has been a consultant to several companies in IT services.
He has conducted hundreds of training sessions for professionals of various disciplines such as bankers, lawyers, chartered accountants, engineers, software professionals, police and judicial officers through workshops and in-house training programmes in cyber laws, cyber crimes, information security and related areas.&lt;/p&gt;
&lt;h3&gt;Chitra Ahanthem&lt;/h3&gt;
&lt;p&gt;Chitra Ahanthem is a features writer with Imphal Free Press, published in Imphal, Manipur. She is also a freelance writer and researcher on issues around HIV/AIDS, child rights, conflict and gender.&lt;/p&gt;
&lt;h3&gt;Baljit Singh Bedi&lt;/h3&gt;
&lt;p&gt;Baljit Singh Bedi did his B.Tech and M.Tech from the Indian Institute of Technology (IIT), Delhi. After serving for five years in the Centre for Applied Research in Electronics (CARE), IIT Delhi, he joined the Department of Information Technology (DIT), Ministry of Communication &amp;amp; IT (MCIT), Government of India. His major responsibilities and contributions over the years cover the conceptualization, evolution and implementation of a number of major schemes, programmes and projects in the field of electronics and IT applications, with a primary role in healthcare. He was instrumental in starting an integrated programme promoting Electronics, IT and Electronic Medical Records (EMR) standards in healthcare in India. As head of the Medical Electronics &amp;amp; Telemedicine division, he looked after the promotion of e-health &amp;amp; tele-health technology and R&amp;amp;D in medical electronics, and launched a number of schemes in India. He was part of the National Task Force on Telemedicine in India set up by the Ministry of Health &amp;amp; Family Welfare (MoH&amp;amp;FW), Government of India, and headed its Group on Standards. He was a member of the National Knowledge Commission’s Working Group on India-Health Information Network Development (I-HIND) and is part of the Advisory Group for the follow-up implementation programme under the consideration of MoH&amp;amp;FW. He is actively involved in the policy, development and deployment programmes of IT-in-health initiatives of DIT, MoH&amp;amp;FW and Media Lab Asia. He is a member of the National Committee set up by MoH&amp;amp;FW for EMR standardization and heads its Task Group on Interoperability. He is also an International Telecommunication Union (ITU) expert for e-health standardization. He is an executive member of the Indian Association of Medical Informatics (IAMI) and President of the Telemedicine Society of India (TSI).
At present, he is an adviser to the Centre for Development of Advanced Computing (CDAC), a scientific society of MCIT, Government of India.&lt;/p&gt;
&lt;h3&gt;Deepak Maheshwari&lt;/h3&gt;
&lt;p&gt;Deepak Maheshwari is Director – Corporate Affairs with Microsoft in India, responsible for interactions with policymakers &amp;amp; regulators as well as with industry associations &amp;amp; civil society organizations. An active participant in and a keen observer of the interplay between technological innovation and socio-economic development, he has been closely associated with the &lt;strong&gt;development &amp;amp; evolution of Information &amp;amp; Communication Technology policy&lt;/strong&gt;, &lt;strong&gt;law &amp;amp; regulation&lt;/strong&gt; for more than a decade and is often invited as a speaker and a contributor of articles &amp;amp; opinions in the media.&lt;br /&gt;&amp;nbsp;&lt;br /&gt;He has been active in several trade associations and has served as a committee chair &amp;amp; co-chair. He served two consecutive terms as the elected secretary of the &lt;strong&gt;ISP Association of India&lt;/strong&gt; and co-founded the &lt;strong&gt;National Internet eXchange of India (NIXI)&lt;/strong&gt; as well as the &lt;strong&gt;ITU-APT Foundation of India&lt;/strong&gt;. He is also a member of the academic board of the &lt;strong&gt;IIM Ahmedabad-IDEA Telecom Centre of Excellence&lt;/strong&gt;.&lt;br /&gt;&amp;nbsp;&lt;br /&gt;At times mistaken for a lawyer, he was actually awarded a degree in engineering by one of India’s leading technical institutes,&lt;strong&gt; IT-BHU&lt;/strong&gt;. His professional experience of more than two decades spans functional responsibilities across sales, marketing, operations and, last but not least, corporate affairs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;*Participants to be confirmed&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;&lt;a href="https://cis-india.org/internet-governance/all-india-privacy-symposium.pdf" class="internal-link" title="All India Privacy Symposium"&gt;Download the poster here&lt;/a&gt;&lt;br /&gt;&lt;/li&gt;&lt;li&gt;&lt;a href="https://cis-india.org/internet-governance/privacy-symposium.pdf" class="internal-link" title="Symposium"&gt;Download the agenda here&lt;/a&gt; (PDF, 755 KB)&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;VIDEOS&lt;/strong&gt;&lt;/p&gt;
&lt;iframe src="http://blip.tv/play/AYLs7gcA.html?p=1" frameborder="0" height="250" width="250"&gt;&lt;/iframe&gt;&lt;embed style="display:none" src="http://a.blip.tv/api.swf#AYLs7gcA" type="application/x-shockwave-flash"&gt;&lt;/embed&gt;

&lt;iframe src="http://blip.tv/play/AYLtgXAA.html?p=1" frameborder="0" height="250" width="250"&gt;&lt;/iframe&gt;&lt;embed style="display:none" src="http://a.blip.tv/api.swf#AYLtgXAA" type="application/x-shockwave-flash"&gt;&lt;/embed&gt;

&lt;iframe src="http://blip.tv/play/AYLtgz4A.html?p=1" frameborder="0" height="250" width="250"&gt;&lt;/iframe&gt;&lt;embed style="display:none" src="http://a.blip.tv/api.swf#AYLtgz4A" type="application/x-shockwave-flash"&gt;&lt;/embed&gt;

&lt;iframe src="http://blip.tv/play/AYLtrUIA.html?p=1" frameborder="0" height="250" width="250"&gt;&lt;/iframe&gt;&lt;embed style="display:none" src="http://a.blip.tv/api.swf#AYLtrUIA" type="application/x-shockwave-flash"&gt;&lt;/embed&gt;

&lt;iframe src="http://blip.tv/play/AYLtrl4A.html?p=1" frameborder="0" height="250" width="250"&gt;&lt;/iframe&gt;&lt;embed style="display:none" src="http://a.blip.tv/api.swf#AYLtrl4A" type="application/x-shockwave-flash"&gt;&lt;/embed&gt;


        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/privacy-symposium'&gt;https://cis-india.org/internet-governance/privacy-symposium&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Conference</dc:subject>
    
    
        <dc:subject>Event Type</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2012-02-27T11:08:32Z</dc:date>
   <dc:type>Event</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/economic-times-may-29-mugdha-variyar-alexas-recording-leak-in-us-echoes-privacy-issues-here">
    <title>Alexa’s recording leak in US ‘echoes’ privacy issues here </title>
    <link>https://cis-india.org/internet-governance/news/economic-times-may-29-mugdha-variyar-alexas-recording-leak-in-us-echoes-privacy-issues-here</link>
    <description>
        &lt;b&gt;Market analyst Sanjay Mehta (name changed) has been keeping his Amazon Echo smart speaker mostly unplugged since reports surfaced last week of the device’s voice assistant, Alexa, inadvertently recording and sending out conversations of a family in the US. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Mugdha Variyar was published in the &lt;a class="external-link" href="https://economictimes.indiatimes.com/small-biz/startups/newsbuzz/alexas-recording-leak-in-us-echoes-privacy-issues-here/articleshow/64363491.cms"&gt;Economic Times&lt;/a&gt; on May 29, 2018. Sunil Abraham was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Digital rights activist Nikhil Pahwa keeps his Google Home smart speaker occasionally plugged out, citing the propensity of the device’s voice assistant to assume it is being queried even when it is not. In the Portland case involving Echo, Alexa had misinterpreted a family’s conversation to be a request to record and send the conversation to a person in the family’s contacts list.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In India, as internet consumers become comfortable using AI-powered voice assistants to play music, set tasks and seek information, they are also waking up to the fragility of data privacy, especially after the infamous Facebook-Cambridge Analytica episode. Indian laws, though, are yet to catch up with technology such as these, say privacy experts.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Globally too, governments are grappling with framing policy around data and privacy. That said, the European Union’s tough privacy laws on how companies can handle user data, introduced last week, are forcing companies to seek consent from customers globally to use their data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to Singapore-based market research firm Canalys, 108,000 units of Amazon Echo devices were shipped to sales channels in India in the first quarter of this year. As for Google Home, which was launched here in April, 25,000 devices have been shipped so far.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“It is always the company’s fault when such incidents (Alexa’s recording leak) happen. But if it does happen in India, it will also be the government’s fault since there is a big vacuum when it comes to protecting privacy in the digital age,” said Sunil Abraham, executive director of Centre for Internet and Society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Abraham said a recording device in homes could open up the possibility of hacking or wiretapping. He, however, added that the Amazon incident would not necessarily create any panic. Amazon did not respond to specific queries about what steps it was taking to ensure such incidents do not occur again.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Google said it provides a Home user control through its activity control feature, ability to delete voice-recording history and control permissions to personal data on Gmail, as well as the option to mute the device.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Abraham cited the principles of data minimisation, that is, bare minimum collection of data, and minimal data retention policies with the user, as the main policy requirements, especially to prevent incidents such as the Alexa leak. “We are hopeful that the Srikrishna Committee will include this in the data privacy law,” he added.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While there needs to be a strong law, there also needs to be a strong citizen advocacy, where users take a company to court for privacy breach. Alexa users should also be sending queries to Amazon about what steps they are taking for privacy protection.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/economic-times-may-29-mugdha-variyar-alexas-recording-leak-in-us-echoes-privacy-issues-here'&gt;https://cis-india.org/internet-governance/news/economic-times-may-29-mugdha-variyar-alexas-recording-leak-in-us-echoes-privacy-issues-here&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-05-30T00:49:26Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/ai-in-india-a-policy-agenda">
    <title>AI in India: A Policy Agenda</title>
    <link>https://cis-india.org/internet-governance/blog/ai-in-india-a-policy-agenda</link>
    <description>
        &lt;b&gt;&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/ai-in-india-a-policy-agenda"&gt;Click&lt;/a&gt; to download the file&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;h1 style="text-align: justify; "&gt;Background&lt;/h1&gt;
&lt;p style="text-align: justify; "&gt;Over the last few months, the Centre for Internet and Society has been engaged in the mapping of use and impact of artificial intelligence in health, banking, manufacturing, and governance sectors in India through the development of a case study compendium.&lt;a href="#_ftn1" name="_ftnref1"&gt;&lt;sup&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Alongside this research, we are examining the impact of Industry 4.0 on jobs and employment and questions related to the future of work in India. We have also been a part of several global conversations on artificial intelligence and autonomous systems. The Centre for Internet and Society is part of the Partnership on Artificial Intelligence, a consortium which has representation from some of most important companies and civil society organisations involved in developments and research on artificial intelligence. We have contributed to the The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and are also a part of a Big Data for Development Global Network, where we are undertaking research towards evolving ethical principles for use of computational techniques. The following are a set of recommendations we have arrived out of our research into artificial intelligence, particularly the sectoral case studies focussed on the development and use of artificial intelligence in India.&lt;/p&gt;
&lt;h1 style="text-align: justify; "&gt;National AI Strategies: A Brief Global Overview&lt;/h1&gt;
&lt;p style="text-align: justify; "&gt;Artificial Intelligence is emerging as  a central policy issue  in several countries. In October 2016, the Obama White House released a report titled, “Preparing for the Future of Artificial Intelligence”&lt;a href="#_ftn2" name="_ftnref2"&gt;&lt;sup&gt;&lt;sup&gt;[2]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; delving into a range of issues including application for public goods, regulation, economic impact, global security and fairness issues. The White House also released a companion document called the “National Artificial Intelligence Research and Development Strategic Plan”&lt;a href="#_ftn3" name="_ftnref3"&gt;&lt;sup&gt;&lt;sup&gt;[3]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; which laid out a strategic plan for Federally-funded research and development in AI. These were the first of a series of policy documents released by the US towards the role of AI. The United Kingdom announced its 2020 national development strategy and issued a government report to accelerate the application of AI by government agencies while in 2018 the Department for Business, Energy, and Industrial Strategy released the Policy Paper - AI Sector Deal.&lt;a href="#_ftn4" name="_ftnref4"&gt;&lt;sup&gt;&lt;sup&gt;[4]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; The Japanese government released it paper on Artificial Intelligence Technology Strategy in 2017.&lt;a href="#_ftn5" name="_ftnref5"&gt;&lt;sup&gt;&lt;sup&gt;[5]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; The European Union launched "SPARC," the world’s largest civilian robotics R&amp;amp;D program, back in 2014.&lt;a href="#_ftn6" name="_ftnref6"&gt;&lt;sup&gt;&lt;sup&gt;[6]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Over the last year and a half, Canada,&lt;a href="#_ftn7" name="_ftnref7"&gt;&lt;sup&gt;&lt;sup&gt;[7]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; China,&lt;a href="#_ftn8" name="_ftnref8"&gt;&lt;sup&gt;&lt;sup&gt;[8]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; the UAE,&lt;a href="#_ftn9" name="_ftnref9"&gt;&lt;sup&gt;&lt;sup&gt;[9]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Singapore,&lt;a href="#_ftn10" name="_ftnref10"&gt;&lt;sup&gt;&lt;sup&gt;[10]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; South Korea&lt;a href="#_ftn11" name="_ftnref11"&gt;&lt;sup&gt;&lt;sup&gt;[11]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;, and France&lt;a href="#_ftn12" name="_ftnref12"&gt;&lt;sup&gt;&lt;sup&gt;[12]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; have announced national AI strategy documents while 24 member States in the EU have committed to develop national AI policies that reflect a “European” approach to AI &lt;a href="#_ftn13" name="_ftnref13"&gt;&lt;sup&gt;&lt;sup&gt;[13]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;. Other countries such as Mexico and Malaysia are in the process of evolving their national AI strategies. What this suggests is that AI is quickly emerging as central to national plans around the development of science and technology as well as economic and national security and development. There is also a focus on investments enabling AI innovation in critical national domains as a means of addressing key challenges facing nations. 
India has followed this trend and in 2018 the government published two AI roadmaps - the Report of Task Force on Artificial Intelligence by the AI Task Force constituted by the Ministry of Commerce and Industry&lt;a href="#_ftn14" name="_ftnref14"&gt;&lt;sup&gt;&lt;sup&gt;[14]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and the National Strategy for Artificial Intelligence by Niti Aayog.&lt;a href="#_ftn15" name="_ftnref15"&gt;&lt;sup&gt;&lt;sup&gt;[15]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Some of the key themes running across the National AI strategies globally are spelt out below.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Economic Impact of AI&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;A common thread that runs across the different national approaches to AI is the belief in the significant economic impact of AI, that it will likely increase productivity and create wealth. The British government estimated that AI could add $814 billion to the UK economy by 2035. The UAE report states that by 2031, AI will help boost the country’s GDP by 35 per cent, reduce government costs by 50 per cent. Similarly, China estimates that the core AI market will be worth 150 billion RMB ($25bn) by 2020, 400 billion RMB ($65bn) and one trillion RMB ($160bn) by 2030. The impact of adoption of AI and automation of labour and employment is also a key theme touched upon across the strategies. For instance, the White House Report of October 2016 states the US workforce is unprepared – and that a serious education programme, through online courses and in-house schemes, will be required.&lt;a href="#_ftn16" name="_ftnref16"&gt;&lt;sup&gt;&lt;sup&gt;[16]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;State Funding&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Another key trend exhibited in all national strategies towards AI has been a commitment by the respective governments towards supporting research and development in AI. The French government has stated that it intends to invest €1.5 billion ($1.85 billion) in AI research in the period through to 2022. The British government’s recommendations, in late 2017, were followed swiftly by a promise in the autumn budget of new funds, including at least £75 million for AI. Similarly, the the Canadian government put together a $125-million ‘pan-Canadian AI strategy’ last year.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;AI for Public Good&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;The use of AI for Public Good is a significant focus of most AI policies. The biggest justification for AI innovation as a legitimate objective of public policy is its promised impact towards improvement of  people’s lives by helping to solve some of the world’s greatest challenges and inefficiencies, and emerge as a transformative technology, much like mobile computing. These public good uses of AI are emerging across sectors such as transportation, migration, law enforcement and justice system, education, and agriculture..&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;National Institutions leading AI research&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Another important trend which was  key to the implementation of national AI strategies is the creation or development of well-funded centres of excellence which would serve as drivers of research and development and leverage synergies with the private sector. The French Institute for Research in Computer Science and Automation (INRIA) plans to create a national AI research program with five industrial partners. In UK, The Alan Turing Institute is likely to emerge as the national institute for data science, and an AI Council would be set up to manage inter-sector initiatives and training. In Canada, Canadian Institute for Advanced Research (CIFAR) has been tasked with implementing their AI strategy. Countries like Japan has a less centralised structure with the creation of strategic council for AI technology’ to promote research and development in the field, and manage a number of key academic institutions, including NEDO and its national ICT (NICT) and science and tech (JST) agencies. These institutions are key to successful implementation of national agendas and policies around AI.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;AI, Ethics and Regulation&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Across the AI strategies — ethical dimensions and regulation of AI were highlighted as concerns that needed to be addressed. Algorithmic transparency and explainability, clarity on liability, accountability and oversight, bias and discrimination, and privacy are ethical  and regulatory questions that have been raised. Employment and the future of work is another area of focus that has been identified by countries.  For example, the US 2016 Report reflected on if existing regulation is adequate to address risk or if adaption is needed by examining the use of AI in automated vehicles. In the policy paper - AI Sector Deal - the UK proposes four grand challenges: AI and Data Economy, Future Mobility, Clean Growth, and Ageing Society. The Pan Canadian Artificial Intelligence Strategy focuses on developing global thought leadership on the economic, ethical, policy, and legal implications of advances in artificial intelligence.&lt;a href="#_ftn17" name="_ftnref17"&gt;&lt;sup&gt;&lt;sup&gt;[17]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The above are important factors and trends to take into account and to different extents have been reflected in the two national roadmaps for AI. Without adequate institutional planning, there is a risk of national strategies being too monolithic in nature.  Without sufficient supporting mechanisms in the form of national institutions which would drive the AI research and innovation, capacity building and re-skilling of workforce to adapt to changing technological trends, building regulatory capacity to address new and emerging issues which may disrupt traditional forms of regulation and finally, creation of an environment of monetary support both from the public and private sector it becomes difficult to implement a national strategy and actualize the potentials of AI . As stated above, there is also a need for identification of key national policy problems which can be addressed by the use of AI, and the creation of a framework with institutional actors to articulate the appropriate plan of action to address the problems using AI. There are several ongoing global initiatives which are in the process of trying to articulate key principles for ethical AI. These discussions also feature in some of the national strategy documents.&lt;/p&gt;
&lt;h1 style="text-align: justify; "&gt;Key considerations for AI policymaking in India&lt;/h1&gt;
&lt;p style="text-align: justify; "&gt;As mentioned above, India has published two national AI strategies. We have responded to both of these here&lt;a href="#_ftn18" name="_ftnref18"&gt;&lt;sup&gt;&lt;sup&gt;[18]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and here.&lt;a href="#_ftn19" name="_ftnref19"&gt;&lt;sup&gt;&lt;sup&gt;[19]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Beyond these two roadmaps, this policy brief reflects on a number of factors that need to come together for India to leverage and adopt AI across sectors, communities, and technologies successfully.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Resources, Infrastructure, Markets, and Funding&lt;/h2&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Ensure adequate government funding and investment in R&amp;amp;D&lt;/b&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;As mentioned above, a survey of all major national strategies on AI reveals a significant financial commitment from governments towards research and development surrounding AI. Most strategy documents speak of the need to safeguard national ambitions in the race for AI development. In order to do so it is imperative to have a national strategy for AI research and development, identification of nodal agencies to enable the process, and creation of institutional capacity to carry out cutting edge research.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Most jurisdictions such as Japan, UK and China have discussed collaborations between the industry and government to ensure greater investment into AI research and development. The European Union has spoken using the existing public-private partnerships, particularly in robotics and big data to boost investment by over one and half times.&lt;a href="#_ftn20" name="_ftnref20"&gt;&lt;sup&gt;&lt;sup&gt;[20]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; To some extent, this  step has been initiated by the Niti Aayog strategy paper. The paper lists out enabling factors for the widespread adoption of AI and maps out specific government agencies and ministries that could promote such growth. In February 2018, the Ministry of Electronics and IT also set up four committees to prepare a roadmap for a national AI programme. The four committees are presently studying AI in context of citizen centric services; data platforms; skilling, reskilling and R&amp;amp;D; and legal, regulatory and cybersecurity perspectives.&lt;a href="#_ftn21" name="_ftnref21"&gt;&lt;sup&gt;&lt;sup&gt;[21]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Democratize AI technologies and data&lt;/b&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Clean, accurate, and appropriately curated data is essential for training algorithms. Importantly, large quantities of data alone does not translate into better results. Accuracy and curation of data should be prerequisites to quantity of data. Frameworks to generate and access larger quantity of data should not hinge on models of centralized data stores. The government and the private sector are generally gatekeepers to vast amounts of data and technologies. Ryan Calo has called this an issue of data parity,&lt;a href="#_ftn22" name="_ftnref22"&gt;&lt;sup&gt;&lt;sup&gt;[22]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; where only a few well established leaders in the field have the ability to acquire data and build datasets. Gaining access to data comes with its own questions of ownership, privacy, security, accuracy, and completeness. There are a number of different approaches and techniques that can be adopted to enable access to data.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Open Government Data &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Robust open data sets is one way in which access can be enabled. Open data is particularly important for small start-ups as they build prototypes. Even though India is a data dense country and has in place a National Data and Accessibility Policy India does not yet have robust and comprehensive open data sets across sectors and fields.  Our research found that this is standing as an obstacle to innovation in the Indian context as startups often turn to open datasets in the US and Europe for developing prototypes. Yet, this is problematic because the demography represented in the data set is significantly different resulting in the development of solutions that are trained to a specific demographic, and thus need to be re-trained on Indian data. Although AI is technology agnostic, in the cases of different use cases of data analysis, demographically different training data is not ideal. This is particularly true for certain categories such as health, employment, and financial data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The government can play a key role in providing access to datasets that will help the functioning and performance of AI technologies. The Indian government has already made a move towards accessible datasets through the Open Government Data Platform which provides access to a range of data collected by various ministries. Telangana has developed its own Open Data Policy which has stood out for its transparency and the quality of data collected and helps build AI based solutions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In order to encourage and facilitate innovation, the central and state governments need to actively pursue and implement the National Data and Accessibility Policy.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Access to Private Sector Data &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The private sector is the gatekeeper to large amounts of data. There is a need to explore different models of enabling access to private sector data while ensuring and protecting users rights and company IP. This data is often considered as a company asset and not shared with other stakeholders. Yet, this data is essential in enabling innovation in AI.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Amanda Levendowski states that ML practitioners have essentially three options in securing sufficient data— build the databases themselves, buy the data, or use data in the public domain. The first two alternatives are largely available to big firms or institutions. Smaller firms often end resorting to the third option but it carries greater risks of bias.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A solution could be federated access, with companies allowing access to researchers and developers to encrypted data without sharing the actual data.  Another solution that has been proposed is ‘watermarking’ data sets.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Data sandboxes have been promoted as tools for enabling innovation while protecting privacy, security etc. Data sandboxes allow companies access to large anonymized data sets under controlled circumstances. A regulatory sandbox is a controlled environment with relaxed regulations that allow the product to be tested thoroughly before it is launched to the public. By providing certification and safe spaces for testing, the government will encourage innovation in this sphere. This system has already been adopted in Japan where there are AI specific regulatory sandboxes to drive society 5.0.160 data sandboxes are tools that can be considered within specific sectors to enable innovation. A sector wide data sandbox was also contemplated by TRAI.&lt;a href="#_ftn23" name="_ftnref23"&gt;&lt;sup&gt;&lt;sup&gt;[23]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; A sector specific governance structure can establish a system of ethical reviews of underlying data used to feed the AI technology along with data collected in order to ensure that this data is complete, accurate and has integrity. A similar system has been developed by Statistics Norway and the Norwegian Centre for Research Data.&lt;a href="#_ftn24" name="_ftnref24"&gt;&lt;sup&gt;&lt;sup&gt;[24]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;AI Marketplaces&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The National Roadmap for Artificial Intelligence by NITI Aayog proposes the creation of a National AI marketplace that is comprised of a data marketplace, data annotation marketplace, and deployable model marketplace/solutions marketplace.&lt;a href="#_ftn25" name="_ftnref25"&gt;&lt;sup&gt;&lt;sup&gt;[25]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; In particular, it is envisioned that the data marketplace would be based on blockchain technology and have the features of: traceability, access controls, compliance with local and international regulations, and robust price discovery mechanism for data. Other questions that will need to be answered center around pricing and ensuring equal access. It will also be interesting how the government incentivises the provision of data by private sector companies. Most data marketplaces that are emerging are initiated by the private sector.&lt;a href="#_ftn26" name="_ftnref26"&gt;&lt;sup&gt;&lt;sup&gt;[26]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; A government initiated marketplace has the potential to bring parity to some of the questions raised above, but it should be strictly limited to private sector data in order to not replace open government data.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Open Source Technology &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;A number of companies are now offering open source AI technologies. For example, TensorFlow, Keras, Scikit-learn, Microsoft Cognitive Toolkit, Theano, Caffe, Torch, and Accord.NET.&lt;a href="#_ftn27" name="_ftnref27"&gt;&lt;sup&gt;&lt;sup&gt;[27]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; The government should incentivise and promote open source AI technologies towards harnessing and accelerating research in AI.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Re-thinking Intellectual Property Regimes &lt;/b&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Going forward it will be important for the government to develop an intellectual property framework that encourages innovation. AI systems are trained by reading, viewing, and listening to copies of human-created works. These resources such as books, articles, photographs, films, videos, and audio recordings are all key subjects of copyright protection. Copyright law grants exclusive rights to copyright owners, including the right to reproduce their works in copies, and one who violates one of those exclusive rights “is an infringer of copyright.&lt;a href="#_ftn28" name="_ftnref28"&gt;&lt;sup&gt;&lt;sup&gt;[28]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The enterprise of AI is, to this extent, designed to conflict with tenets of copyright law, and after the attempted ‘democratization’ of copyrighted content by the advent of the Internet, AI poses the latest challenge to copyright law. At the centre of this challenge is the fact that it remains an open question whether a copy made to train AI is a “copy” under copyright law, and consequently whether such a copy is an infringement.&lt;a href="#_ftn29" name="_ftnref29"&gt;&lt;sup&gt;&lt;sup&gt;[29]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; The fractured jurisprudence on copyright law is likely to pose interesting legal questions with newer use cases of AI. For instance, Google has developed a technique called federated learning, popularly referred to as on-device ML, in which training data is localised to the originating mobile device rather than copying data to a centralized server.&lt;a href="#_ftn30" name="_ftnref30"&gt;&lt;sup&gt;&lt;sup&gt;[30]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; The key copyright questions here is whether decentralized training data stored in random access memory (RAM) would be considered as “copies”.&lt;a href="#_ftn31" name="_ftnref31"&gt;&lt;sup&gt;&lt;sup&gt;[31]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; There are also suggestions that copies made for the purpose of training of machine learning systems may be so trivial or de minimis that they may not qualify as infringement.&lt;a href="#_ftn32" name="_ftnref32"&gt;&lt;sup&gt;&lt;sup&gt;[32]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; For any industry to flourish, there needs to be legal and regulatory clarity and it is imperative that these copyright questions emerging out of use of AI be addressed soon.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;As noted in our response to the Niti Aayog national AI strategy  “&lt;i&gt;The report also blames the current Indian  Intellectual Property regime for being “unattractive” and averse to incentivising research and adoption of AI. Section 3(k) of Patents Act exempts algorithms from being patented, and the Computer Related Inventions (CRI) Guidelines have faced much controversy over the patentability of mere software without a novel hardware component. The paper provides no concrete answers to the question of whether it should be permissible to patent algorithms, and if yes, to  to what extent. Furthermore, there needs to be a standard either in the CRI Guidelines or the Patent Act, that distinguishes between AI algorithms and non-AI algorithms. Additionally, given that there is no historical precedence on the requirement of patent rights to incentivise creation of AI,  innovative investment protection mechanisms that have lesser negative externalities, such as compensatory liability regimes would be more desirable.  The report further failed to look at the issue holistically and recognize that facilitating rampant patenting can form a barrier to smaller companies from using or developing  AI. This is important to be cognizant of given the central role of startups to the AI ecosystem in India and because it can work against the larger goal of inclusion articulated by the report.”&lt;a href="#_ftn33" name="_ftnref33"&gt;&lt;sup&gt;&lt;b&gt;&lt;sup&gt;[33]&lt;/sup&gt;&lt;/b&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/i&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;National infrastructure to support domestic development &lt;/b&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Building a robust national Artificial Intelligence solution requires establishing adequate indigenous  infrastructural capacity for data storage and processing.  While this should not necessarily extend to mandating data localisation as the draft privacy bill has done, capacity should be developed to store data sets generated by indigenous nodal points.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;AI Data Storage &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Capacity needs to increase as the volume of data that needs to be processed in India increases. This includes ensuring effective storage capacity, IOPS (Input/Output per second) and ability to process massive amounts of data.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;AI Networking Infrastructure&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Organizations will need to upgrade their networks in a bid to upgrade and optimize efficiencies of scale. Scalability must be undertaken on a high priority which will require a high-bandwidth, low latency and creative architecture, which requires appropriate last mile data curation enforcement.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Conceptualization and Implementation&lt;/h2&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Awareness, Education, and Reskilling &lt;/b&gt;&lt;/h3&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Encouraging AI research&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;This can be achieved by collaborations between the government and large companies to promote accessibility and encourage innovation through greater R&amp;amp;D spending. The Government of Karnataka, for instance, is collaborating with NASSCOM to set up a Centre of Excellence for Data Science and Artificial Intelligence (CoE-DS&amp;amp;AI) on a public-private partnership model to “accelerate the ecosystem in Karnataka by providing the impetus for the development of data science and artificial intelligence across the country.” Similar centres could be incubated in hospitals and medical colleges in India.  Principles of public funded research such as FOSS, open standards, and open data should be core to government initiatives to encourage research.  The Niti Aaayog report proposes a two tier integrated approach towards accelerating research, but is currently silent on these principles.&lt;a href="#_ftn34" name="_ftnref34"&gt;&lt;sup&gt;&lt;sup&gt;[34]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Therefore,as suggested by the NITI AAYOG Report, the government needs to set up ‘centres of excellence’. Building upon the stakeholders identified in the NITI AAYOG Report, the centers of excellence should  involve a wide range of experts including lawyers, political philosophers, software developers, sociologists and gender studies from diverse organizations including government, civil society,the private sector and research institutions  to ensure the fair and efficient roll out of the technology.&lt;a href="#_ftn35" name="_ftnref35"&gt;&lt;sup&gt;&lt;sup&gt;[35]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; An example is the Leverhulme Centre for the Future of Intelligence set up by the Leverhulme Foundation at the University of Cambridge&lt;a href="#_ftn36" name="_ftnref36"&gt;&lt;sup&gt;&lt;sup&gt;[36]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and the AI Now Institute at New York University (NYU)&lt;a href="#_ftn37" name="_ftnref37"&gt;&lt;sup&gt;&lt;sup&gt;[37]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; These research centres bring together a wide range of experts from all over the globe.&lt;a href="#_ftn38" name="_ftnref38"&gt;&lt;sup&gt;&lt;sup&gt;[38]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Skill sets to successfully adopt AI&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Educational institutions should provide opportunities for students to skill themselves to adapt to adoption of AI, and also push for academic programmes around AI. It is also important to introduce computing technologies such as AI in medical schools in order to equip doctors to adopt the technical skill sets and ethics required to use integrate AI in their practices. Similarly, IT institutes could include courses on ethics, privacy, accountability etc. to equip engineers and developers with an understanding of the questions surrounding the technology and services they are developing.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Societal Awareness Building&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Much of the discussion around skilling for AI is in the context of the workplace, but there is a need for awareness to be developed across society for a broader adaptation to AI. The Niti Aayog report takes the first steps towards this - noting the importance of highlighting the benefits of AI to the public. The conversation needs to go beyond this towards enabling individuals to recognize and adapt to changes that might be brought about - directly and indirectly - by AI - inside and outside of the workplace. This could include catalyzing a shift in mindset to life long learning and discussion around potential implications of human-machine interactions.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Early Childhood Awareness and Education &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;It is important that awareness around AI begins in early childhood. This is  in part because children already interact with AI and increasingly will do so and thus awareness is needed in how AI works and can be safely and ethically used. It is also important to start building the skills that will be necessary in an AI driven society from a young age.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Focus on marginalised groups &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Awareness, skills, and education should be targeted at national minorities including rural communities, the disabled, and women. Further, there should be a concerted  focus on communities that are under-represented in the tech sector-such as women and sexual minorities-to ensure that the algorithms themselves and the community working on AI driven solutions are holistic and cohesive. For example, Iridescent focuses on girls, children, and families to enable them to adapt to changes like artificial intelligence through promoting curiosity, creativity, and perseverance to become lifelong learners.&lt;a href="#_ftn39" name="_ftnref39"&gt;&lt;sup&gt;&lt;sup&gt;[39]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; This will be important towards ensuring that AI does not deepen societal  and global inequalities including digital divides. Widespread use of AI will undoubtedly require re-skilling various stakeholders in order to make them aware of the prospects of AI.&lt;a href="#_ftn40" name="_ftnref40"&gt;&lt;sup&gt;&lt;sup&gt;[40]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Artificial Intelligence itself can be used as a resource in the re-skilling process itself-as it would be used in the education sector to gauge people’s comfort with the technology and plug necessary gaps.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Improved access to and awareness of Internet of Things&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The development of smart content or Intelligent Tutoring Systems in the education can only be done on a large scale if both the teacher and the student has access to and feel comfortable with using basic IoT devices . A U.K. government report has suggested that any skilled workforce  using AI should be a mix of those with a basic understanding responsible for implementation at the grassroots level , more informed users and specialists with advanced development and implementation skills.&lt;a href="#_ftn41" name="_ftnref41"&gt;&lt;sup&gt;&lt;sup&gt;[41]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;The same logic applies to the agriculture sector, where the government is looking to develop smart weather-pattern tracking applications. A potential short-term solution may lie in ensuring that key actors have access to an  IoT device so that he/she may access digital and then impart the benefits of access to proximate individuals. In the education sector, this would involve ensuring that all teachers have access to and are competent in using an IoT device. In the agricultural sector, this may involve equipping each village with a set of IoT devices so that the information can be shared among concerned individuals. Such an approach recognizes that AI is not the only technology catalyzing change - for example industry 4.0 is understood as  comprising of a suite of technologies including but not limited to AI.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Public Discourse&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;As solutions bring together and process vast amounts of granular data, this data can be from a variety of public and private sources - from third party sources or generated by the AI and its interaction with its environment. This means that very granular and non traditional data points are now going into decision making processes. Public discussion is needed to understand social and cultural norms and standards and how these might translate into acceptable use norms for data in various sectors.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Coordination and collaboration across stakeholders &lt;/b&gt;&lt;/h3&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Development of Contextually Nuanced and Appropriate AI Solutions &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Towards ensuring effectiveness and  accuracy it is important that solutions used in India are developed to account for cultural nuances and diversity. From our research this could be done in a number of ways ranging from: training AI solutions used in health on data from Indian patients to account for differences in demographics&lt;a href="#_ftn42" name="_ftnref42"&gt;&lt;sup&gt;&lt;sup&gt;[42]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;,  focussing on  natural language voice recognition to account for the diversity in languages and digital skills in the Indian context,&lt;a href="#_ftn43" name="_ftnref43"&gt;&lt;sup&gt;&lt;sup&gt;[43]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and developing and applying AI to reflect societal norms and understandings.&lt;a href="#_ftn44" name="_ftnref44"&gt;&lt;sup&gt;&lt;sup&gt;[44]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Continuing, deepening, and expanding  partnerships for innovation&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Continued innovation while holistically accounting for the challenges that AI poses  will be key for actors in the different sectors to remain competitive. As noted across case study reports partnerships is key in  facilitating this innovation and filling capacity gaps. These partnerships can be across sectors, institutions, domains, geographies, and stakeholder groups. For example:  finance/ telecom, public/private, national/international, ethics/software development/law, and academia/civil society/industry/government.  We would emphasize collaboration between actors across different domains and stakeholder groups as developing holistics AI solutions demands multiple understandings and perspectives.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Coordinated Implementation&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Key sectors in India need to  begin to take steps to consider sector wide coordination in implementing AI. Potential stress and system wide vulnerabilities would need to be considered when undertaking this. Sectoral regulators such as RBI, TRAI, and the Medical Council of India are ideally placed to lead this coordination.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Develop contextual standard benchmarks to assess quality of algorithms&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;In part because of the nacency of the development and implementation of AI,  towards enabling effective assessments of algorithms to understand impact and informing selection by institutions adopting solutions, standard benchmarks can help in assessing quality and appropriateness of algorithms. It may be most effective to define such benchmarks at a sectoral level (finance etc.) or by technology and solution (facial recognition etc.).  Ideally, these efforts would be led by the government in collaboration with multiple stakeholders.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Developing a framework for working with the private sector for use-cases by the government&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;There are various potential use cases the government could adopt in order to use AI as a tool for augmenting public service delivery  in India by the government. However, given lack of capacity -both human resource and technological-means that entering into partnerships with the private sector may enable more fruitful harnessing of AI- as has been seen with existing MOUs in the agricultural&lt;a href="#_ftn45" name="_ftnref45"&gt;&lt;sup&gt;&lt;sup&gt;[45]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and healthcare sectors.&lt;a href="#_ftn46" name="_ftnref46"&gt;&lt;sup&gt;&lt;sup&gt;[46]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; However, the partnership must be used as a means to build capacity within the various nodes in the set-up rather than relying  only on  the private sector partner to continue delivering sustainable solutions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Particularly, in the case of use of AI for governance, there is a need to evolve a clear parameter to do impact assessment prior to the deployment of the technology that clearly tries to map estimated impact of the technology of clearly defined objectives, which must also include the due process, procedural fairness and human rights considerations . As per Article 12 of the Indian Constitution, whenever the government is exercising a public function, it is bound by the entire gamut of fundamental rights articulated in Part III of the Constitution. This is a crucial consideration the government will have to bear in mind whenever it uses AI-regardless of the sector.  In all cases of public service delivery, primary accountability for the use of AI should lie with the government itself, which means that a cohesive and uniform framework which regulates these partnerships must be conceptualised. This framework should incorporate : (a) Uniformity in the wording and content of contracts that the government signs, (b) Imposition of obligations of transparency and accountability on the developer to ensure that the solutions developed are in conjunction with constitutional standards and (c) Continuous evaluation of private sector developers by the government and experts to ensure that they are complying with their obligations.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Defining Safety Critical AI&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The implications of AI differs according to use. Some countries, such as the EU, are beginning to define sectors where AI should play the role of augmenting jobs as opposed to functioning autonomously. The Global Partnership on AI is has termed sectors where AI tools supplement or replace human decision making in areas such as health and transportation as ‘safety critical AI’ and is  researching best practices for application of AI in these areas.  India will need to think through if there is a threshold that needs to be set and more stringent regulation applied. In addition to uses in health and transportation, defense and law enforcement would be another sector where certain use would require more stringent regulation.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Appropriate certification mechanisms&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Appropriate certificate mechanisms will be important in ensuring the quality of AI solutions.   A significant barrier to the adoption of AI  in some sectors  in India is acceptability of results, which include direct results arrived at using AI technologies as well as opinions provided by practitioners that are influenced/aided by AI technologies. For instance, start-ups in the healthcare sectors often find that they are asked to show proof of a clinical trial when presenting their products to doctors and hospitals, yet clinical trials are expensive, time consuming and inappropriate forms of certification for medical devices and digital health platforms. Startups also face difficulty in conducting clinical trials as there is lack of a clear regulation to adhere to. They believe that while clinical trials are a necessity with respect to drugs, the process often results in obsolescence of the technology by the time it is approved in the context of AI. Yet, medical practitioners are less trusting towards startups who do not have approval from a national or international authority. A possible and partial solution suggested by these startups is to enable doctors to partner with them to conduct clinical trials together. However, such partnerships cannot be at the expense of rigour, and adequate protections need to be built in the enabling regulation.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Serving as a voice for emerging economies in the global debate on AI&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;While India should utilise Artificial Intelligence in the economy as a means of occupying a driving role in the global debate around AI, it must be cautious before allowing the use of Indian territory and infrastructure as a test bed for other emerging economies without considering the ramifications that the utilisation of AI may have for Indian citizens. The NITI AAYOG Report envisions  India as leverage AI as a ‘garage’ for emerging economies.&lt;a href="#_ftn47" name="_ftnref47"&gt;&lt;sup&gt;&lt;sup&gt;[47]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; While there are certain positive connotations of this suggestion in so far as this propels India to occupy a leadership position-both technically and normatively in determining future use cases for AI in India,, in order to ensure that Indian citizens are not used as test subjects in this process, guiding principles could be developed such as requiring that projects have clear benefits for India.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Frameworks for Regulation&lt;/h2&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;National legislation&lt;/b&gt;&lt;/h3&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Data Protection Law&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;India is a data-dense country, and the lack of a robust privacy  regime, allows the public and private sector easier access to large amounts of data than might be found in other contexts with stringent privacy laws. India also lacks a formal regulatory regime around anonymization. In our research we found that this gap does not always translate into a gap in practice, as some start up companies have  adopted  self-regulatory practices towards protecting privacy such as of anonymising data they receive before using it further, but it does result in unclear and unharmonized practice..&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In order to ensure rights and address emerging challenges to the same posed by artificial intelligence, India needs to enact   a comprehensive privacy legislation applicable to the private and public sector to regulate the use of data, including use in artificial intelligence. A privacy legislation will also have to address more complicated questions such as the use of publicly available data for training algorithms, how traditional data categories (PI vs. SPDI - meta data vs. content data etc.) need to be revisited in light of AI,  and how can a privacy legislation be applied to autonomous decision making. Similarly, surveillance laws may need to be revisited in light of AI driven technologies such as facial recognition, UAS, and self driving cars as they provide new means of surveillance to the state and have potential implications for other rights such as the right to freedom of expression and the right to assembly.  Sectoral protections can compliment and build upon the baseline protections articulated in a national privacy legislation.&lt;a href="#_ftn48" name="_ftnref48"&gt;&lt;sup&gt;&lt;sup&gt;[48]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; In August 2018 the Srikrishna Committee released a draft data protection bill for India. We have reflected on how the Bill addresses AI. 
Though the Bill brings under its scope companies deploying emerging technologies and subjects them to the principles of privacy by design and data impact assessments, the Bill is silent on key rights and responsibilities, namely the responsibility of the data controller to explain the logic and impact of automated decision making including profiling to data subjects and the right to opt out of automated decision making in defined circumstances.&lt;a href="#_ftn49" name="_ftnref49"&gt;&lt;sup&gt;&lt;sup&gt;[49]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Further, the development of technological solutions to address the dilemma between AI and the need for access to larger quantities of data for multiple purposes and privacy should be emphasized.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Discrimination Law&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;A growing area of research globally is the social consequences of AI with a particular focus on its tendency to replicate or amplify existing and structural inequalities. Problems such as data invisibility of certain excluded groups,&lt;a href="#_ftn50" name="_ftnref50"&gt;&lt;sup&gt;&lt;sup&gt;[50]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; the myth of data objectivity and neutrality,&lt;a href="#_ftn51" name="_ftnref51"&gt;&lt;sup&gt;&lt;sup&gt;[51]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and data monopolization&lt;a href="#_ftn52" name="_ftnref52"&gt;&lt;sup&gt;&lt;sup&gt;[52]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; contribute to the disparate impacts of big data and AI. So far much of the research on this subject has not moved beyond the exploratory phase as is reflected in the reports released by the White House&lt;a href="#_ftn53" name="_ftnref53"&gt;&lt;sup&gt;&lt;sup&gt;[53]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and Federal Trade Commission&lt;a href="#_ftn54" name="_ftnref54"&gt;&lt;sup&gt;&lt;sup&gt;[54]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; in the United States. The biggest challenge in addressing discriminatory and disparate impacts of AI is ascertaining “where value-added personalization and segmentation ends and where harmful discrimination begins.”&lt;a href="#_ftn55" name="_ftnref55"&gt;&lt;sup&gt;&lt;sup&gt;[55]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some prominent cases where AI can have discriminatory impact are denial of loans based on attributes such as neighbourhood of residence as a proxies which can be used to circumvent anti-discrimination laws which prevent adverse determination on the grounds of race, religion, caste or gender, or adverse findings by predictive policing against persons who are unfavorably represented in the structurally biased datasets used by the law enforcement agencies. There is a dire need for disparate impact regulation in sectors which see the emerging use of AI.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Similar to disparate impact regulation, developments in AI, and its utilisation, especially in credit rating, or risk assessment processes could create complex problems that cannot be solved only by the principle based regulation. Instead, regulation intended specifically to avoid outcomes that the regulators feel are completely against the consumer, could be an additional tool that increases the fairness, and effectiveness of the system.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Competition Law&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The conversation of use of competition or antitrust laws to govern AI is still at an early stage. However, the emergence of numerous data driven mergers or acquisitions such as Yahoo-Verizon, Microsoft-LinkedIn and Facebook-WhatsApp have made it difficult to ignore the potential role of competition law in the governance of data collection and processing practices. It is important to note that the impact of Big Data goes far beyond digital markets and the mergers of companies such as Bayer, Climate Corp and Monsanto shows that data driven business models can also lead to the convergence of companies from completely different sectors as well. So far, courts in Europe have looked at questions such as the impact of combination of databases on competition&lt;a href="#_ftn56" name="_ftnref56"&gt;&lt;sup&gt;&lt;sup&gt;[56]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; and have held that in the context of merger control, data can be a relevant question if an undertaking achieves a dominant position through a merger, making it capable of gaining further market power through increased amounts of customer data. The evaluation of the market advantages of specific datasets has already been done in the past, and factors which have been deemed to be relevant have included whether the dataset could be replicated under reasonable conditions by competitors and whether the use of the dataset was likely to result in a significant competitive advantage.&lt;a href="#_ftn57" name="_ftnref57"&gt;&lt;sup&gt;&lt;sup&gt;[57]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; However, there are limited circumstances in which big data meets the four traditional criteria for being a barrier to entry or a source of sustainable competitive advantage — inimitability, rarity, value, and non-substitutability.&lt;a href="#_ftn58" name="_ftnref58"&gt;&lt;sup&gt;&lt;sup&gt;[58]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Any use of competition law to curb data-exclusionary or data-exploitative practices will first have to meet the threshold of establishing capacity for a firm to derive market power from its ability to sustain datasets unavailable to its competitors. In this context the peculiar ways in which network effects, multi-homing practices and how dynamic the digital markets are, are all relevant factors which could have both positive and negative impacts on competition. There is a need for greater discussion on data as a sources of market power in both digital and non-digital markets, and how this legal position can used to curb data monopolies, especially in light of government backed monopolies for identity verification and payments in India.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Consumer Protection Law&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;The Consumer Protection Bill, 2015, tabled in the Parliament towards the end of the monsoon session has introduced an expansive definition of the term “unfair trade practices.” The definition as per the Bill includes the disclosure “to any other person any personal information given in confidence by the consumer.” This clause excludes from the scope of unfair trade practices, disclosures under provisions of any law in force or in public interest. This provision could have significant impact on the personal data protection law in India. Alongside, there is also a need to ensure that principles such as safeguarding consumers personal information in order to ensure that the same is not used to their detriment are included within the definition of unfair trade practices. This would provide consumers an efficient and relatively speedy forum to contest adverse impacts on them of data driven decision-making.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Sectoral Regulation &lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;Our research into sectoral case studies revealed that there are a number of existing sectoral laws and policies that are applicable to aspects of AI. For example, in the health sector there is the Medical Council Professional Conduct, Etiquette, and Ethics Regulations 2002, the Electronic Health Records Standards 2016, the draft Medical Devices Rules 2017, the draft Digital Information Security in Healthcare Act.  In the finance sector there is the Credit Information Companies (Regulation) Act 2005 and 2006, the Securities and Exchange Board of India (Investment Advisers) Regulations, 2013, the Payment and Settlement Systems Act, 2007, the Banking Regulations Act 1949, SEBI guidelines on robo advisors etc. Before new regulations, guidelines etc are developed - a comprehensive exercise needs to be undertaken at a sectoral level to understand if 1. sectoral policy adequately addresses the changes being brought about by AI 2. If it does not - is an amendment possible and if not - what form of policy would fill the gap.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Principled approach&lt;/b&gt;&lt;/h3&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Transparency&lt;/b&gt;&lt;/h4&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Audits&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;Internal and external audits can be mechanisms towards creating transparency about the processes and results of AI solutions as they are implemented in a specific context. Audits can take place while a solution is still in ‘pilot’ mode and on a regular basis during implementation. For example,  in the Payment Card Industry (PCI) tool,  transparency is achieved through frequent audits, the results of which are simultaneously and instantly transmitted to the regulator and the developer. Ideally parts of the results of the audit are also made available to the public, even if the entire results are not shared.&lt;/p&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Tiered Levels of Transparency&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;There are different levels and forms of transparency as well as different ways of achieving the same. The type and form of transparency can be tiered and dependent on factors such as criticality of function, potential direct and indirect harm, sensitivity of data involved, actor using the solution . The audience can also be tiered and could range from an individual user to senior level positions, to oversight bodies.&lt;/p&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Human Facing Transparency&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;It will be important for India to define standards around human-machine interaction including the level of transparency that will be required. Will chatbots need to disclose that they are chatbots? Will a notice need to be posted that facial recognition technology is used in a CCTV camera? Will a company need to disclose in terms of service and privacy policies that data is processed via an AI driven solution? Will there be a distinction if the AI takes the decision autonomously vs. if the AI played an augmenting role? Presently, the Niti Aayog paper has been silent on this question.&lt;/p&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Explainability&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;An explanation is not equivalent to complete  transparency. The obligation of providing an explanation does not mean  that the developer should necessarily  know the flow of bits through the AI system. Instead, the legal requirement of providing an explanation requires an ability to explain how certain parameters may be utilised to arrive at an outcome in a certain situation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Doshi-Velez and Kortz have highlighted two technical ideas that may enhance a developer's ability to explain the functioning of AI systems:&lt;a href="#_ftn59" name="_ftnref59"&gt;&lt;sup&gt;&lt;sup&gt;[59]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;1) Differentiation and processing: AI systems are designed to have the inputs differentiated and processed through various forms of computation-in a reproducible and robust manner. Therefore, developers should be able to explain a particular decision by examining the inputs in an attempt to determine which of them have the greatest impact on the outcome.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;2) Counterfactual faithfulness: The second property of counterfactual faithfulness enables the developer to consider which factors caused a difference in the outcomes. Both these solutions can be deployed without necessarily knowing the contents of black boxes. As per Pasquale, ‘Explainability matters because the process of reason-giving is intrinsic to juridical determinations – not simply one modular characteristic jettisoned as anachronistic once automated prediction is sufficiently advanced.”&lt;a href="#_ftn60" name="_ftnref60"&gt;&lt;sup&gt;&lt;sup&gt;[60]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Rules based system applied contextually&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;Oswald et al have suggested two proposals that might  mitigate algorithmic opacity.by designing a broad rules-based system, whose implementation need to be applied in a context-specific manner which thoroughly evaluates the key enablers and challengers in each specific use case.&lt;a href="#_ftn61" name="_ftnref61"&gt;&lt;sup&gt;&lt;sup&gt;[61]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;Experimental proportionality would enable courts to make proportionality determinations about an algorithm at the experimental stage, before its impacts are fully realised, ensuring that appropriate metrics for performance evaluation and cohesive principles of design have been adopted. In such cases, they recommend that courts give the benefit of the doubt to the public sector body, subject to another hearing within a stipulated period of time once data on the impacts of the algorithm becomes more readily available.&lt;/li&gt;
&lt;li&gt;‘ALGO-CARE' calls for the design of a rules-based system which ensures that the algorithms&lt;a href="#_ftn62" name="_ftnref62"&gt;&lt;sup&gt;&lt;sup&gt;[62]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; are:&lt;/li&gt;
&lt;/ul&gt;
&lt;p style="text-align: justify; "&gt;(1) Advisory: Algorithms must retain an advisory capacity that augments existing human capability rather than replacing human discretion outright;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(2) Lawful: Algorithm's proposed function, application, individual effect and use of datasets should be considered in  symbiosis with necessity, proportionality and data minimisation principles;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(3) Granularity: Issues such as data analysis issues such as meaning of data, challenges stemming from disparate tracts of data, omitted data and inferences  should be key points in the implementation process;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(4) Ownership: Due regard should be given to intellectual property ownership but in the case of algorithms used for governance, it may be better to have open source algorithms at the default.  Regardless of the sector,the developer must ensure that the algorithm works in a manner that enables a third party to investigate the workings of the algorithm in an adversarial judicial context.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(5)Challengeable:The results of algorithmic analysis should be applied with regard to professional codes and regulations and be challengeable. In a report evaluating the NITI AAYOG  Discussion Paper, CIS has argued that AI that is used for governance , must be made auditable in the public domain,if not under Free and Open Source Software (FOSS)-particularly in the case of AI that has implications for fundamental rights.&lt;a href="#_ftn63" name="_ftnref63"&gt;&lt;sup&gt;&lt;sup&gt;[63]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(6) Accuracy: The design of the algorithm should check for accuracy;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(7) Responsible: Should consider a wider set of ethical and moral principles and the foundations of human rights as a guarantor of human dignity at all levels and&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;(8) Explainable: Machine Learning should be interpretable and accountable.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A rules based system like ALGO-CARE can enable predictability in use frameworks for AI. Predictability compliments and strengthens  transparency.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Accountability&lt;/b&gt;&lt;/h4&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Conduct Impact Assessment&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;There is a need to evolve Algorithmic Impact Assessment frameworks for the different sectors in India, which should address issues of bias, unfairness and other harmful impacts of use of automated decision making. AI is a nascent field and the impact of the technology on the economy, society, etc. is still yet to be fully understood. Impact assessment standards will be important in identifying and addressing potential or existing harms and could potentially be more important in sectors or uses where there is direct human interaction with AI or power dimensions - such as in healthcare or use by the government. A 2018 Report by the AI Now Institute lists methods that should be adopted by the government for conducting his holistic assessment&lt;a href="#_ftn64" name="_ftnref64"&gt;&lt;sup&gt;&lt;sup&gt;[64]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;: These should  include: (1) Self-assessment by the government department in charge of implementing the technology, (2)Development of meaningful inter-disciplinary external researcher review mechanisms, (3) Notice to the public regarding  self-assessment and external review, (4)Soliciting of public comments for clarification or concerns, (5) Special regard to vulnerable communities who may not be able to exercise their voice in public proceedings. An adequate review mechanism which holistically evaluates the impact of AI would ideally include all five of these components in conjunction with each other.&lt;/p&gt;
&lt;h5 style="text-align: justify; "&gt;&lt;b&gt;Regulation of Algorithms&lt;/b&gt;&lt;/h5&gt;
&lt;p style="text-align: justify; "&gt;Experts have voiced concerns about AI mimicking human prejudices due to the biases present in the Machine Learning algorithms. Scientists have revealed through their research that machine learning algorithms can imbibe gender and racial prejudices which are ingrained in language patterns or data collection processes. Since AI and machine algorithms are data driven, they arrive at results and solutions based on available &lt;br /&gt; and historical data. When this data itself is biased, the solutions presented by the AI will also be biased. While this is inherently discriminatory, scientists have provided solutions to rectify these biases which can occur at various stages by introducing a counter bias at another stage. It has also been suggested that data samples should be shaped in such a manner so as to minimise the chances of algorithmic bias. Ideally regulation of algorithms could be tailored - explainability, traceability, scrutability. We recommend that the national strategy on AI policy must take these factors into account and combination of a central agency driving the agenda, and sectoral actors framing regulations around specific uses of AI that are problematic and implementation is required.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;As the government begins to adopt AI into governance - the extent to which and the  circumstances autonomous decision making capabilities can be delegated to AI need to be questioned. Questions on whether AI should be autonomous, should always have a human in the loop, and should have a ‘kill-switch’ when used in such contexts also need to be answered. A framework or high level principles can help to guide these determinations. For example:&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;Modeling Human Behaviour: An AI solution trying to model human behaviour, as in the case of judicial decision-making or predictive policing, may need to be more heavily regulated, adhere to stricter standards, and require more oversight than an algorithm that is trying to predict ‘natural’ phenomena such as traffic congestion or weather patterns.&lt;/li&gt;
&lt;li&gt;Human Impact: An AI solution that could cause greater harm if applied erroneously, such as a robot soldier that mistakenly targets a civilian, requires a different level and framework of regulation than an AI solution designed to create a learning path for a student in the education sector that errs in making an appropriate assessment.&lt;/li&gt;
&lt;li&gt;Primary User: AI solutions whose primary users are state agents discharging duties in the public interest, such as police officers, should be approached with more caution than those used by private individuals, such as farmers receiving weather alerts.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Fairness&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;It is possible to incorporate broad definitions of fairness into a wide range of data analysis and classification systems.&lt;a href="#_ftn65" name="_ftnref65"&gt;&lt;sup&gt;&lt;sup&gt;[65]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; While there can be no bright-line rules that will necessarily enable the operator or designer of a Machine Learning System to arrive at an ex ante determination of fairness, from a public policy perspective, there must be a set of rules or best practices that explain how notions of fairness should be utilised in the real world applications of AI-driven solutions.&lt;a href="#_ftn66" name="_ftnref66"&gt;&lt;sup&gt;&lt;sup&gt;[66]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; While broad parameters should be encoded by the developer to ensure compliance with constitutional standards, it is also crucial that the functioning of the algorithm allows for an ex-post determination of fairness by an independent oversight body if the impact of the AI driven solution is challenged.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Further, while there is no precedent on this anywhere in the world, India could consider establishing a Committee entrusted with the specific task of continuously evaluating the operation of AI-driven algorithms. Questions that the government would need to answer with regard to this body include:&lt;/p&gt;
&lt;ul style="text-align: justify; "&gt;
&lt;li&gt;What should the composition of the body be?&lt;/li&gt;
&lt;li&gt;What should be the procedural mechanisms that govern the operation of the body?&lt;/li&gt;
&lt;li&gt;When should the review committee step in? This is crucial because excessive review may re-entrench the bureaucracy that the AI-driven solution was looking to eliminate.&lt;/li&gt;
&lt;li&gt;What information will be necessary for the review committee to carry out its determination? Will there be conflicts with IP, and if so how will these be resolved?&lt;/li&gt;
&lt;li&gt;To what degree will the findings of the committee be made public?&lt;/li&gt;
&lt;li&gt;What powers will the committee have? Beyond making determinations, how will these be enforced?&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 style="text-align: justify; "&gt;&lt;b&gt;Market incentives&lt;/b&gt;&lt;/h3&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Standards as a means to address data issues&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;With digitisation of legacy records and the ability to capture more granular data digitally, one of the biggest challenges facing Big Data is a lack of standardised data and interoperability frameworks. This is particularly true in the healthcare and medicine sector where medical records do not follow a clear standard, which poses a challenge to their datafication and analysis. The presence of developed standards in data management and exchange,  interoperable Distributed Application Platform and Services, Semantic related standards for markup, structure, query, semantics, Information access and exchange have been spoken of as essential to address the issues of lack of standards in Big Data.&lt;a href="#_ftn67" name="_ftnref67"&gt;&lt;sup&gt;&lt;sup&gt;[67]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Towards enabling usability of data, it is important that clear data standards are established. This has been recognized by Niti Aayog in its National Strategy for AI. On one hand, there can operational issues with allowing each organisation to choose their own specific standards to operate under, while on the other hand, non-uniform digitisation of data will also cause several practical problems, most primarily to do with interoperability of the individual services, as well as their usability. For instance, in the healthcare sector, though India has adopted an EHR policy, implementation of this policy is not yet harmonized - leading to different interpretations of ‘digitizing records (i.e taking snapshots of doctor notes), retention methods and periods, and comprehensive implementation across all hospital data. Similarly, while independent banks and other financial organisations are already following, or in the process of developing internal practices,there exist no uniform standards for digitisation of financial data. As AI development, and application becomes more mainstream in the financial sector, the lack of a fixed standard could create significant problems.&lt;/p&gt;
&lt;h4 style="text-align: justify; "&gt;&lt;b&gt;Better Design Principles in Data Collection&lt;/b&gt;&lt;/h4&gt;
&lt;p style="text-align: justify; "&gt;An enduring criticism of the existing notice and consent framework has been that long, verbose and unintelligible privacy notices are not efficient in informing individuals and helping them make rational choices. While this problem predates Big Data, it has only become more pronounced in recent times, given the ubiquity of data collection and implicit ways in which data is being collected and harvested. Further, constrained interfaces on mobile devices, wearables, and smart home devices connected in an Internet of Things amplify the usability issues of the privacy notices. Some of the issues with privacy notices include Notice complexity, lack of real choices, notices decoupled from the system collecting data etc. An industry standard for a design approach to privacy notices which includes looking at factors such as the timing of the notice, the channels used for communicating the notices, the modality (written, audio, machine readable, visual) of the notice and whether the notice only provides information or also include choices within its framework, would be of great help.  Further, use of privacy by design principles can be done not just at the level of privacy notices but at each step of the information flow, and the architecture of the system can be geared towards more privacy enhanced choices.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref1" name="_ftn1"&gt;&lt;sup&gt;&lt;sup&gt;[1]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/blog/artificial-intelligence-in-india-a-compendium&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref2" name="_ftn2"&gt;&lt;sup&gt;&lt;sup&gt;[2]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf"&gt;https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref3" name="_ftn3"&gt;&lt;sup&gt;&lt;sup&gt;[3]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf"&gt;https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref4" name="_ftn4"&gt;&lt;sup&gt;&lt;sup&gt;[4]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref5" name="_ftn5"&gt;&lt;sup&gt;&lt;sup&gt;[5]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://www.nedo.go.jp/content/100865202.pdf"&gt;http://www.nedo.go.jp/content/100865202.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref6" name="_ftn6"&gt;&lt;sup&gt;&lt;sup&gt;[6]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.eu-robotics.net/sparc/10-success-stories/european-robotics-creating-new-markets.html?changelang=2&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref7" name="_ftn7"&gt;&lt;sup&gt;&lt;sup&gt;[7]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy"&gt;https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref8" name="_ftn8"&gt;&lt;sup&gt;&lt;sup&gt;[8]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://www.newamerica.org/cybersecurity-initiative/blog/chinas-plan-lead-ai-purpose-prospects-and-problems/"&gt;https://www.newamerica.org/cybersecurity-initiative/blog/chinas-plan-lead-ai-purpose-prospects-and-problems/&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref9" name="_ftn9"&gt;&lt;sup&gt;&lt;sup&gt;[9]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://www.uaeai.ae/en/"&gt;http://www.uaeai.ae/en/&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref10" name="_ftn10"&gt;&lt;sup&gt;&lt;sup&gt;[10]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://www.aisingapore.org/"&gt;https://www.aisingapore.org/&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref11" name="_ftn11"&gt;&lt;sup&gt;&lt;sup&gt;[11]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://news.joins.com/article/22625271"&gt;https://news.joins.com/article/22625271&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref12" name="_ftn12"&gt;&lt;sup&gt;&lt;sup&gt;[12]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf"&gt;https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref13" name="_ftn13"&gt;&lt;sup&gt;&lt;sup&gt;[13]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe"&gt;https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe&lt;/a&gt; &lt;a href="https://www.euractiv.com/section/digital/news/twenty-four-eu-countries-sign-artificial-intelligence-pact-in-bid-to-compete-with-us-china/"&gt;https://www.euractiv.com/section/digital/news/twenty-four-eu-countries-sign-artificial-intelligence-pact-in-bid-to-compete-with-us-china/&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref14" name="_ftn14"&gt;&lt;sup&gt;&lt;sup&gt;[14]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.aitf.org.in/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref15" name="_ftn15"&gt;&lt;sup&gt;&lt;sup&gt;[15]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; http://www.niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref16" name="_ftn16"&gt;&lt;sup&gt;&lt;sup&gt;[16]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref17" name="_ftn17"&gt;&lt;sup&gt;&lt;sup&gt;[17]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref18" name="_ftn18"&gt;&lt;sup&gt;&lt;sup&gt;[18]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref19" name="_ftn19"&gt;&lt;sup&gt;&lt;sup&gt;[19]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref20" name="_ftn20"&gt;&lt;sup&gt;&lt;sup&gt;[20]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe"&gt;https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref21" name="_ftn21"&gt;&lt;sup&gt;&lt;sup&gt;[21]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; http://pib.nic.in/newsite/PrintRelease.aspx?relid=181007&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref22" name="_ftn22"&gt;&lt;sup&gt;&lt;sup&gt;[22]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Ryan Calo, 2017 Artificial Intelligence Policy: A Primer and Roadmap. U.C. Davis L. Review,&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Vol. 51, pp. 398 - 435.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt; &lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref23" name="_ftn23"&gt;&lt;sup&gt;&lt;sup&gt;[23]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://trai.gov.in/sites/default/files/CIS_07_11_2017.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref24" name="_ftn24"&gt;&lt;sup&gt;&lt;sup&gt;[24]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref25" name="_ftn25"&gt;&lt;sup&gt;&lt;sup&gt;[25]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; http://www.niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref26" name="_ftn26"&gt;&lt;sup&gt;&lt;sup&gt;[26]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://martechtoday.com/bottos-launches-a-marketplace-for-data-to-train-ai-models-214265&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref27" name="_ftn27"&gt;&lt;sup&gt;&lt;sup&gt;[27]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://opensource.com/article/18/5/top-8-open-source-ai-technologies-machine-learning&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref28" name="_ftn28"&gt;&lt;sup&gt;&lt;sup&gt;[28]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Implicit Bias Problem, 93 WASH. L. REV. (forthcoming 2018) (manuscript at 23, 27-32),&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3024938"&gt;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3024938&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref29" name="_ftn29"&gt;&lt;sup&gt;&lt;sup&gt;[29]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Id&lt;/i&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref30" name="_ftn30"&gt;&lt;sup&gt;&lt;sup&gt;[30]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; H. Brendan McMahan, et al., Communication-Efficient Learning of Deep Networks&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;from Decentralized Data, arXiv:1602.05629 (Feb. 17, 2016), &lt;a href="https://arxiv.org/abs/1602.05629"&gt;https://arxiv.org/abs/1602.05629&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref31" name="_ftn31"&gt;&lt;sup&gt;&lt;sup&gt;[31]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;i&gt;Id&lt;/i&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref32" name="_ftn32"&gt;&lt;sup&gt;&lt;sup&gt;[32]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Pierre N. Leval, Nimmer Lecture: Fair Use Rescued, 44 UCLA L. REV. 1449, 1457 (1997).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref33" name="_ftn33"&gt;&lt;sup&gt;&lt;sup&gt;[33]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref34" name="_ftn34"&gt;&lt;sup&gt;&lt;sup&gt;[34]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref35" name="_ftn35"&gt;&lt;sup&gt;&lt;sup&gt;[35]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Discussion Paper on National Strategy for Artificial Intelligence | NITI Aayog | National Institution for Transforming India. (n.d.) p. 54. Retrieved from http://niti.gov.in/content/national-strategy-ai-discussion-paper.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref36" name="_ftn36"&gt;&lt;sup&gt;&lt;sup&gt;[36]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Leverhulme Centre for the Future of Intelligence, http://lcfi.ac.uk/.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref37" name="_ftn37"&gt;&lt;sup&gt;&lt;sup&gt;[37]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; AI Now, https://ainowinstitute.org/.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref38" name="_ftn38"&gt;&lt;sup&gt;&lt;sup&gt;[38]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref39" name="_ftn39"&gt;&lt;sup&gt;&lt;sup&gt;[39]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; http://iridescentlearning.org/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref40" name="_ftn40"&gt;&lt;sup&gt;&lt;sup&gt;[40]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref41" name="_ftn41"&gt;&lt;sup&gt;&lt;sup&gt;[41]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Points, L., &amp;amp; Potton, E. (2017). Artificial intelligence and automation in the UK.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref42" name="_ftn42"&gt;&lt;sup&gt;&lt;sup&gt;[42]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Paul, Y., Hickok, E., Sinha, A. and Tiwari, U., Artificial Intelligence in the Healthcare Industry in India, Centre for Internet and Society. Available at &lt;a href="https://cis-india.org/internet-governance/files/ai-and-healtchare-report"&gt;https://cis-india.org/internet-governance/files/ai-and-healtchare-report&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref43" name="_ftn43"&gt;&lt;sup&gt;&lt;sup&gt;[43]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Goudarzi, S., Hickok, E., and Sinha, A., AI in the Banking and Finance Industry in India,  Centre for Internet and Society. Available at &lt;a href="https://cis-india.org/internet-governance/blog/ai-in-banking-and-finance"&gt;https://cis-india.org/internet-governance/blog/ai-in-banking-and-finance&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref44" name="_ftn44"&gt;&lt;sup&gt;&lt;sup&gt;[44]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Paul, Y., Hickok, E., Sinha, A. and Tiwari, U., Artificial Intelligence in the Healthcare Industry in India, Centre for Internet and Society. Available at &lt;a href="https://cis-india.org/internet-governance/files/ai-and-healtchare-report"&gt;https://cis-india.org/internet-governance/files/ai-and-healtchare-report&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref45" name="_ftn45"&gt;&lt;sup&gt;&lt;sup&gt;[45]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://news.microsoft.com/en-in/government-karnataka-inks-mou-microsoft-use-ai-digital-agriculture/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref46" name="_ftn46"&gt;&lt;sup&gt;&lt;sup&gt;[46]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://news.microsoft.com/en-in/government-telangana-adopts-microsoft-cloud-becomes-first-state-use-artificial-intelligence-eye-care-screening-children/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref47" name="_ftn47"&gt;&lt;sup&gt;&lt;sup&gt;[47]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; NITI Aayog. (2018). Discussion Paper on National Strategy for Artificial Intelligence. Retrieved from http://niti.gov.in/content/national-strategy-ai-discussion-paper. 18&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref48" name="_ftn48"&gt;&lt;sup&gt;&lt;sup&gt;[48]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://edps.europa.eu/sites/edp/files/publication/16-10-19_marrakesh_ai_paper_en.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref49" name="_ftn49"&gt;&lt;sup&gt;&lt;sup&gt;[49]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref50" name="_ftn50"&gt;&lt;sup&gt;&lt;sup&gt;[50]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; J. Schradie, The Digital Production Gap: The Digital Divide and Web 2.0 Collide. Elsevier Poetics, 39 (1).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref51" name="_ftn51"&gt;&lt;sup&gt;&lt;sup&gt;[51]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; D Lazer, et al., The Parable of Google Flu: Traps in Big Data Analysis. Science. 343 (1).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref52" name="_ftn52"&gt;&lt;sup&gt;&lt;sup&gt;[52]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Danah Boyd and Kate Crawford,  Critical Questions for Big Data. Information, Communication &amp;amp; Society. 15 (5).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref53" name="_ftn53"&gt;&lt;sup&gt;&lt;sup&gt;[53]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; John Podesta, (2014) Big Data: Seizing Opportunities, Preserving Values, available at&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="http://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf"&gt;http://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref54" name="_ftn54"&gt;&lt;sup&gt;&lt;sup&gt;[54]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; E. Ramirez, (2014) FTC to Examine Effects of Big Data on Low Income and Underserved Consumers at September Workshop, available at &lt;a href="http://www.ftc.gov/news-events/press-releases/2014/04/ftc-examine-effects-big-data-lowincome-underserved-consumers"&gt;http://www.ftc.gov/news-events/press-releases/2014/04/ftc-examine-effects-big-data-lowincome-underserved-consumers&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref55" name="_ftn55"&gt;&lt;sup&gt;&lt;sup&gt;[55]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; M. Schrage, Big Data’s Dangerous New Era of Discrimination, available at &lt;a href="http://blogs.hbr.org/2014/01/bigdatas-dangerous-new-era-of-discrimination/"&gt;http://blogs.hbr.org/2014/01/bigdatas-dangerous-new-era-of-discrimination/&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref56" name="_ftn56"&gt;&lt;sup&gt;&lt;sup&gt;[56]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Google/DoubleClick Merger case&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref57" name="_ftn57"&gt;&lt;sup&gt;&lt;sup&gt;[57]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; French Competition Authority, Opinion n°10-A-13 of 1406.2010,&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;http://www.autoritedelaconcurrence.fr/pdf/avis/10a13.pdf. That opinion of the Authority aimed at&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;giving general guidance on that subject. It did not focus on any particular market or industry&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;although it described a possible application of its analysis to the telecom industry.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref58" name="_ftn58"&gt;&lt;sup&gt;&lt;sup&gt;[58]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://www.analysisgroup.com/is-big-data-a-true-source-of-market-power/#sthash.5ZHmrD1m.dpuf"&gt;http://www.analysisgroup.com/is-big-data-a-true-source-of-market-power/#sthash.5ZHmrD1m.dpuf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref59" name="_ftn59"&gt;&lt;sup&gt;&lt;sup&gt;[59]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O'Brien, D., ... &amp;amp; Wood, A. (2017). Accountability of AI under the law: The role of explanation. arXiv preprint arXiv:1711.01134.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref60" name="_ftn60"&gt;&lt;sup&gt;&lt;sup&gt;[60]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Frank A. Pasquale ‘Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society’ (July 14, 2017). Ohio State Law Journal, Vol. 78, 2017; U of Maryland Legal Studies Research Paper No. 2017-21, 7.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref61" name="_ftn61"&gt;&lt;sup&gt;&lt;sup&gt;[61]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Oswald, M., Grace, J., Urwin, S., &amp;amp; Barnes, G. C. (2018). Algorithmic risk assessment policing models: lessons from the Durham HART model and ‘Experimental’ proportionality. Information &amp;amp; Communications Technology Law, 27(2), 223-250.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref62" name="_ftn62"&gt;&lt;sup&gt;&lt;sup&gt;[62]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref63" name="_ftn63"&gt;&lt;sup&gt;&lt;sup&gt;[63]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Abraham S., Hickok E., Sinha A., Barooah S., Mohandas S., Bidare P. M., Dasgupta S., Ramachandran V., and Kumar S., NITI Aayog Discussion Paper: An aspirational step towards India’s AI policy. Retrieved from https://cis-india.org/internet-governance/files/niti-aayog-discussion-paper.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref64" name="_ftn64"&gt;&lt;sup&gt;&lt;sup&gt;[64]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Reisman D., Schultz J., Crawford K., Whittaker M., (2018, April) Algorithmic Impact Assessments: A Practical Framework For Public Agency Accountability. Retrieved from https://ainowinstitute.org/aiareport2018.pdf.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref65" name="_ftn65"&gt;&lt;sup&gt;&lt;sup&gt;[65]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Sample I., (2017, November 5) Computer says no: why making AIs fair, accountable and transparent is crucial. Retrieved from &lt;a href="https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial"&gt;https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref66" name="_ftn66"&gt;&lt;sup&gt;&lt;sup&gt;[66]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., &amp;amp; Yu, H. (2016). Accountable algorithms. U. Pa. L. Rev., 165, 633.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="#_ftnref67" name="_ftn67"&gt;&lt;sup&gt;&lt;sup&gt;[67]&lt;/sup&gt;&lt;/sup&gt;&lt;/a&gt; &lt;a href="http://www.iso.org/iso/big_data_report-jtc1.pdf"&gt;http://www.iso.org/iso/big_data_report-jtc1.pdf&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/ai-in-india-a-policy-agenda'&gt;https://cis-india.org/internet-governance/blog/ai-in-india-a-policy-agenda&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amber Sinha, Elonnai Hickok and Arindrajit Basu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-09-05T15:39:59Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/first-post-october-12-2017-ahead-of-data-protection-law-roll-out-experts-caution-that-it-shouldnt-limit-collection-and-use-of-data">
    <title>Ahead of data protection law roll out, experts caution that it shouldn't limit collection and use of data</title>
    <link>https://cis-india.org/internet-governance/news/first-post-october-12-2017-ahead-of-data-protection-law-roll-out-experts-caution-that-it-shouldnt-limit-collection-and-use-of-data</link>
    <description>
&lt;b&gt;With India planning to roll out a new data protection regime following the landmark Supreme Court judgment upholding the right to privacy as a fundamental right, experts have cautioned that the new law should not limit the collection and use of data.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article was &lt;a class="external-link" href="http://www.firstpost.com/tech/news-analysis/ahead-of-data-protection-law-roll-out-experts-caution-that-it-shouldnt-limit-collection-and-use-of-data-4134753.html"&gt;published by First Post&lt;/a&gt; on October 12, 2017. Sunil Abraham was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;"The new data protection law should have data-driven innovation at its core," said Kamlesh Bajaj, Founder-CEO, Data Security Council of India (DSCI).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"It should not limit data collection and use, but limit harm to citizens," Bajaj added at a seminar on "Data Protection and Privacy" organised by non-profit industry body Internet and Mobile Association of India (IAMAI).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In a major boost to individual freedom, the Supreme Court in August declared that right to privacy was a fundamental right and protected as an intrinsic part of life and personal liberty and freedoms guaranteed by the Constitution.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"The Supreme Court judgment calls for production of a new law," said Sunil Abraham, Executive Director of Bangaluru-based research organisation, Centre for Internet and Society (CIS).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The experts noted that the Supreme Court judgment remains meaningless for digital Indians without a proper data protection law in place as all other existing laws, such as the Information Technology Act, 2000, do not adequately address the question of right to privacy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Recognising the importance of data protection and keeping personal data of citizens secure and protected, the Ministry of Electronics and Information Technology (MeitY) on 31 July, constituted a Committee of Experts under the chairmanship of its former judge Justice BN Srikrishna to study and identify key data protection issues and recommend methods for addressing them. The committee will also suggest a draft Data Protection Bill.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"While the regulator should be given tools to make companies behave better, it should not start with harsh punitive actions," Abraham noted, adding that big fines could challenge the very logic of regulation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In a question to whether a robust data protection regime should come in conflict with issue such as national security, he said that lawmakers should find a way to maximise both imperatives.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"Surveillance is like salt in cooking. It is necessary, but in limited quantity," he added.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Participating in a chat with Google's Public Policy Director Chetan Krishnaswamy at the event, MP Rajeev Chandrasekhar, however, said that regulation should start with the process of data collection itself and consumers cannot be expected to demonstrate harm or inappropriate use of their data to enjoy the right to privacy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"It should not be a free run for companies to mine consumer data," the independent Rajya Sabha member said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He emphasised that the process of formulating a data protection law is as important as the law itself, and that all stakeholders should be able to openly put forward their views and apprehensions; only through such a consultative process can the opportunities for the technology space be safeguarded.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/first-post-october-12-2017-ahead-of-data-protection-law-roll-out-experts-caution-that-it-shouldnt-limit-collection-and-use-of-data'&gt;https://cis-india.org/internet-governance/news/first-post-october-12-2017-ahead-of-data-protection-law-roll-out-experts-caution-that-it-shouldnt-limit-collection-and-use-of-data&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-01-02T15:20:48Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/bloomberg-quint-nishant-sharma-september-27-2018-after-sc-setback-fintech-firms-await-clarity-on-aadhaar">
    <title>After Supreme Court Setback, Fintech Firms Await Clarity On Aadhaar</title>
    <link>https://cis-india.org/internet-governance/news/bloomberg-quint-nishant-sharma-september-27-2018-after-sc-setback-fintech-firms-await-clarity-on-aadhaar</link>
    <description>
        &lt;b&gt;The 12-digit Aadhaar number is now out of bounds for fintech companies in India.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Nishant Sharma was &lt;a class="external-link" href="https://www.bloombergquint.com/aadhaar/after-supreme-court-setback-fintech-firms-await-clarity-on-aadhaar"&gt;published in Bloomberg Quint&lt;/a&gt; on September 27, 2018. Pranesh Prakash was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3&gt;Video&lt;/h3&gt;
&lt;p&gt;&lt;iframe frameborder="0" height="315" src="https://www.youtube.com/embed/FiEbZcL3lnY" width="560"&gt;&lt;/iframe&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;With the Supreme Court on Wednesday terming Aadhaar authentication by private companies as “&lt;a href="https://www.bloombergquint.com/law-and-policy/2018/09/26/aadhaar-a-quick-summary-of-the-supreme-court-majority-order" target="_blank"&gt;unconstitutional&lt;/a&gt;”,  companies such as online wallets and e-tailers, among others, will now  have to make changes to how they onboard and verify customers, in  addition to how they transact.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In a 567-page majority judgment authored by Justice Sikri and concurred with by two other judges, Chief Justice Dipak Misra and Justice AM Khanwilkar, the court held that Section 57 of the Aadhaar Act, which allows private companies to use Aadhaar for authentication services based on a contract between the corporate and an individual, would enable commercial exploitation of private data and is hence unconstitutional.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“What it essentially means is that private bodies, such as lending platforms, wallets, or any private entity, cannot use Aadhaar for authentication,” said Anirudh Rastogi, founder of Ikigai Law (formerly TRA), a law firm that specialises in representing businesses on data privacy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The decision is set to impact private companies such as Flipkart-owned PhonePe, Paytm, Reliance Jio and Amazon, which rely on Aadhaar for e-verification. Amazon recently launched cardless equated monthly installments on Amazon Pay through the digital finance platform Capital Float and asked customers to provide Aadhaar numbers or virtual ID and PAN details on the Amazon app for verification.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;'Aadhaar Is Just Another ID'&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Pranesh Prakash, fellow, Centre for Internet and Society, said that with this judgment Aadhaar is no longer the identity infrastructure its creators had dreamt of. “It is now just another ID.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;For those opposed to Aadhaar, on privacy and security grounds, this may be a part victory. But for the fintech industry it stymies the use of quick Aadhaar-based e-KYC (know your customer norms) to onboard customers. “The fintech industry thrives on the instant paperless mantra, and this move will curb its rapid growth,” Amrish Rau, co-founder of PayU, said in a text message.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The verdict is also set to push up costs for the  industry. Rau said: “Conducting physical KYC would be a costly affair,  with every physical KYC costing about Rs 100 per person.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Companies like PhonePe await more clarity. “We are waiting to hear from bodies like the Reserve Bank of India and UIDAI on what KYC will be required for wallets moving ahead," Sameer Nigam, co-founder of PhonePe, said. "Whether we go to a no-KYC, lower-limit environment or go to the physical KYC environment."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The  judgment also stated that the identification number will not be  mandatory for opening bank accounts, mobile-phone connections or for  admissions into educational institutions. However, Aadhaar will continue  to be mandatory for the distribution of state-sponsored welfare schemes  including direct benefit transfers and the public distribution system.  Taxpayers will have to link their Permanent Account Numbers to the  biometric database.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Aadhaar-Based KYC: Allowed With Consent?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The  Supreme Court has concluded that the part of section 57 which enables  body corporate and individuals also to seek authentication, that too on  the basis of a contract between the individual and such body corporate  or person, would impinge upon the right to privacy of such individuals.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Prasanna S, a Supreme Court advocate and lawyer for one of the petitioners in the Aadhaar matter, interpreted it to mean that even if a customer voluntarily wants to use Aadhaar for e-KYC, businesses cannot accept it.&lt;/p&gt;
&lt;blockquote style="text-align: justify; "&gt;They have struck down the part of Section 57 that allows use of Aadhaar based on a contract. A contract, by nature, is voluntary. But since the court has struck down this part, even voluntary use won’t be permitted.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;Prasanna S, Advocate, Supreme Court&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Jaitley Hints At Legal Backing&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Meanwhile,  Finance Minister Arun Jaitley on Wednesday hinted that the Centre is  likely to examine whether separate legal backing is needed for Section  57 of the Aadhaar Act, the newswire PTI reported. “So, let us first read  the judgement. There are two-three prohibited areas. Are they because  they are totally prohibited or are they because they need legal  backing,” Jaitley was quoted as saying.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Rastogi of Ikigai Law said  that the court has left open for the government to promulgate a law to  enable private parties to use Aadhaar that can withstand judicial  scrutiny.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Rahul Matthan, a technology partner at law firm Trilegal, differed with this view. He said that since the apex court has ruled that private entities cannot access the Aadhaar infrastructure, it means that even if the government brings a specific law to allow for that, it would be unconstitutional.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Prasanna agreed with this interpretation.&lt;/p&gt;
&lt;blockquote style="text-align: justify; "&gt;The court has hinted that commercial exploitation of personal information will fail the proportionality test laid down by it in the Right to Privacy judgment. This is one of the grounds for them to conclude that Section 57 is unconstitutional. So even if a law is introduced, private access will be impermissible.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;Prasanna S, Advocate, Supreme Court&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Are Aadhaar-Based KYCs Tainted?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Since the use of Aadhaar by private entities has been struck down, does it mean entities that have used it for KYC so far have to re-do that exercise? And does the data that was collected as part of Aadhaar-based KYC need to be deleted?&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The majority order hasn’t specifically addressed these questions, Matthan pointed out, but he went on to explain that his reading of the judgment is that the court wants things to remain as they are.&lt;/p&gt;
&lt;blockquote style="text-align: justify; "&gt;The Supreme Court has said that collection of data before the Aadhaar Act was introduced is valid. If you follow that sentiment, maybe we can argue that there’s no requirement to delete the data.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;Rahul Matthan, Partner, Trilegal&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Whatever has been done without the authority of law has to go, Prasanna said. But this outcome may not be practical, and another hearing before the Supreme Court may be required to clear these questions, he added.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Private entities such as online cab aggregator Ola had already removed eKYC from their e-wallets when BloombergQuint last checked. Others may follow suit.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/bloomberg-quint-nishant-sharma-september-27-2018-after-sc-setback-fintech-firms-await-clarity-on-aadhaar'&gt;https://cis-india.org/internet-governance/news/bloomberg-quint-nishant-sharma-september-27-2018-after-sc-setback-fintech-firms-await-clarity-on-aadhaar&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-10-01T23:39:42Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access">
    <title>After data leak row, Facebook imposes restrictions on user data access</title>
    <link>https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access</link>
    <description>
&lt;b&gt;MeitY issues notice to Facebook even as experts debate the full impact on the second largest developer community.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Romita Majumdar and Kiran Rathee was published in &lt;a class="external-link" href="http://www.business-standard.com/article/current-affairs/after-data-leak-row-facebook-imposes-restrictions-on-user-data-access-118040500950_1.html"&gt;Business Standard&lt;/a&gt; on April 6, 2018.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Social media giant &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;has finally reacted to the global storm around its data privacy policies by bringing in a new set of restrictions on developers and data aggregators using the platform for data harvesting.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Two weeks ago we promised to take a hard look at the information apps can use when you connect them to &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;as well as other data practices. We will remove a developer’s ability to request data people shared with them if it appears they have not used the app in the last 3 months,” said &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;Chief Technology Officer &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=mark+schroepfer" target="_blank"&gt;Mike Schroepfer &lt;/a&gt;in a blog post.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;&lt;span&gt;has also disabled the feature to search for a user by their email address or phone number, which had been abused by malicious actors, and reduced the overall control that apps will have over user data.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;has also submitted its response to the Indian government, saying over 500,000 people in India have been potentially affected by the data breach involving &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=cambridge+analytica" target="_blank"&gt;Cambridge Analytica.&lt;/a&gt; Government sources said that as the social networking firm has now accepted that Indians’ data was compromised, the issue becomes much more important and serious. “We will wait for Cambridge Analytica’s reply and then, we will take our stand,” sources in the &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=electronics" target="_blank"&gt;Electronics &lt;/a&gt;and IT Ministry said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Ministry had issued notices to both &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;and Cambridge Analytica, seeking their responses regarding the data breach of Indians and if it was used to influence elections.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The new set of restrictions clamps down on how much data app developers can access on the platform and also prevents third-party data providers from offering targeted marketing services on &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook.&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"India is the second largest &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;developer base and the restriction on users' data access is going to impact all of them. There will be more scrutiny in &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;apps, leading to slower approvals. Virality will reduce as explicit consent will be required for accessing friends' data and contacts list,” said Vivek Prakash, CTO and Co-Founder, HackerEarth.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He added that there could be tighter terms of service making developers also liable for unauthorized processing of data that they collect from the apps.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Executive Director of the Centre for Internet and Society Sunil Abraham says that while &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;says “apps need to agree to strict requirements” and promises “tightening our review process”, it is still not clear what these requirements are. “Instead of the promised link to whether user data was accessed by Cambridge Analytica, it would make sense for them to say &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=facebook" target="_blank"&gt;Facebook &lt;/a&gt;holds W number of records across X databases over the time period Y, which totals Z GB, while explaining what these variables stand for,” he said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Consumer data marketing company Hansa Cequity believes that digital marketing arms of most companies will finally have to consider building their own user databases, given the strict clampdown on third-party data. “Businesses can no longer use data from third-party aggregators for targeted advertising. Consumer goods and entertainment related brands are likely to face some impact because they depend on access to such data,” said S Swaminathan, Co-Founder and CEO, Hansa Cequity.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some experts also believe that this move might force platforms like Twitter, &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=google" target="_blank"&gt;Google &lt;/a&gt;and &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=youtube" target="_blank"&gt;YouTube &lt;/a&gt;to rethink their policies on how much access they give advertisers and data aggregators to user data. Abraham also added that app developers and their investors have to evaluate business models that depend more on value to user rather than the amount of personal data harvested. The data that has already been harvested by the likes of &lt;a class="storyTags" href="http://www.business-standard.com/search?type=news&amp;amp;q=cambridge+analytica" target="_blank"&gt;Cambridge Analytica &lt;/a&gt;and other unknown parties, however, is beyond user control forever.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access'&gt;https://cis-india.org/internet-governance/news/business-standard-romita-majumdar-and-kiran-rathee-after-data-leak-row-facebook-imposes-restrictions-on-user-data-access&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Facebook</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-04-07T15:30:31Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/biometric-update-june-26-2021-chris-burt-advanced-biometric-technologies-and-new-market-entries-tackle-fraud-chase-digital-id-billions">
    <title>Advanced biometric technologies and new market entries tackle fraud, chase digital ID billions</title>
    <link>https://cis-india.org/internet-governance/news/biometric-update-june-26-2021-chris-burt-advanced-biometric-technologies-and-new-market-entries-tackle-fraud-chase-digital-id-billions</link>
    <description>
&lt;b&gt;Amid forecasts of rapid growth and huge market potential, digital ID platform launches by Techsign and Ping Identity, new services and features, and even an investment fund have been announced.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The blog post by Chris Burt was &lt;a class="external-link" href="https://www.biometricupdate.com/202106/advanced-biometric-technologies-and-new-market-entries-tackle-fraud-chase-digital-id-billions"&gt;published by Biometric Update&lt;/a&gt; on June 26, 2021.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A new camera solution for under-display 3D face biometrics from Infineon and partners, and IPO filings by Clear and SenseTime show parallel investment activity in biometrics, meanwhile, and experts from Veridium and Intellicheck provide insight into the shifting technology and fraud landscapes, among the most widely-read stories this week on Biometric Update.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Top biometrics news of the week&lt;/h2&gt;
&lt;p style="text-align: justify; "&gt;Several areas of the digital identity market continued to be very active, with a new investment fund launched to support startups in digital commerce and payments, Yoti joining a regulatory sandbox, Techsign launching a digital ID platform, and Mastercard and b.well reporting positive results from a recent pilot for their biometric healthcare platform. All this activity contributes to explaining Juniper Research’s &lt;a href="https://www.biometricupdate.com/202106/digital-identity-verification-market-forecast-to-reach-16-7b-by-2026"&gt;forecast of rapid growth&lt;/a&gt; in the sector to $16.7 billion in 2026, driven largely by spending on remote onboarding.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Okta CEO Todd McKinnon, meanwhile, told Barron’s that the total addressable market for identity and access management providers like Okta is something like &lt;a href="https://www.biometricupdate.com/202106/okta-ceo-says-total-addressable-identity-and-access-management-market-near-80b"&gt;$80 billion&lt;/a&gt;, as well as that effective integration is the key to solving biometrics challenges in the space. Entrust and Yubico formed an integration partnership, LoginRadius launched a new feature, Jamf launched a biometric tool for enterprises, and a certification program for IAM professionals was launched.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A list of goods for sale on the dark web includes a listing for &lt;a href="https://www.biometricupdate.com/202106/biometric-selfies-and-forged-passports-identities-for-sale-on-the-dark-web"&gt;selfies holding an American ID credential&lt;/a&gt;, which in theory could be used in a biometric spoofing attack. Cybersecurity researcher Luana Pascu helps guide readers through the report, and shares insights such as on the status of faked vaccination certificates on dark web marketplaces.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Ensuring the validity of the ID document a biometric identity verification process is based on, without adding too much friction, often means adopting &lt;a href="https://www.biometricupdate.com/202106/intellicheck-ceo-on-building-the-foundations-for-biometric-verification-and-fraud-protection"&gt;layered risk profiling&lt;/a&gt;, Intellicheck CEO Bryan Lewis tells &lt;em&gt;Biometric Update&lt;/em&gt; in a sponsored post. The company has deep roots in detecting fraudulent documents and has found that even scanning the barcode on an identity document will not necessarily catch a fake if the unique security elements are not validated as part of the scan.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Fourthline Anti-Financial Crime Head Ro Paddock writes in a Biometric Update guest post about the ever-increasing sophistication of fraud attacks, which reached the level of computer-generated &lt;a href="https://www.biometricupdate.com/202106/the-fraudsters-new-game-face"&gt;3D masks and deepfakes&lt;/a&gt; during the pandemic. In response, information-sharing between organizations will be necessary to understand the scope of these new threats, and how to defend against them.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Philippines’ election commission has launched an app to allow people to preregister for the &lt;a href="https://www.biometricupdate.com/202106/philippines-launches-app-to-fast-track-biometric-voter-registration"&gt;voter roll online&lt;/a&gt; before enrolling their biometrics in person, as the country continues digitizing its public services. Governments in Pakistan, Haiti and Nigeria are also making moves to improve the accessibility and trustworthiness of their electoral processes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A partnership between Research ICT Africa and the Centre for Internet and Society, supported by the Omidyar Network, to explore the development of digital ID systems for the African context is explained in a &lt;a href="https://researchictafrica.net/2021/06/21/why-digital-id-matters/" target="_blank"&gt;blog post&lt;/a&gt;. The project will be based on an adaptation of the Evaluation Framework for Digital Identities, with its rule of law, rights-based and risk-based tests, which CIS used to assess India’s Aadhaar system, and will be presented in a series of posts.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Details of Clear’s IPO plans emerged, including its intention to raise up to &lt;a href="https://www.biometricupdate.com/202106/clear-ipo-could-raise-up-to-396m-in-hot-biometrics-investment-market"&gt;$396 million&lt;/a&gt; on the NYSE. The $2.2 billion valuation aligns with some comparable companies, by revenue multiple, but the lower voting power of the shares on offer could be a restraining factor.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;An even bigger IPO could be held by SenseTime later this year, with the Chinese AI firm looking to raise up to $2 billion &lt;a href="https://www.biometricupdate.com/202106/not-smarting-from-us-sanctions-sensetime-says-its-ipo-is-on-again"&gt;on the Hong Kong exchange&lt;/a&gt;. The company has been talking about a public stock launch since before it was hit with restrictions on U.S. trade, which it indicates have had little impact.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The latest major funding round in digital identity is the largest yet, with &lt;a href="https://www.biometricupdate.com/202106/transmit-security-raises-543m-to-grow-biometric-passwordless-authentication"&gt;Transmit Security raising $543 million&lt;/a&gt; at a $2.2 billion valuation to expand the market reach of its passwordless biometric authentication technology. The company claims it is the highest ever Series A funding round in cybersecurity.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Bob Eckel, Aware CEO and International Biometrics + Identity Association (IBIA) Director and Board Member, discusses why people should own their own identity, identifying things and protecting supply chains, and his background in setting up air traffic control systems used all over the world, on the Requis &lt;a href="https://requis.com/podcasts/podcast-bob-eckel-biometrics-future-secured-identities/" target="_blank"&gt;Supply Chain Next podcast&lt;/a&gt;. In the longer term Eckel sees biometrics replacing passwords, and in the shorter term being used to make processes touchless.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Veridium CTO John Callahan guides Biometric Update through recent NIST guidance on the &lt;a href="https://www.biometricupdate.com/202106/nist-touchless-fingerprint-biometrics-guidance-confirms-interoperability"&gt;interoperable use of contactless fingerprints&lt;/a&gt; with contact-based back-end AFIS systems. The guidance, which changes definitions within the NIST ITL biometric container standard but advises that the associated image quality metric does not apply to contactless prints, could spark further investment in the modality.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A new time-of-flight 3D imaging solution that could be used to implement facial authentication from &lt;a href="https://www.biometricupdate.com/202106/under-display-camera-for-3d-face-biometrics-developed-by-infineon-pmd-arcsoft"&gt;under the display of mobile devices&lt;/a&gt; without notches or bezels has been developed by partners Infineon, pmdtechnologies and ArcSoft. Based on the REAL3 sensor and ArcSoft’s computer vision algorithms, the solution is expected to reach availability in Q3 2021.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a href="https://www.biometricupdate.com/202106/ping-identity-adds-behavioral-biometrics-and-bot-detection-with-securedtouch-acquisition"&gt;Ping Identity has acquired SecuredTouch&lt;/a&gt; in a deal with undisclosed financial details to integrate its behavioral biometrics-based continuous user authentication with the PingOne enterprise cloud platform. Ping also launched a consumer application for reusable credentials and added unified management features to its cloud platform at its Identiverse 2021 event.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Notre Dame-IBM Technology Ethics Lab Founding Director Elizabeth Renieris joins the MIT Sloan Management Review’s &lt;a href="https://sloanreview.mit.edu/audio/starting-now-on-technology-ethics-elizabeth-renieris/" target="_blank"&gt;Me, Myself and AI podcast&lt;/a&gt; to discuss the role of the lab, her path past and through some of the digital identity space’s key ethical developments, and the need to take the long view on technology to understand its ethical implications. Renieris makes a pitch for process-oriented regulations, based on the best understanding we have at the time.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;ProctorU’s announcement that it will no longer sell fully-automated remote proctoring services is seen as a win in the battle against “the AI shell game” by the &lt;a href="https://www.eff.org/deeplinks/2021/06/long-overdue-reckoning-online-proctoring-companies-may-finally-be-here" target="_blank"&gt;Electronic Frontier Foundation&lt;/a&gt;. The descriptions of the balance between the automated and human decision-making by AI proctoring providers amount to doublespeak, the EFF says, before panning their human review processes, accuracy rates, and use of facial recognition.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/biometric-update-june-26-2021-chris-burt-advanced-biometric-technologies-and-new-market-entries-tackle-fraud-chase-digital-id-billions'&gt;https://cis-india.org/internet-governance/news/biometric-update-june-26-2021-chris-burt-advanced-biometric-technologies-and-new-market-entries-tackle-fraud-chase-digital-id-billions&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Chris Burt</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Privacy</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>UIDAI</dc:subject>
    
    
        <dc:subject>Biometrics</dc:subject>
    
    
        <dc:subject>Aadhaar</dc:subject>
    

   <dc:date>2021-06-28T01:13:05Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/abli-privacy-workshop">
    <title>ABLI Privacy Workshop</title>
    <link>https://cis-india.org/internet-governance/news/abli-privacy-workshop</link>
    <description>
&lt;b&gt;On May 21 and 22, 2019, Elonnai Hickok participated in the ABLI privacy workshop and side events in Singapore.&lt;/b&gt;
        &lt;p&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/abli2019s-data-privacy-workshop"&gt;Click to view the agenda&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/abli-privacy-workshop'&gt;https://cis-india.org/internet-governance/news/abli-privacy-workshop&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2019-06-05T07:29:18Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/economic-times-july-23-2017-amber-sinha-aadhar-privacy-is-not-a-unidimensional-concept">
    <title>Aadhar: Privacy is not a unidimensional concept</title>
    <link>https://cis-india.org/internet-governance/blog/economic-times-july-23-2017-amber-sinha-aadhar-privacy-is-not-a-unidimensional-concept</link>
    <description>
        &lt;b&gt;Right to privacy is important not only for our negotiations with the information age but also to counter the transgressions of a welfare state. A robust right to privacy is essential for all Indian citizens to defend their individual autonomy in the face of invasive state actions purportedly for the public good.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article was published in the &lt;a class="external-link" href="http://economictimes.indiatimes.com/news/politics-and-nation/aadhar-privacy-is-not-a-unidimensional-concept/printarticle/59716562.cms"&gt;Economic Times&lt;/a&gt; on July 23, 2017.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The ruling of this nine-judge bench will have far-reaching impact on the extent and scope of rights available to us all. In a disappointing case of judicial evasion by the apex court, it has taken over 600 days since a reference order was passed in August 11, 2015, for this bench to be constituted. Over two days of arguments, the counsels for the petitioners have presented before the court why the right to privacy, despite not finding a mention in the Constitution of India, is a fundamental right essential to a person’s dignity and liberty, and must be read into not one but multiple articles of the Constitution. The government will make its arguments in the coming week.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One must wonder why we are debating the contours of the right to privacy, which 40 years of jurisprudence had lulled us into believing we already had. The answer to that can be found in a series of hearings in the Aadhaar case that began in 2012. Justice KS Puttaswamy, a former Karnataka High Court judge, filed a petition before the Supreme Court, questioning the validity of the Aadhaar project due its lack of legislative basis (since then the Aadhaar Act was passed in 2016) and its transgressions on our fundamental rights. Over time, a number of other petitions also made their way to the apex court, challenging different aspects of the Aadhaar project. Since then, five different interim orders by the Supreme Court have stated that no person should suffer because they do not have an Aadhaar number. Aadhaar, according to the court, could not be made mandatory to avail benefits and services from government schemes. Further, the court has limited the use of Aadhaar to specific schemes: LPG, PDS, MGNREGA, National Social Assistance Programme, the Pradhan Mantri Jan Dhan Yojna and EPFO.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The real spanner in the works in the progress of this case was the stand taken by Mukul Rohatgi, then attorney general of India who, in a hearing before the court in July 2015, stated that there is no constitutionally guaranteed right to privacy. His reliance was on two Supreme Court judgments in MP Sharma v Satish Chandra (1954) and Kharak Singh v State of Uttar Pradesh (1962): both cases, decided by eight- and six-judge benches respectively, denied the existence of a constitutional right to privacy. As the subsequent judgments which upheld the right to privacy were by smaller benches, Rohatgi claimed that MP Sharma and Kharak Singh still prevailed over them, until they were overruled by a larger bench.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The reference to a larger bench has since delayed the entire matter, even as a number of government schemes have made Aadhaar mandatory. This reading of privacy as a unidimensional concept by the courts is, with due respect, erroneous. Privacy, as a concept, includes within its scope, spatial, familial, informational and decisional aspects. We all have a legitimate expectation of privacy in our private spaces, such as our homes, and in our personal relationships. Similarly, we must be able to exercise some control over how personal data, like our financial information, are disseminated. Most importantly, privacy gives us the space to make autonomous choices and decisions without external interference. All these dimensions of privacy must stand as distinct rights. In MP Sharma, the court rejected a certain aspect of the right of privacy by refusing to acknowledge a right against search and seizure. This, in no way prevented the court, even in the form of a smaller bench, from ruling on any other aspects of privacy, including those that are relevant to the Aadhaar case.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The limited referral to this bench means that the court will have to rule on the status of privacy and its possible limitations in isolation, without even going into the details of the Aadhaar case (based on the nature of protection that this bench accords to privacy, the petitioners and defendants in the Aadhaar case will have to argue afresh on whether the project does impede on this most fundamental right). There are no facts of the case to ground the legal principles in, and defining the contours of a right can be a difficult exercise. The court must be wary of how any limits they put on the right may be used in future. Equally, it is important to articulate that any limitations on the right to privacy due to competing interests such as national security and public interest must be imposed only when necessary and always be proportionate.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It will not be enough for the court to merely state that we have a constitutional right to privacy. They would be well advised to cut through the muddle of existing privacy jurisprudence, and unequivocally establish the various facets of the right. Without that, we may not be able to withstand the modern dangers of surveillance, denial of bodily integrity and self-determination through forcible collection of information. The nine judges, in their collective wisdom, must not only ensure that we have a right to privacy, but also clearly articulate a robust reading of this right capable of withstanding the growing interferences with our autonomy.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/economic-times-july-23-2017-amber-sinha-aadhar-privacy-is-not-a-unidimensional-concept'&gt;https://cis-india.org/internet-governance/blog/economic-times-july-23-2017-amber-sinha-aadhar-privacy-is-not-a-unidimensional-concept&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>amber</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-08-23T01:50:19Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/news/aadhaar-truth">
    <title>Aadhaar’s moment of truth</title>
    <link>https://cis-india.org/news/aadhaar-truth</link>
    <description>
        &lt;b&gt;It’s time for the unique identity project to answer tough questions it has dodged so far, writes MA Arun in the Deccan Herald. &lt;/b&gt;
        
&lt;p&gt;On June 25, 2009, Prime Minister Manmohan Singh generated one of the biggest feel-good headlines of UPA2. He appointed former Infosys CEO Nandan Nilekani as Chairperson of Unique Identification Authority of India (UIDAI), which had been set up to assign a unique number to every resident of the country.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;UIDAI – billed as the world’s largest e-governance project – presented a numbing technical challenge. Fingerprint and iris samples of one billion plus Indian residents had to be collected along with details of name, gender, birth date and address. A unique identity had to be assigned to each resident in return and then authenticated online whenever called for.&lt;/p&gt;
&lt;p&gt;Nilekani, using his stature in the IT industry, assembled a smart team of engineers who could take the challenge head on. He also started tirelessly crisscrossing the country, promoting the project and tying up with different government agencies and PSUs.&lt;/p&gt;
&lt;p&gt;He addressed countless gatherings conveying a simple message: Indian growth has bypassed the poor and giving them legal identity was the first step in acknowledging their existence and making government services accessible.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;In the last two years, there has been little change in his script and in the response of the audience, which has by and large remained breathless and adulatory. There have been a few jarring notes. Once in a while he is accosted by individuals and organisations who say the project takes away their privacy.&lt;/p&gt;
&lt;p&gt;Most memorably, on January 7, 2011, Nilekani faced an uncharacteristically unruly audience at IISc, Bangalore, which demanded strong protection of privacy. People who attended the meeting found Nilekani evasive as protesting students waved placards outside the venue, urging him to go back.&lt;/p&gt;
&lt;p&gt;But for the media, this reporter included, the dissenting opinion from possibly fringe protesters sounded exaggerated – making too much of a small issue, debating an academic question of little practical value.&lt;/p&gt;
&lt;p&gt;Perhaps reflecting the larger prevailing sentiment on Aadhaar, Sujeet Pillai of Feecounter says that with the rise of social networking, privacy has already eroded. "We put more information on Facebook and Twitter than we share with Aadhaar. The benefits of the project outweigh the cost," he adds.&lt;/p&gt;
&lt;p&gt;Many say it is only the middle class which worries about privacy, while the poor would be more concerned about the benefits. Trying to address privacy concerns, Aadhaar officials have maintained that they collect just basic details, enrollment is voluntary and information is encrypted. Your approval is required to authenticate your identity and, rather than revealing who you are, the system just gives a yes or no response, they say.&lt;/p&gt;
&lt;p&gt;Over the last year Aadhaar has picked up steam and observers, who expected the bureaucracy to resist, given its anti-corruption overtone, are mildly surprised. Various government departments are embracing it in competition. Several central ministries, state governments, PSUs have begun to tie their programmes to the Aadhaar number.&lt;/p&gt;
&lt;p&gt;Aadhaar officials say they are on course to enroll 600 million by 2014 and by October this year they expect to start enrolling one million numbers a day. The pilot projects at Mysore, Tumkur and Hyderabad have already enrolled 85 per cent of the population and the project is ramping up to other districts and states.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Early last month, the Cabinet Committee on Security, in a seemingly unrelated move, gave partial approval for a Home Ministry project, the National Intelligence Grid (Natgrid). The development alarmed privacy advocates, who again raised a cry over Aadhaar. Among other things, Natgrid, being run by an ex-army man, Capt Raghu Raman, reportedly seeks to integrate 21 databases – railways, airlines, stock exchanges, income tax, bank account details, credit card transactions, visa and immigration records, telecom service providers and chemical vendors.&lt;/p&gt;
&lt;p&gt;Most of us reading this article appear in many of these databases, which today are islands of information controlled by different government agencies. They cover different segments of the population and may overlap to some extent. Stitching these disparate databases together would require a mammoth exercise to uniquely identify all Indian residents. That is precisely what Aadhaar, the missing link, is doing, say critics.&lt;/p&gt;
&lt;p&gt;"If Aadhaar ever succeeds in assigning a unique number to all residents, it will take a maximum of two years to create a common Natgrid database. Using a terminal in his office, a cop would be able to watch whatever you do - &amp;nbsp;travelling, talking, buying - &amp;nbsp;in real time. &amp;nbsp;The surveillance technology is pretty straightforward," says noted security expert and IIT Mumbai alumni, Dr Samir Kelekar of Teknotrends.&lt;/p&gt;
&lt;p&gt;The system is being designed to catch terrorists and criminals, say Natgrid supporters. "But why subject the entire population to potentially the same level of surveillance," asks Sunil Abraham of Centre for Internet and Society.&lt;/p&gt;
&lt;p&gt;Noted jurist Usha Ramanathan says that since 2008 several measures, such as the Collection of Statistics Act, the Information Technology Act, Aadhaar and the National Grid, have come about to collect information about people. “After 9/11, in the guise of homeland security, the USA expanded police powers. Something similar is happening in India after 26/11,” she says.&lt;/p&gt;
&lt;p&gt;The claim that Aadhaar benefits the poor is untested, as there has been no feasibility study, she adds. "This is a security project masquerading as an anti-poverty project," says Abraham.&lt;/p&gt;
&lt;p&gt;Aadhaar has eluded a debate on these issues so far, say critics. Ramanathan says she made three attempts – in November 2009, July 2010 and February 2011 – to engage Nilekani, Aadhaar Director General R S Sharma and a few other project officials on the issue.&lt;/p&gt;
&lt;h3&gt;Dubious demands&lt;/h3&gt;
&lt;p&gt;A New Delhi-based Aadhaar government official, speaking on the condition of anonymity, said there was no discussion within the project on the potential risks it posed. "The main focus is in making a paradigm shift in governance and reaching out to the poor to ensure that the Rs 3,26,000 crore being spent on subsidy is not pilfered," he said.&lt;/p&gt;
&lt;p&gt;But he went on to acknowledge that Aadhaar was like 'nuclear energy', which could be used to either make bombs or generate electricity. “It is for the media and civil society to apply pressure for the right safeguards," he said. &amp;nbsp;&lt;/p&gt;
&lt;p&gt;While the engineers and bureaucrats are steamrolling the project, the laws of the land and the promised safeguards are yet to catch up with it.&lt;/p&gt;
&lt;p&gt;The Indian judiciary has also given a free hand to law enforcement authorities to conduct surveillance. According to the latest Google Transparency Report, Indian government officials made 67 requests to remove contentious items from various Google services between July and December 2010. Only six requests were backed by court orders; the rest were demands made by police and other executive agencies.&lt;/p&gt;
&lt;p&gt;Why is Nilekani, who has emerged as the face of Aadhaar, silent about the security dimension of the project, ask critics. After all, the Infosys credo is to ‘disclose when in doubt’, they point out. "Nilekani and team are good people without any evil intention. They have never lived in villages and believe that technology can solve any problem," says Abraham.&lt;/p&gt;
&lt;p&gt;Ramanathan differs. "In 2009, I would have said he was unaware of the possible risks of Aadhaar. I will not attribute that innocence to him anymore. People in power tend to be blinded by it," she says.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;"Their response has varied from ‘nobody else is asking these questions’, ‘have not come prepared to address these issues today’ and ‘we will get back to you’," she says. &amp;nbsp; &amp;nbsp;&lt;/p&gt;
&lt;p&gt;Critics also accuse Aadhaar officials of presenting a misleading picture. Enrollment started as a voluntary exercise, but is now being made mandatory to get LPG cylinders. "They were supposed to collect only basic details, but Aadhaar enrollment forms now ask for email ids and phone numbers," Ramanathan said.&lt;/p&gt;
&lt;div&gt;This news appeared in the Deccan Herald on 5 July 2011. The original post can be read &lt;a class="external-link" href="http://www.deccanherald.com/content/173274/aadhaars-moment-truth.html"&gt;here&lt;/a&gt;.&lt;/div&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/news/aadhaar-truth'&gt;https://cis-india.org/news/aadhaar-truth&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2011-07-05T07:16:58Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/electronic-frontier-foundation-jyoti-panday-june-1-2017-aadhaar-ushering-in-a-commercialized-era-of-surveillance-in-india">
    <title>Aadhaar: Ushering in a Commercialized Era of Surveillance in India</title>
    <link>https://cis-india.org/internet-governance/news/electronic-frontier-foundation-jyoti-panday-june-1-2017-aadhaar-ushering-in-a-commercialized-era-of-surveillance-in-india</link>
    <description>
        &lt;b&gt;Since last year, Indian citizens have been required to submit their photograph, iris and fingerprint scans in order to access legal entitlements, benefits, compensation, scholarships, and even nutrition programs. Submitting biometric information is needed for the rehabilitation of manual scavengers, the training and aid of disabled people, and anti-retroviral therapy for HIV/AIDS patients. Soon police in the Alwar district of Rajasthan will be able to register criminals, and track missing persons through an app that integrates biometric information with the Crime and Criminal Tracking Network Systems (CCTNS).&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Jyoti Panday was published by the &lt;a class="external-link" href="https://www.eff.org/deeplinks/2017/05/aadhaar-ushering-commercialized-era-surveillance-india"&gt;Electronic Frontier Foundation&lt;/a&gt; on June 1, 2017.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;These instances demonstrate how intrusive India’s controversial  national biometric identity scheme, better known as Aadhaar has grown.  Aadhaar is a 12-digit unique identity number (UID) issued by the  government after verifying a person’s biometric and demographic  information. As of April 2017, the Unique Identification Authority of  India (&lt;a href="https://uidai.gov.in/"&gt;UIDAI&lt;/a&gt;) has issued &lt;a href="http://www.financialexpress.com/opinion/why-centre-will-have-to-devise-a-comprehensive-aadhaar-bill-and-not-a-money-bill-to-address-challenges/680820/"&gt;1.14 billion&lt;/a&gt; UIDs covering nearly 87% of the population making Aadhaar, the largest  biometric database in the world. The government asserts that enrollment  reduces fraud in welfare schemes and brings greater social inclusion.  Welfare schemes that provide access to basic services for marginalized  and vulnerable groups are essential. However, unlike countries where  similar schemes have been implemented, invasive biometric collection is  being imposed as a condition for basic entitlements in India. The  privacy and surveillance risks associated with the scheme have caused  much dissension in India.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Identity and Privacy in India&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Initiated as an identity authentication tool, the critical problem  with Aadhaar is that it is being pushed as a unique identifier to access  a range of services. The government &lt;a href="http://www.dnaindia.com/india/report-alive-to-earlier-orders-that-aadhaar-should-be-voluntary-sc-2418854"&gt;continues to maintain&lt;/a&gt; that  the scheme is voluntary, and yet it has galvanized enrollment by  linking Aadhaar to over 50 schemes. Aadhaar has become the de-facto  identity document accepted at private, banks, schools, and hospitals.  Since Aadhaar is linked to the delivery of essential services,  authentication errors or deactivation &lt;a href="https://scroll.in/topic/38792/identity-project"&gt;has serious consequences&lt;/a&gt; including exclusion and denial of statutory rights. But more  importantly, using a unique identifier across a range of schemes and  services enables seamless combination and comparison of databases. By  using Aadhaar, &lt;a href="https://scroll.in/article/833080/aadhaar-amid-the-hullabaloo-about-privacy-the-more-pressing-issue-of-exclusion-has-been-forgotten"&gt;the government&lt;/a&gt; can  match existing records such as driving license, ration card, financial  history to the primary identifier to create detailed profiles. Aadhaar  may not be the only mechanism, but essentially, it's a surveillance tool  that the Indian government can use to surreptitiously identify and  track citizens.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This is worrying, particularly in context of the ambiguity regarding  privacy in India. The right to privacy for Indian citizens is not  enshrined in the Constitution. Although, the Supreme Court &lt;a href="https://thewire.in/7398/sorry-mr-attorney-general-we-do-actually-have-a-constitutional-right-to-privacy/"&gt;has located&lt;/a&gt; the right to privacy as implicit in the concept of “ordered liberty”  and held that it is necessary in order for citizens to effectively enjoy  all other fundamental rights. There is also no comprehensive national  framework that regulates the collection and use of personal  information. In 2012, Justice K.S. Puttaswamy&lt;a href="http://judis.nic.in/supremecourt/imgs1.aspx?filename=42841"&gt; challenged&lt;/a&gt; Aadhaar in the Supreme Court of India on the grounds that it violates  the right to privacy. The Court passed an interim order restricting  compulsory linking of Aadhaar for benefits delivery, and referred the  clarification on privacy as a right to a larger bench. More than a year  later, the constitutional bench &lt;a href="http://indianexpress.com/article/opinion/columns/supreme-test-4642608/"&gt;is yet to be&lt;/a&gt; constituted.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The delay in sorting out the nature and scope of privacy as right in  India has allowed the government to continue linking Aadhaar to as many  schemes as possible, perhaps with the intention of ensuring the scheme  becomes too big to be rolled back. In 2016, the government enacted the '&lt;a href="https://uidai.gov.in/images/the_aadhaar_act_2016.pdf"&gt;Aadhaar Act&lt;/a&gt;' passing the legislation without any debate, discussion or even approval of both houses of Parliament. In April this year, &lt;a href="http://www.hindustantimes.com/business-news/now-aadhaar-a-must-to-file-income-tax-returns-and-apply-for-pan-card/story-71CBEXGGD8yd9iFjUn4oNI.html"&gt;Aadhaar was made compulsory&lt;/a&gt; for filing income tax or PAN number application and the decision is being challenges in Supreme Court. &lt;a href="http://www.dnaindia.com/india/report-arguments-on-so-called-privacy-is-bogus-ag-rohtagi-defends-making-aadhaar-mandatory-for-pan-card-in-sc-2425525"&gt;Defending the State &lt;/a&gt;, the  Attorney-General of India claimed that the arguments on so-called  privacy and bodily intrusion is bogus, and citizens cannot have an  absolute right over their body! The State’s articulation is chilling,  especially in light of the &lt;a href="https://qz.com/463279/indias-dna-profiling-bill-may-become-one-of-the-worlds-most-intrusive-laws/"&gt;Human DNA Profiling Bill&lt;/a&gt; seeking  the right to collect biological samples and DNA indices of citizens.  Such anti-rights arguments are worth note because biometric tracking of  citizens isn't just government policy - it is also becoming big  business.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Role of Private Companies&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Private companies supply hardware, software, programs, and the  biometric registration services for rolling out Aadhaar to India’s large  population. UIDAI’s Committee on Biometrics acknowledges that  biometrics data are national assets though American biometric technology  provider L-1 Identity Solutions, and consulting firms Accenture and  Ernst and Young can &lt;a href="https://www.bloombergquint.com/technology/2017/05/03/who-has-your-aadhaar-data"&gt;access and retain&lt;/a&gt; citizens' data. The Aadhaar Act introduces electronic  Know-Your-Customer (eKYC) that allows government agencies and private  companies to download data such as name, gender and date of birth from  the Aadhaar database at the time of authentication. Banks and telecom  companies using authentication process to download data and auto-fill  KYC forms and to profile users. Over the last few years, the number of  companies or applications built around profiling of citizens’ personally  sensitive data has grown exponentially.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A number of people linked with creating the UIDAI infrastructure have  founded iSPIRT, an organisation that is pushing for commercial uses of  Aadhaar. Private companies are using Aadhaar for authentication purposes  and background checks. Microsoft has &lt;a href="http://gadgets.ndtv.com/apps/news/skype-lite-for-android-launched-what-it-is-how-it-works-and-everything-else-you-need-to-know-1662147"&gt;announced&lt;/a&gt; SkypeLite integration with Aadhaar to verify users. Others, such as &lt;a href="https://www.trustid.in/"&gt;TrustId &lt;/a&gt;and &lt;a href="http://timesofindia.indiatimes.com/city/delhi/eko-partners-npci-to-allow-aadhaar-linked-money-transfers/articleshow/53046280.cms"&gt;Eko&lt;/a&gt; are  integrating rating systems into their authentication services and  tracking users through platforms they create. In essence such companies  are creating their own private database to track authenticated Aadhaar  users and they may sell this data to other companies. The growth of  companies that &lt;a href="https://scroll.in/article/823274/how-private-companies-are-using-aadhaar-to-deliver-better-services-but-theres-a-catch"&gt;share and combine databases&lt;/a&gt; to profile users is an indication of the value of personal data and its  centrality for both large and small companies in India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Integrating and linking large biometrics collections to each other,  which are then linked with traditional data points that private  companies hold such as geolocation or phone number enables constant  surveillance to take over. So far, there has been no parliamentary  discussion on the role of private companies. UIDAI remains the ultimate  authority in deciding the nature, level and cost of access granted to  private companies. For example, there is nothing in Aadhaar Act that  prevents Facebook from entering into an agreement with the Indian  government to make Aadhaar mandatory to access WhatsApp or any of its  other services. Facebook could also pay data brokers and aggregators to  create customer profiles to add to its ever growing data points for  tracking and profiling its users.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Security Risks and Liability&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;A series of data leakages have raised concerns about which private  entities are involved, and how they handle personal and sensitive data.  In February, UIDAI registered a complaint against three companies for  storing and using biometric data for multiple transactions. Aadhaar  numbers of over 130 million people and bank account details of about 100  million people&lt;a href="http://www.thehindubusinessline.com/info-tech/aadhaar-data-leak-exposes-cyber-security-flaws/article9677360.ece"&gt; have been publicly displayed&lt;/a&gt; through government portals owing to poor security practices. A &lt;a href="https://sabrangindia.in/sites/default/files/aadhaarfinancialinfo_02b_1.pdf?498"&gt;recent report&lt;/a&gt; from Centre for Internet and Society (CIS) showed that a &lt;a href="https://thewire.in/133916/taking-cognisance-of-the-deeply-flawed-system-that-is-aadhaar/"&gt;simple tweaking of URL query parameters&lt;/a&gt; of  the National Social Assistance Programme (NSAP) website could unmask  and display private information of a fifth of India's population.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Such data leaks pose a huge risk as compromised biometrics can never  be recovered. The Aadhaar Act establishes UIDAI as the primary custodian  of identity information, but &lt;a href="https://scroll.in/article/830589/under-the-right-to-information-law-aadhaar-data-breaches-will-remain-a-state-secret"&gt; is silent on the liability&lt;/a&gt; in  case of data breaches. The Act is also unclear about notice and  remedies for victims of identity theft and financial frauds and citizens  whose data has been compromised. UIDAI has continued to fix breaches  upon being notified, but maintains that storage in federated databases  ensures that no agency can track or profile individuals.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;After almost a decade of pushing a framework for mass collection of data, the Indian government has &lt;a href="http://www.dot.gov.in/sites/default/files/2017_05_26%20Circulation%20Letter%20for%20Security%20of%20Information.pdf"&gt;issued guidelines &lt;/a&gt; to  secure identity and sensitive personal data in India. The guidelines  could have come earlier, and given large data leaks in the past may also  be redundant. Nevertheless, it is reassuring to see practices for  keeping information safe and the idea of positive informed consent being  reinforced for government departments. To be clear, the guidelines are  meant for government departments and private companies using Aadhaar for  authentication, profiling and building databases fall outside its  scope. With political attitudes to corporations exploiting personal  information changing the world over, the stakes for establishing a  framework that limits private companies commercializing personal data  and tracking Indian citizens are as high as they have ever been.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/electronic-frontier-foundation-jyoti-panday-june-1-2017-aadhaar-ushering-in-a-commercialized-era-of-surveillance-in-india'&gt;https://cis-india.org/internet-governance/news/electronic-frontier-foundation-jyoti-panday-june-1-2017-aadhaar-ushering-in-a-commercialized-era-of-surveillance-in-india&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2017-06-07T12:45:30Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
