The Centre for Internet and Society
https://cis-india.org
These are the search results for the query, showing results 41 to 55.
OWASP Seasides Conference
https://cis-india.org/internet-governance/news/owasp-seasides-conference
<b>Karan Saini attended the OWASP Seasides security conference, held on February 27 and 28, 2019 at Cavelossim, Goa.</b>
<p>For conference details <a class="external-link" href="https://www.owaspseasides.com/schedule/workshops">click here</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/owasp-seasides-conference'>https://cis-india.org/internet-governance/news/owasp-seasides-conference</a>
</p>
Unbox Festival 2019: CIS organizes two Workshops
https://cis-india.org/internet-governance/blog/unbox-2019-festival
<b>The Centre for Internet & Society organized two workshops at the Unbox Festival 2019 in Bangalore, on 15 and 17 February 2019.</b>
<h3 style="text-align: justify; ">'What is your Feminist Infrastructure Wishlist?'</h3>
<p style="text-align: justify; ">The first workshop, 'What is your Feminist Infrastructure Wishlist?', was conducted by P.P. Sneha and Saumyaa Naidu on 15 February 2019. Its objective was to explore what it means to have infrastructure that is feminist: how do we build spaces, networks, and systems that are equal, inclusive, diverse, and accessible? Participants also reflected on questions of network configurations, expertise, labour, and visibility. For reading material <a class="external-link" href="https://feministinternet.org/">click here</a>.</p>
<h3 style="text-align: justify; ">AI for Good</h3>
<p style="text-align: justify; ">Against the backdrop of AI for social good, this workshop explored existing applications of artificial intelligence and how we interact and engage with this technology on a daily basis. A discussion led by Saumyaa Naidu and Shweta Mohandas invited participants to examine current narratives around AI and imagine how these may transform with time. Questions around how we can build an AI for the future became the starting point for tracing its implications for social impact, policy, gender, design, and privacy. For reading materials see <a class="external-link" href="https://ainowinstitute.org/AI_Now_2018_Report.pdf">AI Now Report 2018</a>, <a class="external-link" href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">Machine Bias</a>, and <a class="external-link" href="https://www.theatlantic.com/technology/archive/2016/03/why-do-so-many-digital-assistants-have-feminine-names/475884/">Why Do So Many Digital Assistants Have Feminine Names?</a></p>
<p style="text-align: justify; ">For more information on the Unbox Festival, <a class="external-link" href="http://unboxfestival.com/">click here</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/unbox-2019-festival'>https://cis-india.org/internet-governance/blog/unbox-2019-festival</a>
</p>
AI for Social Good Summit
https://cis-india.org/internet-governance/news/unescap-and-google-ai-december-13-bangkok-ai-for-social-good-summit
<b>Arindrajit Basu was a speaker at the event co-organized by Google AI and United Nations ESCAP on December 13, 2018 in Bangkok, Thailand.</b>
<p class="moz-quote-pre" style="text-align: justify; ">Arindrajit spoke on the panel "How can governments use AI in Public Service Delivery" along with Malavika Jayaram, Jake Lucci, Punit Shukla, Simon Schmooly, and Gal Oren. He presented CIS research on AI in agriculture in Karnataka, which will soon be published as part of a compendium documenting case studies worldwide.</p>
<p class="moz-quote-pre" style="text-align: justify; "><a class="external-link" href="http://cis-india.org/internet-governance/files/ai-for-social-good-summit">Click to read more</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/unescap-and-google-ai-december-13-bangkok-ai-for-social-good-summit'>https://cis-india.org/internet-governance/news/unescap-and-google-ai-december-13-bangkok-ai-for-social-good-summit</a>
</p>
Future Tech and Future Law
https://cis-india.org/internet-governance/news/future-tech-and-future-law
<b>The Dept. of IT & BT, Government of Karnataka organised the 21st edition of the Bengaluru Tech Summit from November 29 to December 1, 2018 at Palace Grounds, Bengaluru. Arindrajit Basu was a speaker on the panel 'Future Tech and Future Law'.</b>
<p class="moz-quote-pre" style="text-align: justify; ">The discussion was moderated by Tanvi Ratna. Arindrajit's co-panelists were Apar Gupta, Jaideep Reddy, and Nilesh Trivedi. In his remarks, he focused on CIS's AI research thus far and its suggestions for AI regulation.</p>
<p class="moz-quote-pre" style="text-align: justify; ">For more details <a class="external-link" href="https://www.bengalurutechsummit.com/">see this page</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/future-tech-and-future-law'>https://cis-india.org/internet-governance/news/future-tech-and-future-law</a>
</p>
Amazon launches Machine Learning-based platform for healthcare space
https://cis-india.org/internet-governance/news/hindustan-times-november-28-2018-kul-bhushan-amazon-launches-machine-learning-based-platform-for-healthcare-space
<b>Amazon’s Comprehend Medical platform uses a new HIPAA-eligible machine learning service to process unstructured medical text and identify information such as dosages, symptoms and signs, and patient diagnoses.</b>
<p style="text-align: justify; ">The article by Kul Bhushan was published in the <a class="external-link" href="https://www.hindustantimes.com/tech/nov-28-amazon-launches-machine-learning-driven-platform-for-healthcare-space/story-3EuXjDiVO8NLBxjOMKkopO.html">Hindustan Times</a> on November 28, 2018.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; ">With the objective of pushing deeper into the health space, Amazon has introduced new <a href="https://www.hindustantimes.com/topic/machine-learning">Machine Learning</a> (ML) software to analyse medical records, with the aim of improving patient treatment and reducing overall expenditure.</p>
<p style="text-align: justify; ">Unveiled at the company’s re:Invent cloud conference in Las Vegas, Amazon’s Comprehend Medical platform uses a new “HIPAA-eligible machine learning service that allows developers to process unstructured medical text and identify information such as patient diagnosis, treatments, dosages, symptoms and signs, and more.”</p>
<p style="text-align: justify; ">“Comprehend Medical helps health care providers, insurers, researchers, and clinical trial investigators as well as health care IT, biotech, and pharmaceutical companies to improve clinical decision support, streamline revenue cycle and clinical trials management, and better address data privacy and protected health information (PHI) requirements,” explains the company on its <a href="https://aws.amazon.com/blogs/machine-learning/introducing-medical-language-processing-with-amazon-comprehend-medical/" rel="nofollow">website</a>.</p>
<p style="text-align: justify; ">Amazon aims to reduce the time spent manually analysing a patient’s medical data. The company hopes the software will ultimately empower users to make more informed decisions about their health, and even about things like scheduling care visits.</p>
<p style="text-align: justify; ">“Unlocking this information from medical language makes a variety of common medical use cases easier and cost-effective, including: clinical decision support (e.g., getting a historical snapshot of a patient’s medical history), revenue cycle management (e.g., simplifying the time-intensive manual process of data entry), clinical trial management (e.g., by identifying and recruiting patients with certain attributes into clinical trials), building population health platforms, and helping address (PHI) requirements (e.g., for privacy and security assurance.),” the company added.</p>
<p style="text-align: justify; ">Amazon also pointed out that medical institutions such as Seattle’s Fred Hutchinson Cancer Research Center and Roche Diagnostics have already implemented the software.</p>
<p style="text-align: justify; ">Amazon’s expansion into the healthcare space comes after it acquired the health-focused startup PillPack for $1 billion earlier this year. Apart from Amazon, other technology companies such as Apple and Microsoft are also investing in the healthcare space.</p>
<p style="text-align: justify; ">Apple already offers its HealthKit and CareKit platforms for developing health-focused apps. Earlier this year, the company launched the <a href="https://www.hindustantimes.com/tech/apple-watch-series-4-launched-with-ecg-compatibility-new-design/story-2LqdNq7YjAXGU3HEH5om8N.html">Apple Watch Series 4 with ECG support</a>. Microsoft, however, has a deeper footprint in the health segment: the company is building a range of Artificial Intelligence-based tools for healthcare.</p>
<p style="text-align: justify; ">For instance, Microsoft’s Project InnerEye uses machine learning technology to build tools for automatic, quantitative analysis of three-dimensional radiological images.</p>
<p style="text-align: justify; ">According to various reports, Artificial Intelligence is going to make a big impact on the healthcare industry. A 2017 Accenture report <a href="https://www.accenture.com/t20171215T032059Z__w__/us-en/_acnmedia/PDF-49/Accenture-Health-Artificial-Intelligence.pdf" rel="nofollow" target="_blank">predicted</a> that AI applications could create $150 billion in annual savings for the United States alone.</p>
<p style="text-align: justify; ">In India, the adoption of AI in healthcare is growing. According to a report by the Centre for Internet and Society India, “the use of AI in healthcare in India is increasing with new startups and large ICT companies offering AI solutions for healthcare challenges in the country.”</p>
<p style="text-align: justify; ">Bengaluru-based startup mfine has developed an AI-based healthcare platform that learns medical standards, protocols, and diagnosis and treatment methods to help doctors with the necessary data and analysis. Earlier this year, the company raised $4.2 million in funding.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/hindustan-times-november-28-2018-kul-bhushan-amazon-launches-machine-learning-based-platform-for-healthcare-space'>https://cis-india.org/internet-governance/news/hindustan-times-november-28-2018-kul-bhushan-amazon-launches-machine-learning-based-platform-for-healthcare-space</a>
</p>
Speculative Futures Lab on Artificial Intelligence in Media, Entertainment, and Gaming
https://cis-india.org/internet-governance/news/speculative-futures-lab-on-artificial-intelligence-in-media-entertainment-and-gaming
<b>Pranav Manjesh Bidare attended the event organised by Quicksand from November 16 to 18, 2018 in Bangalore as a panelist.</b>
<p style="text-align: justify; ">Pranav was a panelist in the session "Ethics of AI in the Creative Spaces" on November 17, alongside Urvashi Aneja and Abishek Reddy of Tandem Research. For more info <a class="external-link" href="http://cis-india.org/internet-governance/files/Quicksand%20AI%20Futures%20Lab.pdf">see this</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/speculative-futures-lab-on-artificial-intelligence-in-media-entertainment-and-gaming'>https://cis-india.org/internet-governance/news/speculative-futures-lab-on-artificial-intelligence-in-media-entertainment-and-gaming</a>
</p>
Participation in the meetings of ISO/IEC JTC 1/SC 27 'IT Security techniques'
https://cis-india.org/internet-governance/news/participation-in-the-meetings-of-iso-iec-jtc-1-sc-27-it-security-techniques
<b>From 30 September 2018 to 4 October 2018, Gurshabad Grover participated in the meetings of the working groups of ISO/IEC JTC 1/SC 27 'IT Security techniques' held in Gjøvik, Norway. The meetings were organized by Standards Norway with support from NTNU, Microsoft, and Telenor, among others.</b>
<p>Gurshabad focused mainly on the meetings of Working Group 5, which is responsible for standards and research on "Identity management and privacy technologies" in SC 27. He attended sessions discussing work related to current ISO/IEC standards and upcoming work in the WG, such as:</p>
<ul>
<li>Establishing a PII deletion concept in organizations</li>
<li>Privacy guidelines for smart cities</li>
<li>Additional privacy-enhancing data de-identification standards</li>
<li>Extension to ISO/IEC 27001 and ISO/IEC 27002 for privacy information management</li>
<li>User-centric framework for PII handling based on user privacy preferences</li>
</ul>
<p>Gurshabad will be a co-rapporteur for a 12-month study period to investigate the 'Impact of Artificial Intelligence on Privacy', which was initiated by the WG at the meeting. Additionally, he was part of the drafting committee which prepared the final resolutions and liaison statements from the meeting.</p>
<p style="text-align: justify; ">Gurshabad also attended the Norwegian Business Forum on cyber security, held on October 4, which featured talks by professionals and academics working in cyber security across different sectors. The agenda for the business forum can be <a class="external-link" href="http://www.standard.no/en/kurs-og-arrangementer/arrangement-standard-norge-og-nek/arrangement-fra-standard-norge/business-forum---cyber-security/">found here</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/participation-in-the-meetings-of-iso-iec-jtc-1-sc-27-it-security-techniques'>https://cis-india.org/internet-governance/news/participation-in-the-meetings-of-iso-iec-jtc-1-sc-27-it-security-techniques</a>
</p>
Technology Foresight Group Tandem Research's AI policy lab on the theme AI and Environment
https://cis-india.org/internet-governance/news/technology-foresight-group-tandem-researchs-ai-policy-lab-on-the-theme-ai-and-environment
<b>Shweta Mohandas attended a roundtable discussion on artificial intelligence and the environment held at Tandem Research's office in Goa on October 5, 2018. She also made the framing intervention for the first session by addressing the question: what are the likely ethical conundrums and plausible unintended consequences of the use of AI for sustainability?</b>
<p style="text-align: justify; ">Conversations at the lab clustered around four main themes:</p>
<p style="text-align: justify; "><b>AI in the Anthropocene</b><br />What are the most critical sustainability challenges in India, and can AI be useful in addressing them? What are the likely ethical conundrums and plausible unintended consequences of the use of AI for sustainability?</p>
<p style="text-align: justify; "><b>Conservation after nature</b><br />What AI interventions are possible to foster better conservation, and can AI-driven citizen science initiatives improve people’s relationship with the natural world? Can AI help imagine a more dynamic and proximate co-existence with other species, after nature?</p>
<p style="text-align: justify; "><b>Water ecosystems</b><br />Can AI help us imagine new paradigms of water control and infrastructure that are more dynamic and ‘mirror’ the complexity of natural water systems? Will AI lead to decentralization and the empowerment of water users, or will it result in centralized models and a loss of power and agency for water users?</p>
<p style="text-align: justify; "><b>Future Cities</b><br />Can AI systems be used to foster sustainability practices around mobility, energy, and waste, and help better plan development zones and create early warning systems? What systems can be built to encourage citizen participation in solving sustainability problems and to increase the transparency and accountability of municipal governments?</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/technology-foresight-group-tandem-researchs-ai-policy-lab-on-the-theme-ai-and-environment'>https://cis-india.org/internet-governance/news/technology-foresight-group-tandem-researchs-ai-policy-lab-on-the-theme-ai-and-environment</a>
</p>
Confidentiality of Communications and Privacy of Data in the Digital Age
https://cis-india.org/internet-governance/news/confidentiality-of-communications-and-privacy-of-data-in-the-digital-age
<b>On September 25, 2018, Elonnai Hickok participated in a side event, "Confidentiality of Communications and Privacy of Data in the Digital Age", organized by INCLO and Privacy International at the 39th ordinary session of the Human Rights Council. Elonnai spoke on artificial intelligence and privacy.</b>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/confidentiality-of-communications-and-privacy-of-data-in-the-digital-age'>https://cis-india.org/internet-governance/news/confidentiality-of-communications-and-privacy-of-data-in-the-digital-age</a>
</p>
Discrimination in the Age of Artificial Intelligence
https://cis-india.org/internet-governance/blog/oxford-human-rights-hub-arindrajit-basu-october-23-2018-discrimination-in-the-age-of-artificial-intelligence
<b>The dawn of Artificial Intelligence (AI) has been celebrated by both government and industry across the globe. AI offers the potential to augment many existing bureaucratic processes and improve human capacity, if implemented in accordance with principles of the rule of law and international human rights norms. Unfortunately, AI-powered solutions have often been implemented in ways that have resulted in the automation, rather than mitigation, of existing societal inequalities.</b>
<p>This was originally published by <a class="external-link" href="http://ohrh.law.ox.ac.uk/discrimination-in-the-age-of-artificial-intelligence/">Oxford Human Rights Hub</a> on October 23, 2018</p>
<hr />
<p style="text-align: justify; "><img src="https://cis-india.org/home-images/ArtificialIntelligence.jpg/@@images/3b551d39-e419-442c-8c9d-7916a2d39378.jpeg" alt="Artificial Intelligence" class="image-inline" title="Artificial Intelligence" /></p>
<p style="text-align: justify; ">Image Credit: Sarla Catt via Flickr, used under a Creative Commons license available at https://creativecommons.org/licenses/by/2.0/</p>
<p style="text-align: justify; ">In the context of international human rights law, AI solutions pose a threat to norms which prohibit discrimination. International human rights law <a href="https://books.google.co.in/books/about/International_Human_Rights_Law.html?id=YkcXAgAAQBAJ&redir_esc=y">recognizes that discrimination</a> may take place in two ways: directly or indirectly. Direct discrimination occurs when an individual is treated less favourably than someone else similarly situated, on one of the grounds prohibited in international law, which, as per the <a href="http://www.equalrightstrust.org/ertdocumentbank/Human%20Rights%20Committee,%20General%20Comment%2018.pdf">Human Rights Committee,</a> include race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Indirect discrimination occurs when a policy, rule or requirement is ‘outwardly neutral’ but has a disproportionate impact on groups that are meant to be protected by one of the prohibited grounds of discrimination. A clear example of indirect discrimination recognized by the European Court of Human Rights arose in <a href="http://www.errc.org/cikk.php?cikk=3559"><i>DH &amp; Ors v Czech Republic</i></a>. The ECtHR struck down an apparently neutral set of statutory rules which implemented tests designed to evaluate the intellectual capability of children, but which resulted in an excessively high proportion of minority Roma children scoring poorly and consequently being sent to special schools, possibly because the tests were blind to cultural and linguistic differences. This case is a useful analogy for the potential disparate impacts of AI and should serve as a useful precedent for future litigation against AI-driven solutions.</p>
<p style="text-align: justify; ">Indirect discrimination by AI may occur <a href="https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf">at two stages</a>. The first is the <b>use of incomplete or inaccurate training data</b>, which results in the algorithm processing data that may not accurately reflect reality. Cathy O’Neil explains this <a href="https://weaponsofmathdestructionbook.com/">using a simple example</a>. There are two types of crimes: those that are ‘reported’ and those that are only ‘found’ if a police officer is patrolling the area. The first category includes serious crimes such as murder or rape, while the second includes petty crimes such as vandalism or possession of small quantities of illicit drugs. Increased police surveillance in areas of US cities where Black or Hispanic people reside leads to more crimes being ‘found’ there. The data is thus likely to suggest that these communities commit a higher proportion of crimes than they actually do, a form of indirect discrimination that has been empirically demonstrated in research published by <a href="https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say">ProPublica</a>.</p>
<p style="text-align: justify; ">Discrimination may also occur at the stage of <b>data processing</b>, which is done through a metaphorical <a href="https://www.sentient.ai/blog/understanding-black-box-artificial-intelligence/">‘black-box’</a> that accepts inputs and generates outputs without revealing to the human developer how the data was processed. This conundrum is compounded by the fact that algorithms are often used to solve an amorphous problem, breaking a complex question down into a simple answer. An example is the development of ‘risk profiles’ of individuals for the <a href="http://fortune.com/longform/ai-bias-problem/">determination of insurance premiums.</a> Data might show that accidents are more likely to take place in inner cities, owing to their more densely packed populations. Racial and ethnic minorities tend to reside in these areas, which means that algorithms could learn that minorities are more likely to get into accidents, thereby generating an outcome (a ‘risk profile’) that indirectly discriminates on grounds of race or ethnicity.</p>
<p style="text-align: justify; ">It would be wrong to ignore discrimination, both direct and indirect, that occurs as a result of human prejudice. The key difference between such discrimination and discrimination by AI lies in the ability of other individuals to compel a human decision-maker to explain the factors that led to the outcome in question and to test its validity against principles of human rights. The increasing discretion and, consequently, power being delegated to autonomous systems means that principles of accountability, which audit and check indirect discrimination, need to be built into the design of these systems. In their absence, we risk surrendering core tenets of human rights law to the whims of an algorithmically crafted reality.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/oxford-human-rights-hub-arindrajit-basu-october-23-2018-discrimination-in-the-age-of-artificial-intelligence'>https://cis-india.org/internet-governance/blog/oxford-human-rights-hub-arindrajit-basu-october-23-2018-discrimination-in-the-age-of-artificial-intelligence</a>
</p>
The Srikrishna Committee Data Protection Bill and Artificial Intelligence in India
https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india
<b>Artificial Intelligence in many ways is in direct conflict with traditional data protection principles and requirements including consent, purpose limitation, data minimization, retention and deletion, accountability, and transparency.</b>
<h3 style="text-align: justify; ">Privacy Considerations in AI</h3>
<p style="text-align: justify; ">Other related privacy concerns in the context of AI center around re-identification and de-anonymisation, discrimination, unfairness, inaccuracy, bias, opacity, profiling, misuse of data, and embedded power dynamics.<a href="#_ftn1" name="_ftnref1"><sup>[1]</sup></a></p>
<p style="text-align: justify; ">The need for large amounts of data to improve accuracy, the ability to process vast amounts of granular data, and the present relationship between the explainability and the results of AI systems<a href="#_ftn2" name="_ftnref2"><sup><sup>[2]</sup></sup></a> have raised many concerns on both sides of the fence. On one hand, there is concern that heavy-handed or inappropriate regulation will stifle innovation: if developers can only use data for a pre-defined purpose, the prospects of AI are limited. On the other hand, individuals are concerned that privacy will be significantly undermined by AI systems that collect and process data in real time and at a personal level not previously possible. Chatbots, home assistants, wearable devices, robot caregivers, facial recognition technology, and the like can collect data from a person at an intimate level. At the same time, some have argued that AI can help protect privacy by limiting the access that humans working at the respective companies have to personal data.<a href="#_ftn3" name="_ftnref3"><sup><sup>[3]</sup></sup></a></p>
<p style="text-align: justify; ">India is embracing AI. Two national roadmaps for AI were released in 2018, by the Ministry of Commerce and Industry and by NITI Aayog respectively. Both roadmaps emphasized the importance of addressing privacy concerns in the context of AI and of ensuring that a robust privacy legislation is enacted. In August 2018, the Srikrishna Committee released a draft Personal Data Protection Bill 2018 and an associated report that outlines and justifies a framework for privacy in India. As the development and use of AI in India continues to grow, it is important that India simultaneously moves forward with a privacy framework that addresses the privacy dimensions of AI.</p>
<p style="text-align: justify; ">In this article, we analyse whether and how the Srikrishna Committee's draft Bill and report have addressed AI, contrast this with developments in the EU and the passing of the GDPR, and identify solutions being explored to develop AI while upholding and safeguarding privacy.</p>
<h3 style="text-align: justify; ">The GDPR and Artificial Intelligence</h3>
<p style="text-align: justify; ">The General Data Protection Regulation became enforceable in May 2018 and establishes a framework for the processing of the personal data of individuals within the European Union. The GDPR has been described by the IAPP as taking a ‘risk-based’ approach to data protection that pushes data controllers to engage in risk analysis and adopt ‘risk-measured responses’.<a href="#_ftn4" name="_ftnref4"><sup><sup>[4]</sup></sup></a> Though the GDPR does not explicitly address artificial intelligence, it contains a number of provisions that address automated decision making and profiling, and that will impact companies using artificial intelligence in their business activities. These are outlined below:</p>
<ol style="text-align: justify; ">
<li><b>Data rights: </b>The GDPR grants individuals a number of data rights: the right to be informed, the right of access, the right to rectification, the right to erasure, the right to restrict processing, the right to data portability, the right to object, and rights related to automated decision making, including profiling. The last of these seeks to address concerns arising out of automated decision making by giving the individual the right not to be subject to a decision based solely on automated processing, including profiling, if the decision would produce legal effects or similarly significantly affect them. There are three exceptions to this right, where the automated decision making is: (a) necessary for the performance of a contract, (b) authorised by Union or Member State law, or (c) based on explicit consent.<a href="#_ftn5" name="_ftnref5"><sup><sup>[5]</sup></sup></a> </li>
<li><b>Transparency:</b> Under Article 14, data controllers must enable the right to opt out of automated decision making by notifying individuals of the existence of automated decision making, including profiling, and providing meaningful information about the logic involved as well as the potential consequences of such processing.<a href="#_ftn6" name="_ftnref6"><sup><sup>[6]</sup></sup></a> Importantly, this requirement has the potential to ensure that companies do not operate complete ‘black box’ algorithms within their business processes.</li>
<li><b>Fairness: </b>The principle of fairness, found in Article 5(1), also applies to the processing of personal data by AI. It requires that personal data be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Recital 71 further clarifies that this includes implementing appropriate mathematical and statistical measures for profiling, correcting inaccuracies, and ensuring that processing does not result in discriminatory effects.<a href="#_ftn7" name="_ftnref7"><sup><sup>[7]</sup></sup></a> </li>
<li><b>Purpose Limitation:</b> The principle of purpose limitation (Article 5(1)(b)) requires that personal data be collected for specified, explicit, and legitimate purposes and not be further processed in a manner incompatible with those purposes. Processing for archiving purposes in the public interest, or for scientific or historical research or statistical purposes, is not considered incompatible with the initial purposes. It has been noted that it is unclear whether research carried out through artificial intelligence would fall under this exception, as the GDPR does not define ‘scientific purposes’.<a href="#_ftn8" name="_ftnref8"><sup><sup>[8]</sup></sup></a> </li>
<li><b>Privacy by Design and Default:</b> Article 25 requires all data controllers to implement technical and organizational measures to meet the requirements of the regulation. This could include techniques like pseudonymisation. Data controllers are also required to implement appropriate technical and organizational measures to ensure that, by default, only personal data necessary for a specific purpose are processed.<a href="#_ftn9" name="_ftnref9"><sup><sup>[9]</sup></sup></a></li>
<li><b>Data Protection Impact Assessments:</b> Article 35 requires data controllers to undertake impact assessments if they carry out processing that is likely to result in a high risk to individuals. This includes where the data controller undertakes systematic and extensive profiling, processes special categories of data or criminal offence data on a large scale, or systematically monitors publicly accessible places on a large scale. In implementation, some jurisdictions such as the UK require impact assessments in additional circumstances, including where the data controller: uses new technologies; uses profiling or special category data to decide on access to services; profiles individuals on a large scale; processes biometric data; processes genetic data; matches data or combines datasets from different sources; collects personal data from a source other than the individual without providing them with a privacy notice; tracks individuals’ location or behaviour; profiles children or targets marketing or online services at them; or processes data that might endanger the individual’s physical health or safety in the event of a security breach.<a href="#_ftn10" name="_ftnref10"><sup><sup>[10]</sup></sup></a></li>
<li><b>Security:</b> Article 32 requires data controllers to ensure a level of security appropriate to the risk, including through methods like encryption and pseudonymisation. </li>
</ol>
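Pseudonymisation, as referenced in Articles 25 and 32, replaces direct identifiers with tokens that cannot be attributed to a person without additional, separately held information. A minimal sketch of one common approach (keyed hashing); the field names and key handling here are illustrative assumptions, not drawn from the GDPR itself:

```python
import hmac
import hashlib

# Hypothetical secret key; in practice it would be stored separately
# from the pseudonymised data, which is what makes re-identification hard.
SECRET_KEY = b"separately-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed token (HMAC-SHA256).

    The same input always yields the same token, so records can still be
    linked for analysis, but without the key the token cannot be traced
    back to the individual."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "purchase": "book"}
safe_record = {"email": pseudonymise(record["email"]), "purchase": record["purchase"]}
```

Because the mapping is deterministic under a given key, analytics across records remain possible while the identifier itself is removed from the working dataset.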
<h3 style="text-align: justify; ">Srikrishna Committee Bill and AI</h3>
<p style="text-align: justify; ">The Draft Data Protection Bill and associated report by the Srikrishna Committee was published in August 2018 and recommends a privacy framework for India. The Bill contains a number of provisions that will directly impact data fiduciaries using AI and that try and account for the unintended consequences of emerging technologies like AI. These include:</p>
<ol style="text-align: justify; ">
<li><b>Definition of Harm:</b> The Bill defines harm as including: bodily or mental injury; loss, distortion or theft of identity; financial loss or loss of property; loss of reputation or humiliation; loss of employment; any discriminatory treatment; any subjection to blackmail or extortion; any denial or withdrawal of a service, benefit or good resulting from an evaluative decision about the data principal; any restriction placed or suffered directly or indirectly on speech, movement or any other action arising out of a fear of being observed or surveilled; and any observation or surveillance that is not reasonably expected by the data principal. The Bill also allows for categories of significant harm to be further defined by the data protection authority.</li>
</ol>
<p style="text-align: justify; ">Many of the above are harms that have been associated with artificial intelligence - specifically loss employment, discriminatory treatment, and denial of service. Enabling the data protection authority to further define categories of significant harm, could allow for unexpected harms arising from the use of AI to come under the ambit of the Bill.</p>
<ol style="text-align: justify; "> </ol>
<ul style="text-align: justify; ">
<li><b>Data Rights:</b> Like the GDPR, the Bill creates a set of data rights for the individual, including the right to confirmation and access, correction, data portability, and the right to be forgotten. At the same time, the Bill is intentionally silent on the rights and obligations in the GDPR that address automated decision making, including: the right to object to processing,<a href="#_ftn11" name="_ftnref11"><sup><sup>[11]</sup></sup></a> the right to opt out of automated decision making<a href="#_ftn12" name="_ftnref12"><sup><sup>[12]</sup></sup></a>, and the obligation on the data controller to inform the individual about the use of automated decision making and basic information regarding its logic and impact.<a href="#_ftn13" name="_ftnref13"><sup><sup>[13]</sup></sup></a> As justification, the Committee noted the following in its report: the right to restrict processing may be unnecessary in India, as it provides only interim remedies around issues such as inaccuracy of data, and the same can be achieved by a data principal approaching the DPA or the courts for a stay on processing, or by simply withdrawing consent. The objective of protecting against discrimination, bias, and opaque decisions - which the right to object to automated processing and to receive information about the processing of data seeks to fulfill - would, in the Indian context, be better achieved through an accountability framework requiring specific data fiduciaries that make evaluative decisions through automated means to set up processes that ‘weed out’ discrimination. At the same time, if discrimination has taken place, individuals can seek remedy through the courts.</li>
</ul>
<ol style="text-align: justify; "> </ol>
<p style="text-align: justify; ">By taking this approach, the Bill creates a framework to address harms arising out of AI, but does not empower the individual to decide how their data is processed and remains silent on the issue of ‘black box’ algorithms.</p>
<ol style="text-align: justify; "> </ol>
<ul style="text-align: justify; ">
<li><b>Data Quality</b>: Requires data fiduciaries to ensure that personal data that is processed is complete, accurate, not misleading, and updated with respect to the purposes for which it is processed. In taking steps to comply, data fiduciaries must consider whether the personal data is likely to be used to make a decision about the data principal, whether it is likely to be disclosed to other individuals, and whether the personal data is kept in a form that distinguishes personal data based on facts from personal data based on opinions or personal assessments.<a href="#_ftn14" name="_ftnref14"><sup><sup>[14]</sup></sup></a></li>
</ul>
<ol style="text-align: justify; "> </ol>
<p style="text-align: justify; ">This principle, while not mandating that data fiduciaries take into account considerations such as biases in datasets, could potentially be be interpreted by the data protection authority to include in its scope, means towards ensuring that data does not contain or result in bias.</p>
<ol style="text-align: justify; "> </ol>
<ul style="text-align: justify; ">
<li><b>Principle of Privacy by Design:</b> Requires significant data fiduciaries to have in place a number of policies and measures around several aspects of privacy. These include - (a) measures to ensure managerial, organizational, business practices and technical systems are designed in a manner to anticipate, identify, and avoid harm to the data principal (b) the obligations mentioned in Chapter II are embedded in organisational and business practices (c) technology used in the processing of personal data is in accordance with commercially accepted or certified standards (d) legitimate interests of business including any innovation are achieved without compromising privacy interests (e) privacy is protected throughout processing from the point of collection to deletion of personal data (f) processing of personal data is carried out in a transparent manner (g) the interest of the data principal is accounted for at every stage of processing of personal data.</li>
</ul>
<ol style="text-align: justify; "> </ol>
<p style="text-align: justify; ">A number of these (a, d, e, and g) require that the interest of the data principal is accounted for throughout the processing of personal data, This will be significant for systems driven by artificial intelligence as a number of the harms that have arisen from the use of AI include discrimination, denial of service, or loss of employment - have been brought under the definition of harm within the Bill. Placing the interest of the data principal first is also important in protecting against unintended consequences or harms that may arise from AI.<a href="#_ftn15" name="_ftnref15"><sup><sup>[15]</sup></sup></a> If enacted, it will be important to see what policies and measures emerge in the context of AI to comply with this principle. It will also be important to see what commercially accepted or certified standard companies rely on to comply with (c.)</p>
<ol style="text-align: justify; "> </ol>
<ul style="text-align: justify; ">
<li><b>Data Protection Impact Assessment:</b> Requires data fiduciaries to undertake a data protection impact assessment when implementing new technologies, undertaking large scale profiling, or using sensitive personal data. Such assessments need to include a detailed description of the proposed processing operation, the purpose of the processing and the nature of the personal data being processed, an assessment of the potential harm that may be caused to the data principals whose personal data is proposed to be processed, and measures for managing, minimising, mitigating or removing such risk of harm. If the Authority finds that the processing is likely to cause harm to data principals, it may direct the data fiduciary not to undertake such processing, either in certain circumstances or entirely. This requirement applies to all significant data fiduciaries and to other data fiduciaries as required by the DPA.<a href="#_ftn16" name="_ftnref16"><sup><sup>[16]</sup></sup></a></li>
</ul>
<ol style="text-align: justify; "> </ol>
<p style="text-align: justify; ">This principle will apply to companies implementing AI systems. For AI systems, it will be important to see how much information the DPA will require under the requirement of data fiduciaries providing detailed descriptions of the proposed processing operation and purpose of processing.</p>
<ol style="text-align: justify; "> </ol>
<ul style="text-align: justify; ">
<li><b>Classification of data fiduciaries as significant data fiduciaries</b>: The Authority has the ability to notify certain categories of data fiduciaries as significant data fiduciaries based on: the volume of personal data processed; the sensitivity of personal data processed; the turnover of the data fiduciary; the risk of harm resulting from any processing undertaken by the fiduciary; the use of new technologies for processing; and any other factor relevant to causing harm to any data principal. If a data fiduciary falls under the ambit of any of these conditions, it is required to register with the Authority. All significant data fiduciaries must undertake data protection impact assessments, maintain records as per the Bill, undergo data audits, and have in place a data protection officer.</li>
</ul>
<ol style="text-align: justify; "> </ol>
<p style="text-align: justify; ">As per this provision - companies deploying artificial intelligence would come under the definition of a significant data fiduciary and be subject to the principles of privacy by design etc. articulated in the chapter. The exception to this will be if the data fiduciary comes under the definition of ‘small entity’ found in section 48.<a href="#_ftn17" name="_ftnref17"><sup><sup>[17]</sup></sup></a></p>
<ol style="text-align: justify; "> </ol>
<ul style="text-align: justify; ">
<li><b>Restrictions on cross border transfer of personal data: </b>Requires that all data fiduciaries must store a copy of personal data on a server or data centre located in India and notified categories of critical personal data must be processed in servers located in India.</li>
</ul>
<ol style="text-align: justify; "> </ol>
<p style="text-align: justify; ">It is interesting to note that in the context of cross border sharing of data, the Bill is creating a new category of data that can be further defined beyond personal and sensitive personal data. For companies implementing artificial intelligence, this provision may prove cumbersome to comply with as many utilize cloud storage and facilities located outside of India for the processing of larger amounts of data.<a href="#_ftn18" name="_ftnref18"><sup><sup>[18]</sup></sup></a></p>
<ol style="text-align: justify; "> </ol>
<ul style="text-align: justify; ">
<li><b>Powers and functions of the Authority</b>: The Bill lays down a number of functions of the Authority one being to monitor technological developments and commercial practices that may affect protection of personal data.</li>
</ul>
<ol style="text-align: justify; "> </ol>
<p style="text-align: justify; ">By assumption, this will include monitoring of technological developments in the field of Artificial Intelligence.<a href="#_ftn19" name="_ftnref19"><sup><sup>[19]</sup></sup></a></p>
<ol style="text-align: justify; "> </ol>
<ul style="text-align: justify; ">
<li><b>Fair and reasonable processing: </b>Requires that any person processing personal data owes a duty to the data principal to process such personal data in a fair and reasonable manner that respects the privacy of the data principal. In its report, the Srikrishna Committee explains that the fair and reasonable principle is meant to address: (1) power asymmetries between data principals and data fiduciaries, recognizing that data fiduciaries have a responsibility to act in the best interest of the data principal; (2) situations where processing may be legal but not necessarily fair or in the best interest of the data principal; and (3) the need to develop trust between the data principal and the data fiduciary.<a href="#_ftn20" name="_ftnref20"><sup><sup>[20]</sup></sup></a></li>
</ul>
<ol style="text-align: justify; "> </ol>
<p style="text-align: justify; ">This is in contrast to the GDPR which requires processing to simultaneously meet the three conditions of fairness, lawfulness, and transparency.</p>
<ol style="text-align: justify; "> </ol>
<ul style="text-align: justify; ">
<li><b>Purpose Limitation: </b>Personal data can only be processed for the purposes specified or any other purpose that the data principal would reasonably expect.</li>
</ul>
<ol style="text-align: justify; "> </ol>
<p style="text-align: justify; ">As a note, the Srikrishna Committee Bill does not include ‘scientific purposes’ as an exception to the principle of purpose limitation as found in the GDPR,<a href="#_ftn21" name="_ftnref21"><sup><sup>[21]</sup></sup></a> and instead creates an exception for research, archiving, or statistical purposes.<a href="#_ftn22" name="_ftnref22"><sup><sup>[22]</sup></sup></a> The DPA has the responsibility of developing codes defining research purposes under the act.<a href="#_ftn23" name="_ftnref23"><sup><sup>[23]</sup></sup></a></p>
<ol style="text-align: justify; ">
<li><b>Security Safeguards:</b> Every data fiduciary must implement appropriate security safeguards including the use of methods such as de-identification and encryption, steps to protect the integrity of personal data, and steps necessary to prevent misuse, unauthorised access to, modification, and disclosure or destruction of personal data.<a href="#_ftn24" name="_ftnref24"><sup><sup>[24]</sup></sup></a></li>
</ol>
<p style="text-align: justify; ">Unlike the GDPR which explicitly refers to the technique of pseudonymization, the Srikrishna uses Bill uses term de-identification. The Srikrishna Report clarifies that the this includes techniques like pseudonymization and masking and further clarifies that because of the risk of re-identification, de-identified personal data should still receive the same level of protection as personal data. The Bill further gives the DPA the authority to define appropriate levels of anonymization. <a href="#_ftn25" name="_ftnref25"><sup><sup>[25]</sup></sup></a></p>
<h3 style="text-align: justify; ">Technical perspectives of Privacy and AI</h3>
<p style="text-align: justify; ">There is an emerging body of work that is looking at solutions to the dilemma of maintaining privacy while employing artificial intelligence and finding ways in which artificial intelligence can support and strengthen privacy. For example, there are AI driven platforms that leverage the technology to help a business to meet regulatory compliance with data protection laws<a href="#_ftn26" name="_ftnref26"><sup><sup>[26]</sup></sup></a>, as well as research into AI privacy enhancing technologies.<a href="#_ftn27" name="_ftnref27"><sup><sup>[27]</sup></sup></a> Standards setting bodies like IEEE have undertaken work on the ethical considerations in the collection and use of personal data when designing, developing, and/or deploying AI through the standard ‘Ethically Aligned Design’.<a href="#_ftn28" name="_ftnref28"><sup><sup>[28]</sup></sup></a> . In the article Artificial Intelligence and Privacy by Datatilsynet - the Norwegian Data Protection Authority<a href="#_ftn29" name="_ftnref29"><sup><sup>[29]</sup></sup></a> break such methods into three categories:</p>
<ol style="text-align: justify; ">
<li>Techniques for reducing the need for large amounts of training data. These can include:</li>
<ol>
<li><b>Generative adversarial networks (GANs):</b> GANs are used to create synthetic data and can address the need for large volumes of labelled data without relying on real data containing personal data. GANs could potentially be useful from a research and development perspective in sectors like healthcare, where most data would qualify as sensitive personal data.</li>
<li><b>Federated Learning:</b> Federated learning allows models to be trained and improved on data from a large pool of users without directly collecting user data. This is achieved by distributing a centralized model to client units, where it is improved on local data. The changes from these local improvements are shared back with the centralized server, and an average of the changes from multiple individual client units becomes the basis for improving the centralized model.</li>
<li><b>Matrix Capsules</b>: Proposed by Google researcher Geoffrey Hinton, matrix capsules improve the accuracy of existing neural networks while requiring less data.<a href="#_ftn30" name="_ftnref30"><sup><sup>[30]</sup></sup></a></li>
</ol>
<li>Techniques that uphold data protection without reducing the basic data set</li>
<ol>
<li><b>Differential Privacy</b>: Differential privacy intentionally adds ‘noise’ to data when accessed. This allows aggregate insights to be drawn from personal data without revealing identifying information.</li>
<li><b>Homomorphic Encryption:</b> Homomorphic encryption allows for the processing of data while it is still encrypted. This addresses the need to access and use large amounts of personal data for multiple purposes.</li>
<li><b>Transfer Learning</b>: Instead of building a new model from scratch, transfer learning builds upon existing models, applying them to new, related purposes or tasks. This has the potential to reduce the amount of training data needed. </li>
<li><b>RAIRD</b>: Developed by Statistics Norway and the Norwegian Centre for Research Data, RAIRD is a national research infrastructure that allows for access to large amounts of statistical data for research while managing statistical confidentiality. This is achieved by allowing researchers access to metadata. The metadata is used to build analyses which are then run against detailed data without giving access to actual data.<a href="#_ftn31" name="_ftnref31"><sup><sup>[31]</sup></sup></a></li>
</ol>
<li>Techniques to move beyond opaque algorithms</li>
<ol>
<li><b>Explainable AI (XAI): </b>DARPA, in collaboration with Oregon State University, is researching how to create explainable models and explanation interfaces while ensuring a high level of learning performance, in order to enable individuals to interact with, trust, and manage artificial intelligence.<a href="#_ftn32" name="_ftnref32"><sup><sup>[32]</sup></sup></a> DARPA identifies a number of entities working on different models and interfaces for analytics and autonomy AI.<a href="#_ftn33" name="_ftnref33"><sup><sup>[33]</sup></sup></a></li>
<li><b>Local Interpretable Model Agnostic Explanations</b>: Developed to enable trust between AI models and humans by generating explainers to highlight key aspects that were important to the model and its decision - thus providing insight into the rationale behind a model.<a href="#_ftn34" name="_ftnref34"><sup><sup>[34]</sup></sup></a></li>
</ol> </ol>
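The core idea of differential privacy - calibrated noise keeps aggregate results useful while hiding any individual's contribution - can be sketched with the Laplace mechanism on a simple count query. The dataset, predicate, and epsilon value below are illustrative assumptions chosen for the example:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse transform sampling."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    """Epsilon-differentially-private count query.

    A count has sensitivity 1 (adding or removing one person changes the
    result by at most 1), so Laplace noise with scale 1/epsilon suffices
    for epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 47, 31]                    # illustrative dataset
noisy = dp_count(ages, lambda a: a > 30, epsilon=1.0)  # true answer is 5
```

Smaller epsilon means more noise and stronger privacy; the analyst sees a perturbed count from which no single person's presence can be confidently inferred.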
<h3 style="text-align: justify; ">Public Sector use of AI and Privacy</h3>
<p style="text-align: justify; ">The role of AI in public sector decision making has been gradually growing globally across sectors such as law enforcement, education, transportation, judicial decision making and healthcare. In India too, use of automated processing in electronic governance under the Digital India mission, domestic law enforcement agencies monitoring social media content and educational schemes is being discussed and gradually implemented. Much like the potential applications of AI across sub-sectors, the nature of regulatory issues are also diverse.</p>
<p style="text-align: justify; ">Aside from the accountability framework discussed in the Srikrishna Committee report, the Puttaswamy judgment also provides a basis for governance of AI with respect to its concerns for privacy, in limited contexts. The sources of right to privacy as articulated in the Puttaswamy judgments included the terms ‘personal liberty’ under Article 21 of the Constitution. In order to fully appreciate how constitutional principles could apply to automated processing in India, we need to look closely at the origins of privacy under liberty. In the famous case of <i>AK Gopalan</i> there is a protracted discussion on the contents of the rights under Article 21. Amongst the majority opinions itself, the opinion was divided. While Sastri J. and Mukherjea J. took the restrictive view that limiting the protections to bodily restraint and detention, Kania J. and Das J. take a broader view for it to include the right to sleep, play etc. Through <i>RC Cooper</i><a href="#_ftn35" name="_ftnref35"><sup><sup>[35]</sup></sup></a> and <i>Maneka</i><a href="#_ftn36" name="_ftnref36"><sup><sup>[36]</sup></sup></a>, the Supreme Court took steps to reverse the majority opinion in <i>Gopalan</i> and it was established that that the freedoms and rights in Part III could be addressed by more than one provision. The expansion of ‘personal liberty’ has began in <i>Kharak Singh</i> where the unjustified interference with a person’s right to live in his house, was held to be violative of Article 21. The reasoning in <i>Kharak Singh</i> draws heavily from<i> Munn</i> v. <i>Illinois</i><a href="#_ftn37" name="_ftnref37"><sup><sup>[37]</sup></sup></a> which held life to be “more than mere animal existence.” Curiously, after taking this position <i>Kharak Singh</i> fails to recognise a fundamental right to privacy (analogous to the Fourth Amendment protection in US) under Article 21. 
The position taken in <i>Kharak Singh</i> was to extrapolate the same method of wide interpretation of ‘personal liberty’ as was accorded to ‘life’. <i>Maneka</i> which evolved the test for enumerated rights within Part III says that the claimed right must be an integral part of or of the the same nature as the named right. It says that the claimed must be ‘in reality and substance nothing but an instance of the exercise of the named fundamental right’. The clear reading of privacy into ‘personal liberty’ in this judgment is effectively a correction of the inherent inconsistencies in the positions taken by the majority in Kharak Singh.</p>
<p style="text-align: justify; ">The other significant change in constitutional interpretation that occurred in Maneka was with respect to the phrase ‘procedure established by law’ in Article 21. In Gopalan, the majority held that the phrase ‘procedure established by law’ does not mean procedural due process or natural justice. What this meant was that, once a ‘procedure’ was ‘established by law’, Article 21 could not be said to have been infringed. This position was entirely reversed in Maneka. The ratio in Maneka said that ‘procedure established by law’ must be fair, just and reasonable, and cannot be arbitrary and fanciful. Therefore, any infringement of the right to privacy must be through a law which follows the principles of natural justice, and is not arbitrary or unfair. It follows that any instances of automated processing for public functioning by state actors or others, must meet this standard of ‘fair, just and reasonable’.</p>
<p style="text-align: justify; ">While there is a lot of focus internationally on what ethical AI must be, it is important that when we consider use of AI by the state, we pay heed to the existing constitutional principles which determine how AI must be evaluated against these standards. These principles however extend only to limited circumstances for protections under Article 21 are not horizontal in nature but only applicable against the state. Whether a party is the state or not is a question that has been considered several times by the Supreme Court and must be determined by functional tests. In our submission of the Justice Srikrishna Committee, we clearly recommended that where automated decision making is used for discharging of public functions, the data protection law must state that such actions are subject the the constitutional standards and are ‘just, fair and reasonable’ and satisfy the tests for both procedural and substantive due process. To a limited extent, the committee seems to have picked up the standards of ‘fair’ and ‘reasonable’ and made it applicable to all forms of processing, whether public or private. It is as yet unclear whether fairness and reasonableness as inserted in the bill would draw from the constitutional standard under Article 21. The report makes a reference to the twin principles of acting in a manner that upholds the best interest of the privacy of the individual, and processing within the reasonable expectations of the individual, which do not seem to cover the fullest essence of the legal standard under Article 21.</p>
<h3 style="text-align: justify; ">Conclusion</h3>
<p style="text-align: justify; ">The Srikrishna Committee Bill attempts to create an accountability framework for the use of emerging technologies including AI that is focused on placing the responsibility on companies to prevent harm. Though not as robust as found in the GDPR, the protections have been enabled through requirements such as fair and reasonable processing, ensuring data quality, and implementing principles of privacy of design. At the sametime, the Srikrishna Bill does not include provisions that can begin to address the consumer facing ‘black box’ of AI by ensuring that individuals have information about the potential impact of decisions taken by automated means. In contrast, the GDPR has already taken important steps to tackle this by requiring companies to explain the logic and potential impact of decisions taken by automated means.</p>
<p style="text-align: justify; ">Most importantly, the Bill gives the Data Protection Authority the necessary tools to hold companies accountable for the use of AI through the requirements of data protection audits. If enacted, it will have to be seen how these audits and the principle of privacy by design are implemented and enforced in the context of companies using AI. Though the Bill creates a Data Protection Authority consisting of members that have significant experience in data protection, information technology, data management, data science, cyber and internet laws, and related subjects, these requirements can be further strengthened by having someone from a background of ethics and human rights.</p>
<p style="text-align: justify; ">One of the responsibilities of the DPA under the Srikrishna Bill will be to monitor technological developments and commercial practices that may affect protection of personal data and promote measures and undertake research for innovation in the field of protection of personal data. If enacted, we hope that AI and solutions towards enhancing privacy in the context of AI like described above will be one of these focus areas of the DPA. It will also be important to see how the DPA develops impact assessments related to AI and what tools associated with the principle of Privacy by Design emerge to address AI.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; "><a href="#_ftnref1" name="_ftn1"><sup><sup>[1]</sup></sup></a> https://privacyinternational.org/topics/artificial-intelligence</p>
<p style="text-align: justify; "><a href="#_ftnref2" name="_ftn2"><sup><sup>[2]</sup></sup></a> https://www.wired.com/story/our-machines-now-have-knowledge-well-never-understand/</p>
<p style="text-align: justify; "><a href="#_ftnref3" name="_ftn3"><sup><sup>[3]</sup></sup></a> https://iapp.org/news/a/ai-offers-opportunity-to-increase-privacy-for-users/</p>
<p style="text-align: justify; "><a href="#_ftnref4" name="_ftn4"><sup><sup>[4]</sup></sup></a> https://iapp.org/media/pdf/resource_center/GDPR_Study_Maldoff.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref5" name="_ftn5"><sup><sup>[5]</sup></sup></a> https://gdpr-info.eu/art-22-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref6" name="_ftn6"><sup><sup>[6]</sup></sup></a> https://gdpr-info.eu/art-14-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref7" name="_ftn7"><sup><sup>[7]</sup></sup></a> https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref8" name="_ftn8"><sup><sup>[8]</sup></sup></a> https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref9" name="_ftn9"><sup><sup>[9]</sup></sup></a> https://gdpr-info.eu/art-25-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref10" name="_ftn10"><sup><sup>[10]</sup></sup></a> https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/</p>
<p style="text-align: justify; "><a href="#_ftnref11" name="_ftn11"><sup><sup>[11]</sup></sup></a> https://gdpr-info.eu/art-21-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref12" name="_ftn12"><sup><sup>[12]</sup></sup></a> https://gdpr-info.eu/art-22-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref13" name="_ftn13"><sup><sup>[13]</sup></sup></a> https://gdpr-info.eu/art-14-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref14" name="_ftn14"><sup><sup>[14]</sup></sup></a>Draft Data Protection Bill 2018 - Chapter II section 9</p>
<p style="text-align: justify; "><a href="#_ftnref15" name="_ftn15"><sup><sup>[15]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VII section 29</p>
<p style="text-align: justify; "><a href="#_ftnref16" name="_ftn16"><sup><sup>[16]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VII section 33</p>
<p style="text-align: justify; "><a href="#_ftnref17" name="_ftn17"><sup><sup>[17]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VII section 38</p>
<p style="text-align: justify; "><a href="#_ftnref18" name="_ftn18"><sup><sup>[18]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VIII section 40</p>
<p style="text-align: justify; "><a href="#_ftnref19" name="_ftn19"><sup><sup>[19]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter X section 60</p>
<p style="text-align: justify; "><a href="#_ftnref20" name="_ftn20"><sup><sup>[20]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter II section 4</p>
<p style="text-align: justify; "><a href="#_ftnref21" name="_ftn21"><sup><sup>[21]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter II section 5</p>
<p style="text-align: justify; "><a href="#_ftnref22" name="_ftn22"><sup><sup>[22]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter IX Section 45</p>
<p style="text-align: justify; "><a href="#_ftnref23" name="_ftn23"><sup><sup>[23]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter XIV section 97</p>
<p style="text-align: justify; "><a href="#_ftnref24" name="_ftn24"><sup><sup>[24]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VII section 31</p>
<p style="text-align: justify; "><a href="#_ftnref25" name="_ftn25"><sup><sup>[25]</sup></sup></a> Srikrishna Committee Report on Data Protection pg. 36 and 37. Available at: http://www.prsindia.org/uploads/media/Data%20Protection/Committee%20Report%20on%20Draft%20Personal%20Data%20Protection%20Bill,%202018.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref26" name="_ftn26"><sup><sup>[26]</sup></sup></a> https://www.ciosummits.com/Online_Assets_DocAuthority_Whitepaper_-_Guide_to_Intelligent_GDPR_Compliance.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref27" name="_ftn27"><sup><sup>[27]</sup></sup></a> https://jolt.law.harvard.edu/assets/articlePDFs/v31/31HarvJLTech217.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref28" name="_ftn28"><sup><sup>[28]</sup></sup></a> https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_personal_data_v2.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref29" name="_ftn29"><sup><sup>[29]</sup></sup></a> https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref30" name="_ftn30"><sup><sup>[30]</sup></sup></a> https://www.artificial-intelligence.blog/news/capsule-networks</p>
<p style="text-align: justify; "><a href="#_ftnref31" name="_ftn31"><sup><sup>[31]</sup></sup></a> http://raird.no/about/factsheet.html</p>
<p style="text-align: justify; "><a href="#_ftnref32" name="_ftn32"><sup><sup>[32]</sup></sup></a> https://www.darpa.mil/attachments/XAIProgramUpdate.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref33" name="_ftn33"><sup><sup>[33]</sup></sup></a> https://www.darpa.mil/attachments/XAIProgramUpdate.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref34" name="_ftn34"><sup><sup>[34]</sup></sup></a> https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime</p>
<p style="text-align: justify; "><a href="#_ftnref35" name="_ftn35"><sup><sup>[35]</sup></sup></a> <i>R C Cooper</i> v. <i>Union of India</i>, 1970 SCR (3) 530.</p>
<p style="text-align: justify; "><a href="#_ftnref36" name="_ftn36"><sup><sup>[36]</sup></sup></a> <i>Maneka Gandhi</i> v. <i>Union of India</i>, 1978 SCR (2) 621.</p>
<p style="text-align: justify; "><a href="#_ftnref37" name="_ftn37"><sup><sup>[37]</sup></sup></a> 94 US 113 (1877).</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india'>https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india</a>
</p>
No publisherAmber Sinha and Elonnai HickokInternet GovernanceArtificial IntelligencePrivacy2018-09-03T13:29:12ZBlog EntryUNESCAP Google AI Meeting
https://cis-india.org/internet-governance/news/unescap-google-ai-meeting
<b>Arindrajit was a panelist at an event on AI in public service delivery hosted by UNESCAP in Bangkok on August 29, 2018. The event was co-organized by the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP) and Google.</b>
<p style="text-align: justify; ">The discussion centred on two questions: (1) Is AI different from past technological advancements? and (2) What can policy-makers do to enhance the use of AI in public service delivery? The other panelists were Dr. Urs Gasser (Berkman), Vidushi Marda (Article 19), Malavika Jayaram (Digital Asia Hub), and Jake Lucchi (Google). The panel was a platform to discuss some of the findings of our case studies on healthcare and agriculture, which will receive comments and be published in November.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/unescap-google-ai-meeting'>https://cis-india.org/internet-governance/news/unescap-google-ai-meeting</a>
</p>
No publisherAdminInternet GovernanceArtificial IntelligencePrivacy2018-09-20T15:47:42ZNews ItemUNDP joins Tech Giants in Partnership on AI
https://cis-india.org/internet-governance/news/undp-august-1-2018-undp-joins-tech-giants-in-partnership-on-ai
<b>UNDP joins the Partnership on Artificial Intelligence (AI), a consortium of companies, academics, and NGOs working to ensure that AI is developed in a safe, ethical, and transparent manner. Founded in 2016 by the tech giants Amazon, DeepMind/Google, Facebook, IBM, and Microsoft, it has since been joined by industry leaders such as Accenture, Intel, the Oxford Internet Institute (University of Oxford), and eBay, as well as non-profit organizations such as UNICEF and Human Rights Watch, among many others.</b>
<p style="text-align: justify; ">This was published by <a class="external-link" href="http://www.undp.org/content/undp/en/home/news-centre/news/2018/undp-joins-tech-giants-in-partnership-on-ai.html">UNDP</a> on its website on August 1, 2018.</p>
<hr />
<p style="text-align: justify; ">Through the partnership, UNDP’s Innovation Facility will work with partners and communities to responsibly test and scale the use of AI to achieve the Sustainable Development Goals. By harnessing the power of data, we can inform risk assessment, policy, and programme evaluation; we can also use robotics and the Internet of Things (IoT) to collect data and reach those previously deemed unreachable - to leave no one behind.</p>
<p style="text-align: justify; ">UNDP’s AI portfolio is growing rapidly. Drones and remote sensing are used to improve data collection and inform decisions: in the Maldives for disaster preparedness, and in Uganda to engage refugee and host communities in jointly developing infrastructure. We partnered with IBM to automate <a href="http://www.undp.org/content/undp/en/home/blog/2018/ai-and-the-future-of-our-work.html">UNDP’s Rapid Integrated Assessment</a>, aligning national development plans and sectoral strategies with the 169 targets of the Sustainable Development Goals; and with UNEP, UNDP has launched the <a href="http://www.undp.org/content/undp/en/home/news-centre/news/2018/un-biodiversity-lab-launched-to-revolutionize-biodiversity-plann.html">UN Biodiversity Lab</a>, powered by MapX. The spatial data platform will help countries support conservation efforts and accelerate delivery of the 2030 Agenda.</p>
<p style="text-align: justify; ">In line with UNDP’s Strategic Plan 2018-2021, innovation plays a central role in fulfilling the organization’s mission and achieving the Sustainable Development Goals. Benjamin Kumpf, UNDP’s Innovation Facility Lead states, “advances in robotics and AI have the potential to radically redefine human development pathways. The path to such redefinitions entails concrete AI experiments to increase the effectiveness of our work as well as norm-setting: we have to think beyond guidelines for ethical AI to designing accountability frameworks.”</p>
<p style="text-align: justify; ">The Partnership on AI aims to advance public understanding of AI, formulate best practices, and serve as an open platform for discussion and engagement about AI and its influences on people and society.</p>
<p style="text-align: justify; "><b>Full list of partners</b></p>
<p style="text-align: justify; ">Amazon, Apple, DeepMind, Facebook, Google, IBM, Microsoft, AAAI, ACLU, Accenture, Affectiva, AI Forum New Zealand, AI Now Institute, The Allen Institute for Artificial Intelligence (AI2), Amnesty International, ARTICLE 19, Association for Computing Machinery, Center for Democracy & Technology (CDT), Center for Human-Compatible Artificial Intelligence, Center for Information Technology Policy at Princeton University, Centre for Internet and Society, India (CIS), Leverhulme Centre for the Future of Intelligence (CFI), Cogitai, Data & Society Research Institute, Digital Asia Hub, Doteveryone, eBay, Element AI, Electronic Frontier Foundation (EFF), Fraunhofer IAO, Future of Humanity Institute, Future of Life Institute, The Future of Privacy Forum, The Hastings Center, Hong Kong University of Science and Technology Department of Electronic & Computer Engineering, Human Rights Watch, Intel, Markkula Center for Applied Ethics at Santa Clara University, McKinsey & Company, NVIDIA, Omidyar Network, OpenAI, Oxford Internet Institute - University of Oxford, Salesforce, SAP, Sony, Tufts University HRI Lab, UCL Engineering, UNDP, UNICEF, University of Washington Tech Policy Lab, Upturn, XPRIZE, Zalando</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/undp-august-1-2018-undp-joins-tech-giants-in-partnership-on-ai'>https://cis-india.org/internet-governance/news/undp-august-1-2018-undp-joins-tech-giants-in-partnership-on-ai</a>
</p>
No publisherAdminInternet GovernanceArtificial Intelligence2018-08-13T15:51:48ZNews ItemEthical Data Design Practices in the AI (Artificial Intelligence) Age
https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age
<b>Shweta Mohandas was a panelist at a discussion on Ethical Data Design Practices in the AI (Artificial Intelligence) Age, organised by Startup Grind Bangalore on July 28, 2018 at NUMA Bangalore.</b>
<h2>Agenda</h2>
<p><b>Ethical Data Design Practices in the AI (Artificial Intelligence) Age</b></p>
<p dir="ltr" style="text-align: justify; ">The panel discussion is intended to explore the challenges we face when designing the user experiences of the complex behavioral agents that increasingly run our lives.</p>
<p dir="ltr">The discussion centred on how to:</p>
<ul>
<li>Understand current thinking by the AI community on ethics and morality in computing and the challenges it presents. </li>
<li>Explore examples of the ethical choices that products make now and will make in the near future.</li>
<li>Learn how designers might approach designing experiences that face moral dilemmas.</li>
</ul>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age'>https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age</a>
</p>
No publisherAdminInternet GovernanceArtificial IntelligencePrivacy2018-08-01T23:14:21ZNews ItemThe rise of AI in Indian healthcare industry: An innovative asset to the rescue
https://cis-india.org/internet-governance/news/media-india-group-june-27-2018-binita-punwani-rise-of-ai-in-indian-healthcare-industry
<b>The use of Artificial Intelligence (AI) is rapidly increasing with the growth of start-ups and large Information and Communications Technology (ICT) companies that offer AI healthcare solutions for healthcare challenges in India.</b>
<p class="clearfix" style="text-align: justify; ">The blog post was published by <a class="external-link" href="https://mediaindia.eu/digital/the-rise-of-ai-in-indian-healthcare-industry/">Media India Group</a> on June 27, 2018. CIS research was quoted.</p>
<hr />
<p class="clearfix" style="text-align: justify; ">There is an uneven ratio of skilled doctors to patients in India. According to the Indian Journal of Public Health (2017 edition), India had 4.8 practicing doctors per 10,000 people; this figure is expected to grow to 6.9 per 10,000 by 2030, still short of the minimum doctor-to-patient ratio of 1:1,000 recommended by the World Health Organisation (WHO). AI can help tackle challenges such as this uneven ratio by making doctors more effective at their jobs, extending high-quality healthcare to rural areas, and training doctors and nurses in complex procedures.</p>
<p class="clearfix" style="text-align: justify; "><b>How does AI in healthcare function?</b></p>
<p class="clearfix" style="text-align: justify; ">AI in the healthcare sector is a range of technologies that enable machines to sense, comprehend, act and learn so that they can carry out administrative and healthcare functions, be used in research and for training purposes. Some of the technologies included in the healthcare sector are natural language processing, intelligent agents, computer vision, machine learning, chatbots, voice recognition etc. These technologies can be adopted at varying levels across the healthcare ecosystem. Machine learning can be used to merge an individual’s omic (genomic, proteomic, metabolic) data with other data sources to predict the probability of developing a disease, which can then be addressed through timely intercessions such as preventative therapy.</p>
<p class="clearfix" style="text-align: justify; "><b>AI in the healthcare sector in India</b></p>
<p class="clearfix" style="text-align: justify; ">AI in the healthcare sector in India holds significant potential. According to a report published by CIS India earlier this year, AI could help add USD 957 billion to the Indian economy by 2035. Of the USD 5.5 billion raised by digital healthcare companies globally in the July-September 2017 quarter, at least 16 Indian healthcare IT companies received funding, the report said. State governments are also supporting AI start-ups.</p>
<p class="clearfix" style="text-align: justify; ">AI is capable of addressing various healthcare challenges in India. The technology is proving beneficial in diagnosis, monitoring of chronic conditions, robot-assisted surgery, drug discovery, and more. Among the several companies exploring uses of AI in healthcare, Microsoft has taken a major initiative along with Apollo and other hospitals to expand its use in segments such as cardiology and eye-care, and for diseases such as tuberculosis and HIV.</p>
<p class="clearfix" style="text-align: justify; ">Healthcare start-ups are also engaging heavily with Artificial Intelligence.</p>
<p class="clearfix" style="text-align: justify; ">Six healthcare start-ups using Artificial Intelligence in India:</p>
<ol style="text-align: justify; ">
<li>Niramai, a Bengaluru-based start-up founded in the year 2016, is using AI for pain-free breast cancer screening.</li>
<li>MUrgency, a Mumbai-based healthcare mobile application, connects people in need of emergency medical response with qualified medical, safety, rescue and assistance professionals.</li>
<li>Advancells, a Noida-based start-up, provides stem cell therapy, also known as regenerative therapy, which has large potential in the field of organ transplantation.</li>
<li>Portea, a Bengaluru-based start-up, offers home visits by doctors, nurses, physiotherapists and technicians. Patients who are unable to visit hospitals can receive assistance from doctors and medical professionals through remote diagnostics and monitoring equipment and point-of-care devices.</li>
<li>AddressHealth, a Bengaluru-based start-up, provides primary paediatric healthcare services to school children, who are screened for hearing, vision, dental health and anthropometry, alongside a medical consultation.</li>
<li>LiveHealth, a Pune-based start-up works as a management information system (MIS) for healthcare providers. It collects samples, manages patient records, diagnoses them and generates reports.</li>
</ol>
<p class="clearfix" style="text-align: justify; ">Artificial Intelligence, the next generation of innovation, will act as an “invisible hand” in revolutionising the healthcare sector, which is expected to grow in India to USD 372 billion by 2022.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/media-india-group-june-27-2018-binita-punwani-rise-of-ai-in-indian-healthcare-industry'>https://cis-india.org/internet-governance/news/media-india-group-june-27-2018-binita-punwani-rise-of-ai-in-indian-healthcare-industry</a>
</p>
No publisherAdminInternet GovernanceArtificial Intelligence2018-08-06T02:40:50ZNews Item