The Centre for Internet and Society
https://cis-india.org
These are the search results for the query, showing results 1 to 15.
What is the problem with ‘Ethical AI’? An Indian Perspective
https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective
<b>On 22 May 2019, the OECD member countries adopted the OECD Council Recommendation on Artificial Intelligence. The Principles, meant to provide an “ethical framework” for governing Artificial Intelligence (AI), were the first set of guidelines signed by multiple governments, including non-OECD members: Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania. </b>
<p style="text-align: justify; ">The article by Arindrajit Basu and Pranav M.B. was <a class="external-link" href="https://cyberbrics.info/what-is-the-problem-with-ethical-ai-an-indian-perspective/">published by cyberBRICS</a> on July 17, 2019.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; ">This was followed by the <a href="https://g20trade-digital.go.jp/dl/Ministerial_Statement_on_Trade_and_Digital_Economy.pdf" rel="noreferrer noopener" target="_blank">G20 adopted human-centred AI Principles</a> on June 9th. These are the latest in a slew of (<a href="https://clinic.cyber.harvard.edu/2019/06/07/introducing-the-principled-artificial-intelligence-project/" rel="noreferrer noopener" target="_blank">at least 32!</a>) public, and private ‘Ethical AI’ initiatives that seek to use ethics to guide the development, deployment and use of AI in a variety of use cases. They were conceived as a response to a range of concerns around algorithmic decision-making, including discrimination, privacy, and transparency in the decision-making process.</p>
<p style="text-align: justify; ">In India, a noteworthy recent document that attempts to address these concerns is the <a href="https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf" rel="noreferrer noopener" target="_blank">National Strategy for Artificial Intelligence</a> published by the National Institution for Transforming India, also called <em>NITI Aayog</em>, in June 2018. As the NITI Aayog Discussion paper acknowledges, India is the fastest growing economy with the second largest population in the world and has a significant stake in understanding and taking advantage of the AI revolution. For these reasons the goal pursued by the strategy is to establish the National Program on AI, with a view to guiding the research and development in new and emerging technologies, while addressing questions on ethics, privacy and security.</p>
<p style="text-align: justify; ">While such initiatives and policy measures are critical to promulgating discourse and focussing awareness on the broad socio-economic impacts of AI, we fear that they are dangerously conflating tenets of existing legal principles and frameworks, such as human rights and constitutional law, with ethical principles – thereby diluting the scope of the former. While we agree that ethics and law can co-exist, ‘Ethical AI’ principles are often drafted in a manner that posits as voluntary positive obligations various actors have taken upon themselves as opposed to legal codes they necessarily have to comply with.</p>
<p style="text-align: justify; ">To have optimal impact, ‘Ethical AI’ should serve as a decision-making framework only in specific instances when human rights and constitutional law do not provide a ready and available answer.</p>
<h3 style="text-align: justify; ">Vague and unactionable</h3>
<p style="text-align: justify; ">Conceptually, ‘Ethical AI’ is a vague set of principles that are often difficult to define objectively. In this perspective, academics like Brett Mittelstadt of the Oxford Internet Institute <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3391293" rel="noreferrer noopener" target="_blank">argues</a> that unlike in the field of medicine – where ethics has been used to design a professional code, ethics in AI suffers from four core flaws. First, developers lack a common aim or fiduciary duty to a consumer, which in the case of medicine is the health and well-being of the patient. Their primary duty lies to the company or institution that pays their bills, which often prevents them from realizing the extent of the moral obligation they owe to the consumer.</p>
<p style="text-align: justify; ">The second is a lack of professional history which can help clarify the contours of well-defined norms of ‘good behaviour.’ In medicine, ethical principles can be applied to specific contexts by considering what similarly placed medical practitioners did in analogous past scenarios. Given the relative nascent emergence of AI solutions, similar professional codes are yet to develop.</p>
<p style="text-align: justify; ">Third is the absence of workable methods or sustained discourse on how these principles may be translated into practice. Fourth, and we believe most importantly, in addition to ethical codes, medicine is governed by a robust and stringent legal framework and strict legal and accountability mechanisms, which are absent in the case of ‘Ethical AI’. This absence gives both developers and policy-makers large room for manoeuvre.</p>
<p style="text-align: justify; ">However, such focus on ethics may be a means of avoiding government regulation and the arm of the law. Indeed, due to its inherent flexibility and non-binding nature, ethics can be exploited as a piecemeal red herring solution to the problems posed by AI. Controllers of AI development are often profit-driven private entities, that gain reputational mileage by using the opportunity to extensively deliberate on broad ethical notions.</p>
<p style="text-align: justify; ">Under the guise of meaningful ‘self-regulation’, several organisations publish internal ‘Ethical AI’ guidelines and principles, and <a href="https://www.newstatesman.com/science-tech/technology/2019/06/how-big-tech-funds-debate-ai-ethics">fund ethics research</a> across the globe. In doing so, they occlude the shackles of binding obligation and deflect from attempts at tangible regulation.</p>
<h3 style="text-align: justify; ">Comparing Law to Ethics</h3>
<p style="text-align: justify; ">This is in contrast to the well-defined jurisprudence that human rights and constitutional law offer, which should serve as the edifice of data-driven decision making in any context.</p>
<p style="text-align: justify; ">In the table below, we try to explain this point by looking at how three core fundamental rights enshrined both in our constitution and human rights instruments across the globe-right to privacy, right to equality/right against discrimination and due process-find themselves captured in three different sets of ‘Ethical AI frameworks.’ One of these inter-governmental <a href="https://www.oecd.org/going-digital/ai/principles/" rel="noreferrer noopener" target="_blank">(OECD)</a>, one devised by a private sector actor (‘<a href="https://ai.google/principles/" rel="noreferrer noopener" target="_blank">Google AI</a>’) and one by our very own, <a href="https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf" rel="noreferrer noopener" target="_blank">NITI AAYOG.</a></p>
<p style="text-align: justify; "><img src="https://cyberbrics.info/wp-content/uploads/2019/07/image.png" /></p>
<p style="text-align: justify; ">With the exception of certain principles,most ‘Ethical AI’ principles are loosely worded as ‘‘seek to avoid’, ‘give opportunity for’, or ‘encourage’. A notable exception is the NITI AAYOG’s approach to protecting privacy in the context of AI. The document explicitly recommends the establishment of a national data protection framework for data protection, sectoral regulations that apply to specific contexts with the consideration of international standards such as GDPR as benchmarks. However, it fails to reference available constitutional standards when it discusses bias or explainability.</p>
<p style="text-align: justify; ">Several similar legal rules that have been enshrined in legal provisions -outlined and elucidated through years of case law and academic discourse – can be utilised to underscore and guide AI principles. However, existing AI principles do not adequately articulate how the legal rule can actually be applied to various scenarios by multiple organisations.</p>
<p style="text-align: justify; ">We do not need a new “Law of Artificial Intelligence” to regulate this space. Judge Frank Easterbrook’s famous 1996 proclamation on the <a href="https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=2147&context=journal_articles">‘Law of the Horse’</a> through which he opposed the creation of a niche field of ‘cyberspace law’ comes to mind. He argued that a multitude of legal rules deal with ‘horses’, including the sale of horses, individuals kicked by horses, and with the licensing and racing of horses. Like with cyberspace, any attempt to arrive at a corpus of specialised ‘law of the horse’ would be shallow and ineffective.</p>
<p style="text-align: justify; ">Instead of fidgeting around for the next shiny regulatory tool, industry, practitioners, civil society and policy makers need to get back to the drawing board and think about applying the rich corpus of existing jurisprudence to AI governance.</p>
<h3 style="text-align: justify; ">What is the role for ‘Ethical AI?’</h3>
<p style="text-align: justify; ">What role can ‘ethical AI’ then play in forging robust and equitable governance of Artificial Intelligence? As it does in all other societal avenues, ‘ethical AI’ should serve as a framework for making legitimate algorithmic decisions in instances where law might not have an answer. An example of such a scenario is the <a href="https://globalnews.ca/news/4125382/google-pentagon-ai-project-maven/" rel="noreferrer noopener" target="_blank">Project Maven saga</a> – where 3,000 Google employees signed a petition opposing Google’s involvement with a US Department of Defense project by claiming that Google should not be involved in “the business of war.” There is no law-international or domestic that suggests that Project Maven-which was designed to study battlefield imagery using AI, was illegal. However, the debate at Google proceeded on ethical grounds and on the application of the ‘Ethical AI’ principles to this present context.</p>
<p style="text-align: justify; ">We realise the importance of social norms and mores in carving out any regulatory space. We also appreciate the role of ethics in framing these norms for responsible behaviour. However, discourse across civil society, academic, industry and government circles all across the globe needs to bring law back into the discussion as a framing device. Not doing so risks diluting the debate and potential progress to a set of broad, unactionable principles that can easily be manipulated for private gain at the cost of public welfare.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective'>https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective</a>
</p>
By Arindrajit Basu and Pranav M.B. | Internet Governance | Artificial Intelligence | Blog Entry | 2019-07-21

We need a better AI vision
https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision
<b>Artificial intelligence conjures up a wondrous world of autonomous processes but dystopia is inevitable unless rights and privacy are protected.</b>
<p style="text-align: justify; ">The blog post by Arindrajit Basu was published by<a class="external-link" href="https://fountainink.in/essay/we-need-a-better-ai-vision-"> Fountainink</a> on October 12, 2019.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; ">he dawn of Artificial Intelligence (AI) has policy-makers across the globe excited. In India, it is seen as a tool to overleap structural hurdles and better understand a range of organisational and management processes while improving the implementation of several government tasks. Notwithstanding the apparent enthusiasm in the government and private sectors, an adequate technological, infrastructural, and financial capacity to develop these models at scale is still in the works.</p>
<p style="text-align: justify; ">A number of policy documents with direct or indirect references to India’s AI future—to be powered by vast troves of data—have been released in the past year and a half. These include the National Strategy for Artificial Intelligence (which I will refer to as National Strategy) authored by NITI Aayog, the AI Taskforce Report, Chapter 4 of the Economic Survey, the Draft e-Commerce Bill and the Srikrishna Committee Report.</p>
<p style="text-align: justify; ">While they extol the virtues of data-driven analytics, references to the preservation or augmentation of India’s constitutional ethos through AI has been limited though it is crucial for safeguarding the rights and liberties of citizens while paving the way for the alleviation of societal oppression.</p>
<p style="text-align: justify; ">In this essay, I outline the variety of AI use cases that are in the works. I then highlight India’s AI vision by culling the relevant aspects of policy instruments that impact the AI ecosystem and identify lacunae that can be rectified. Finally, I attempt to “constitutionalise AI policy” by grounding it in a framework of constitutional rights that guarantee protection to the most vulnerable sections of society.</p>
<blockquote class="synopsis" style="text-align: justify; ">In the manufacturing industry, AI adoption is not uniform across all sectors. But there has been a notable transformation in electronics, heavy electricals and automobiles.</blockquote>
<p style="text-align: justify; ">It is crucial to note that these cases, still emerging in India, have been implemented at scale in other countries such as the United Kingdom, United States and China. Projects were rolled out to the detriment of ethical and legal considerations. Hindsight should make the Indian policy ecosystem much wiser. By closely studying the research produced in these diverse contexts, Indian policy-makers should try to find ways around the ethical and legal challenges that cropped up elsewhere and devise policy solutions that mitigate the concerns raised.</p>
<p style="text-align: justify; ">***</p>
<p style="text-align: justify; ">B<span>efore anything else we need to define AI—an endeavour fraught with multiple contestations. My colleagues and I at the Centre for Internet & Society ducked this hurdle when conducting our research by adopting a function-based approach. An AI system (as opposed to one that automates routine, cognitive or non-cognitive tasks) is a dynamic learning system that allows for the delegation of some level of human decision-making to the system. This definition allows us to capture some of the unique challenges and prospects that stem from the use of AI.</span></p>
<p style="text-align: justify; ">The research I contributed to at CIS identified key trends in the use of AI across India. In healthcare, it is used for descriptive and predictive purposes.</p>
<p style="text-align: justify; ">For example, the Manipal Group of Hospitals tied up with IBM’s Watson for Oncology to aid doctors in the diagnosis and treatment of seven types of cancer. It is also being used for analytical or diagnostic services. Niramai Health Analytix uses AI to detect early stage breast cancer and Adveniot Tecnosys detects tuberculosis through chest X-rays and acute infections using ultrasound images. In the manufacturing industry, AI adoption is not uniform across all sectors. But there has been a notable transformation in the electronics, heavy electricals and automobiles sector gradually adopting and integrating AI solutions into their products and processes.</p>
<p style="text-align: justify; ">It is also used in the burgeoning online lending segment in order to source credit score data. As many Indians have no credit scores, AI is used to aggregate data and generate scores for more than 80 per cent of the population who have no credit scores. This includes Credit Vidya, a Hyderabad-based data underwriting start-up that provides a credit score to first time loan-seekers and feeds this information to big players such as ICICI Bank and HDFC Bank, among others. It is also used by players such as Mastercard for fraud detection and risk management. In the finance world, companies such as Trade Rays are being used to provide user-friendly algorithmic trading services.</p>
<blockquote class="synopsis" style="text-align: justify; ">AI is also being increasingly used in the education sector for providing services to students such as decision-making assistance and also for student-progress monitoring.</blockquote>
<p style="text-align: justify; ">The next big development is in law enforcement. Predictive policing is making great strides in various states, including Delhi, Punjab, Uttar Pradesh and Maharashtra. A brainchild of the Los Angeles Police Department, predictive policing is the use of analytical techniques such as Machine Learning to identify probable targets for intervention to prevent crime or to solve past crime through statistical predictions.</p>
<p style="text-align: justify; ">Conventional approaches to predictive policing start with the mapping of locations where crimes are concentrated (hot spots) by using algorithms to analyse aggregated data sets. Police in Uttar Pradesh and Delhi have partnered with the Indian Space Research Organisation (ISRO) in a Memorandum of Understanding to allow ISRO’s Advanced Data Processing Research Institute to map, visualise and compile reports about crime-related incidents.</p>
<p style="text-align: justify; ">There are aggressive developments also on the facial recognition front. Punjab Police, in association with Gurugram-based start-up Staqu has started implementing the Punjab Artificial Intelligence System (PAIS) which uses digitised criminal records and automated facial recognition to retrieve information on the suspected criminal. At the national level, on June 28, the National Crime Records Bureau (NCRB) called for tenders to implement a centralised Automated Facial Recognition System (AFRS), defining the scope of work in broad terms as the “supply, installation and commissioning of hardware and software at NCRB.”</p>
<p style="text-align: justify; ">AI is also being increasingly used in the education sector for providing services to students such as decision-making assistance and also for student-progress monitoring. The Andhra Pradesh government had started collecting information from a range of databases and processes the information through Microsoft’s Machine Learning Platform to monitor children and devote student focussed attention on identifying and curbing school drop-outs.</p>
<p style="text-align: justify; ">In Andhra Pradesh, Microsoft collaborated with the International Crop Institute for Semi-Arid Tropics (ICRISAT) to develop an AI Sowing App powered by Microsoft’s Cortana Intelligence Suite. It aggregated data using Machine Learning and sent advisories to farmers regarding optimal dates to sow. This was done via text messages on feature phones after ground research revealed that not many farmers owned or were able to use smart phones. The NITI Aayog AI Strategy specifically cited this use case and reported that this resulted in a 10-30 per cent increase in crop yield. The government of Karnataka has entered into a similar arrangement with Microsoft.</p>
<p style="text-align: justify; ">Finally, in the defence sector, our research found enthusiasm for AI in intelligence, surveillance and reconnaissance (ISR) functions, cyber defence, robot soldiers, risk terrain analysis and moving towards autonomous weapons systems. These projects are being developed by the Defence Research and Development Organisation but the level of trust and support in AI-driven processes reposed by the wings of the armed forces is yet to be publicly clarified. India also had the privilege of leading the global debate on Lethal Autonomous Weapons Systems (LAWS) with Amandeep Singh Gill chairing the United Nations Group of Governmental Experts (UN-GGE) on the issue. However, ‘lethal’ autonomous weapons systems at this stage appear to be a speck in the distant horizon.</p>
<p style="text-align: justify; ">***</p>
<p style="text-align: justify; ">A<span>long with the range of use cases described above, a patchwork of policy imperatives is emerging to support this ecosystem. The umbrella document is the National Strategy for Artificial Intelligence published by the NITI Aayog in June 2018. Despite certain lacunae in its scope, the existence of a cohesive and robust document that lends a semblance of certainty and predictability to a rapidly emerging sphere is in itself a boon. The document focuses on how India can leverage AI for both economic growth and social inclusion. The contents of the document can be divided into a few themes, many of which have also found their way into multiple other instruments.</span></p>
<p style="text-align: justify; ">NITI Aayog provides over 30 policy recommendations on investment in scientific research, reskilling, training and enabling the speedy adoption of AI across value chains. The flagship research initiative is a two-tiered endeavour to boost AI research in India. First, new centres of research excellence (COREs) will develop fundamental research. The COREs will act as feeders for international centres for transformational AI which will focus on creating AI-based applications across sectors.</p>
<p style="text-align: justify; "><img src="https://cis-india.org/home-images/AIinCountries.jpg/@@images/16b4af34-cb6d-423c-be35-e45a60d501cf.jpeg" alt="AI in Countries" class="image-inline" title="AI in Countries" /></p>
<p style="text-align: justify; ">This is an impressive theoretical objective but questions surrounding implementation and structures of operation remain to be answered. China has not only conceptualised an ecosystem but through the Three Year Action Plan to Promote the Development of New Generation Artificial Intelligence Industry, it has also taken a whole-of-government approach to propelling the private sector to an e-leadership position. It has partnered with national tech companies and set clear goals for funding, such as the $2.1 billion technology park for AI research in Beijing.</p>
<p style="text-align: justify; ">The contents of the NITI document can be divided into a few themes, many of which have also found their way into multiple other instruments. First, it proposes an “AI+X” approach that captures the long-term vision for AI in India. Instead of replacing the processes in their entirety, AI is understood as an enabler of efficiency in processes that already exist. NITI Aayog therefore looks at the process of deploying AI-driven technologies as taking an existing process (X) and adding AI to them (AI+X). This is a crucial recommendation all AI projects should heed. Instead of waving AI as an all-encompassing magic wand across sectors, it is necessary to identify specific gaps AI can seek to remedy and then devise the process underpinning this implementation.</p>
<blockquote class="synopsis" style="text-align: justify; ">A cacophony of policy instruments by multiple government departments seeks to reconceptualise data to construct a theoretical framework that allows for its exploitation for AI-driven analytics.</blockquote>
<p style="text-align: justify; ">The AI-driven intervention to develop sowing apps for farmers in Karnataka and Andhra Pradesh are examples of effective implementation of this approach. Instead of other knee-jerk reactions to agrarian woes such as a hasty raising of Minimum Support Price, effective research was done in this use-case to identify a lack of predictability in weather patterns as a key factor in productive crop yields. They realised that aggregation of data through AI could provide farmers with better information on weather patterns. As internet penetration was relatively low in rural Karnataka, text messages to feature phones that had a far wider presence was indispensable to the end game.</p>
<p style="text-align: justify; ">***</p>
<p style="text-align: justify; ">T<span>his is in contrast to the ill-conceived path adopted by the Union ministry of electronics and information technology in guidelines for regulating social media platforms that host content (“intermediaries”). Rule 3(9) of the Draft of the Information Technology [Intermediary Guidelines (Amendment) Rules] 2018 mandates intermediaries to use “automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content”.</span></p>
<p style="text-align: justify; ">Proposed in light of the fake news menace and the unbridled spread of “extremist” content online, the use of the phrase “automated tools or appropriate mechanisms” is reflective of an attitude that fails to consider ground realities that confront companies and users alike. They ignore, for instance, the cost of automated tools: whether automated content moderation techniques developed in the West can be applied to Indic languages or grievance redress mechanisms users can avail of if their online speech is unduly restricted. This is thus a clear case of the “AI” mantra being drawn out of a hat without studying the “X” it is supposed to remedy.</p>
<p style="text-align: justify; ">The second focus of the National Strategy that has since morphed into a technology policy mainstay across instruments is on data governance, access and utilisation. The document says the major hurdle to the large scale adoption of AI in India is the difficulty in accessing structured data. It recommends developing big annotated data sets to “democratise data and multi-stakeholder marketplaces across the AI value chain”. It argues that at present only one per cent of data can be analysed as it exists in various unconnected silos. Through the creation of a formal market for data, aggregators such as diagnostic centres in the healthcare sector would curate datasets and place them in the market, with appropriate permissions and safeguards. AI firms could use available datasets rather than wasting effort sourcing and curating the sets themselves.</p>
<p style="text-align: justify; ">A cacophony of policy instruments by multiple government departments seeks to reconceptualise data to construct a theoretical framework that allows for its exploitation for AI-driven analytics.The first is “community data” and appears both in the Srikrishna Report that accompanied the draft Data Protection Bill in 2018 and the draft e-commerce policy.</p>
<p style="text-align: justify; ">But there appears to be some conflict between its usage in the two. Srikrishna endorses a collective protection of privacy by protecting an identifiable community that has contributed to community data. This requires the fulfilment of three key conditions: <i>first,</i> the data belong to an identifiable community; <i>second, </i>individuals in the community consent to being a part of it, and <i>third</i>, the community as a whole consents to its data being treated as community data. On the other hand, the Department of Promotion of Industry and Internal Trade’s (DPIIT) draft e-commerce policy looks at community data as “societal commons” or a “national resource” that gives the community the right to access it but government has ultimate and overriding control of the data. This configuration of community data brings into question the consent framework in the Srikrishna Bill.</p>
<blockquote class="synopsis" style="text-align: justify; ">The government’s attempt to harness data as a national resource for the development of AI-based solutions may be well-intentioned but is fraught with core problems in implementation.</blockquote>
<p style="text-align: justify; ">The matter is further confused by treating “data as a public good”. This is projected in Chapter 4 of the 2019 Economic Survey published by the Ministry of Finance. It explicitly states that any configuration needs to be deferential to privacy norms and the upcoming privacy law. The “personal data” of an individual in the custody of a government is also a “public good” once the datasets are anonymised. At the same time, it pushes for the creation of a government database that links several individual databases, which leads to the “triangulation” problem, where matching different datasets together allows for individuals to be identified despite their anonymisation in seemingly disparate databases.</p>
<p style="text-align: justify; ">“Building an AI ecosystem” was also one of the ostensible reasons for data localisation—the government’s gambit to mandate that foreign companies store the data of Indian citizens within national borders. In addition to a few other policy instruments with similar mandates, Section 40 of the Draft Personal Data Protection Bill mandates that all “critical data” (this is to be notified by the government) be stored exclusively in India. All other data should have a live, serving copy stored in India even if transfer abroad is allowed. This was an attempt to ensure foreign data processors are not the sole beneficiaries of AI-driven insights.</p>
<p style="text-align: justify; ">The government’s attempt to harness data as a national resource for the development of AI-based solutions may be well intentioned but is fraught with core problems in implementation. First, the notion of data as a national resource or as a public good walks a tightrope with constitutionally guaranteed protections around privacy, which will be codified in the upcoming Personal Data Protection Bill. My concerns are not quite so grave in the case of genuine “public data” like traffic signal data or pollution data. However, the Economic Survey manages to crudely amalgamate personal data into the mix.</p>
<p style="text-align: justify; ">It also states that personal data in the custody of a government is a public good once the datasets are anonymised. This includes transactions data in the User Payments Interface (UPI), administrative data including birth and death records, and institutional data including data in public hospitals or schools on pupils or patients. At the same time, it pushes for a government database that will lead to the triangulation problem outlined above. The chapter also suggests that said data may be sold to private firms (unclear if this includes foreign or domestic firms). This not only contradicts the notion of public good but is also a serious threat to the confidentiality and security of personal data.</p>
<p style="text-align: justify; ">***</p>
<p style="text-align: justify; ">T<span>herefore, along with the concerted endeavour to create data marketplaces, it is crucial for policy-makers to differentiate between public data and personal data individuals may consent to be made public. The parameters for clearly defining free and informed consent, as codified in the Draft Personal Data Protection Bill need to be strictly followed as there is a risk of de-anonymisation of data once it finds its way into the marketplace. Second, it is crucial for policy-makers to define clearly a community and parameters for what constitutes individual consent to be part of a community. Finally, along with technical work on setting up a national data marketplace, there must be protracted efforts to guarantee greater security and standards of anonymisation.</span></p>
<blockquote class="synopsis" style="text-align: justify; ">The National Strategy mentions that India should position itself as a “garage” for AI in emerging economies. This could mean Indian citizens are used as guinea pigs for AI-driven solutions at the cost of their rights.</blockquote>
<p style="text-align: justify; ">Assuming that a constitutionally valid paradigm may be created, the excessive focus on data access by tech players dodges the question of the capabilities of analytic firms to process this data and derive meaningful insights from the information. Scholars on China, arguably the poster-child of data-driven economic growth, have sent mixed messages. Ding argues that despite having half the technical capabilities of the US, easy access to data gives China a competitive edge in global AI competition. On the contrary, Andrew Ng has argued that operationalising a sufficient number of relevant datasets still remains a challenge. Ng’s views are backed up by insiders at Chinese tech giant Tencent who say the company still finds it difficult to integrate data streams due to technical hurdles. NITI Aayog’s idea of a multi-stream data marketplace may theoretically be a solution to these potential hurdles but requires sustained funding and research innovation to be converted into reality.</p>
<p style="text-align: justify; ">The National Strategy suggests that government should create a multi-disciplinary committee to set up this marketplace and explore levers for its implementation. This is certainly the need of the hour. It also rightly highlights the importance of research partnerships between academia and the private sector, and the need to support start-ups. There is therefore an urgent need for innovative allied policy instruments that support the burgeoning start-up sector. Proposals such as data localisation may hurt smaller players as they will have to bear the increased fixed costs of setting up or renting data centres.</p>
<p style="text-align: justify; ">The National Strategy also incongruously mentions that India should position itself as a “garage” for the use of AI in emerging economies. This could mean Indian citizens are used as guinea pigs for AI-driven solutions at the cost of their fundamental rights. It could also imply that India should occupy a leadership position and work with other emerging economies to frame the global rights based discourse to seek equitable solutions for the application of AI that works to improve the plight of the most vulnerable in society.</p>
<p style="text-align: justify; ">***</p>
<p style="text-align: justify; ">O<span>ur constitutional ethos places us in a unique position to develop a framework that enables the actualisation of this equitable vision—a goal the policy instruments put out thus far appear to have missed. While the National Strategy includes a section on privacy, security and ethical implications of AI, it stops short of rooting it in fundamental rights and constitutional principles. As a centralised policy instrument, the National Strategy deserves praise for identifying key levers in the future of India’s AI ecosystem and, with the exception of the concerns I outlined above, it is at par with the policy-making thought process in any other nation.</span></p>
<p style="text-align: justify; ">When we start the process of using constitutional principles for AI governance, we must remember that as per Article 12, an individual can file a writ against the state for violation of a fundamental right if the action is taken under the aegis of a “public function”. To combat discrimination by private actors, the state can enact legislation compelling private actors to comply with constitutional mandates. In July, Rajeev Chandrashekhar, a Rajya Sabha MP, suggested a law to combat algorithmic discrimination along the lines of the Algorithmic Accountability Bill proposed in the US Senate. There are three core constitutional questions along the lines of the “golden triangle” of the Indian Constitution any such legislation will need to answer—those of accountability and transparency, algorithmic discrimination and the guarantee of freedom of expression and individual privacy.</p>
<p style="text-align: justify; ">Algorithms are developed by human beings who have their own cognitive biases. This means ostensibly neutral algorithms can have an unintentional disparate impact on certain, often traditionally disenfranchised groups.</p>
<p style="text-align: justify; ">In the <i>MIT Technology Review</i>, Karen Hao explains three stages at which bias might creep in. The first stage is the framing of the problem itself. As soon as computer scientists create a deep-learning model, they decide what they want the model to finally achieve. However, frequently desired outcomes such as “profitability”, “creditworthiness” or “recruitability” are subjective and imprecise concepts subject to human cognitive bias. This makes it difficult to devise screening algorithms that fairly portray society and the complex medley of identities, attributes and structures of power that define it.</p>
<p style="text-align: justify; ">The second stage Hao mentions is the data collection phase. Training data could lead to bias if it is unrepresentative of reality or represents entrenched prejudice or structural inequality. For example, most Natural Language Processing systems used for Parts of Speech (POS) tagging in the US are trained on the readily available data sets from the <i>Wall Street Journal</i>. Accuracy would naturally decrease when the algorithm is applied to individuals—largely ethnic minorities—who do not mimic the speech of the <i>Journal</i>.</p>
<p style="text-align: justify; ">According to Hao, the final stage for algorithmic bias is data preparation, which involves selecting parameters the developer wants the algorithm to consider. For example, when determining the “risk-profile” of car owners seeking insurance premiums, geographical location could be one parameter. This could be justified by the ostensibly neutral argument that those residing in inner-city areas with narrower roads are more likely to have scratches on their vehicles. But as inner cities in the US have a disproportionately high number of ethnic minorities or other vulnerable socio-economic groups, “pin code” becomes a facially neutral proxy for race or class-based discrimination.</p>
<p style="text-align: justify; ">***</p>
<p style="text-align: justify; ">T<span>he right to equality has been carved into multiple international human rights instruments and into the Equality Code in Articles 14-18 of the Indian Constitution. The dominant approach to interpreting the right to equality by the Supreme Court has been to focus on “grounds” of discrimination under Article 15(1), thus resulting in a lack of recognition of unintentional discrimination and disparate impact.</span></p>
<p style="text-align: justify; ">A notable exception, as constitutional scholar Gautam Bhatia points out, is the case of <i>N.M. Thomas </i>which pertained to reservation in promotions. Justice Mathew argued that the test for inequality in Article 16(4) is an effects-oriented test independent of the formal motivation underlying a specific act. Justice Krishna Iyer and Mathew also articulated a grander vision wherein they saw the Equality Code as transcending the embedded individual disabilities in class driven social hierarchies. This understanding is crucial for governing data driven decision-making that impacts vulnerable communities. Any law or policy on AI-related discrimination must also include disparate impact within its definition of “discrimination” to ensure that developers think about the adverse consequences even of well-intentioned decisions.</p>
<p style="text-align: justify; ">AI driven assessments have been challenged on grounds of constitutional violations in other jurisdictions. In 2016, the Wisconsin Supreme Court considered the legality of using risk assessment tools such as COMPAS for sentencing criminals. It affirmed the trial court’s findings and held that using COMPAS did not violate constitutional due process standards. Eric Loomis had argued that using COMPAS infringed both his right to an individualised sentence and to accurate information as COMPAS provided data for specific groups and kept the methodology used to prepare the report a trade secret. He additionally argued that the court used unconstitutional gendered assessments as the tool used gender as one of the parameters.</p>
<p style="text-align: justify; ">The Wisconsin Supreme Court disagreed with Loomis arguing that COMPAS only used publicly available data and data provided by the defendant, which apparently meant Loomis could have verified any information contained in the report. On the question of individualisation, the court argued that COMPAS provided only aggregate data for groups similarly placed to the offender. However, it went on to argue as the report was not the sole basis for a decision by the judge, a COMPAS assessment would be sufficiently individualised as courts retained the discretion and information necessary to disagree.</p>
<p style="text-align: justify; ">By assuming that Loomis could have genuinely verified all the data collected about similarly placed groups and that judges would exercise discretion to prevent the entrenchment of inequalities through COMPAS’s decision-making patterns, the judges ignored social realities. Algorithmic decision-making systems are an extension of unequal decision-making that re-entrenches prevailing societal perceptions around identity and behaviour. An instance of discrimination cannot be looked at as a single instance but as one in a menagerie of production systems that define, modulate and regulate social existence.</p>
<p style="text-align: justify; ">The policy-making ecosystem needs, therefore, to galvanise the “transformative” vision of India’s democratic fibre and study existing systems and power structures AI could re-entrench or mitigate. For example, in the matter of bank loans there is a presumption against the credit-worthiness of those working in the informal sector. The use of aggregated decision-making may lead to more equitable outcomes given that there is concrete thought on the organisational structures making these decisions and the constitutional safeguards provided.</p>
<p style="text-align: justify; ">Most case studies on algorithmic discrimination in Virgina Eubanks’ <i>Automating Inequality </i>or Safiya Noble’s <i>Algorithms of Oppression</i> are based on western contexts. There is an urgent need for publicly available empirical studies on pilot cases in India to understand the contours of discrimination. Primary research questions should explore three related subjects. Are specified ostensibly neutral variables being used to exclude certain communities from accessing opportunities and resources or having a disproportionate impact on their civil liberties? Is there diversity in the identities of the coders themselves? Are the training data sets used representative and diverse and, finally, what role does data driven decision-making play in furthering the battle against embedded structural hierarchies?</p>
<p style="text-align: justify; ">***</p>
<p style="text-align: justify; ">A key feature of AI-driven solutions is the “black box” that processes inputs and generates actionable outputs behind a veil of opacity to the human operator. Essentially, the black box denotes that aspect of the human neural decision-making function that has been delegated to the machine. A lack of transparency or understanding could lead to what Frank Pasquale terms a “Black Box Society” where algorithms define the trajectories of daily existence unless “the values and prerogatives of the encoded rules hidden within black boxes” are challenged.</p>
<p style="text-align: justify; ">Ex-<i>post facto</i> assessment is often insufficient for arriving at genuine accountability. For example, the success of predictive policing in the US was drawn from the fact that police have indeed found more crimes in areas deemed “high risk”. But this assessment does not account for the fact that this is a product of a vicious cycle through which more crime is detected in an area simply because more policemen are deployed. Here, the National Strategy rightly identifies that simply opening up code may not deconstruct the black box as not all stakeholders impacted by AI solutions may understand the code. The constant aim should be explicability which means the human developer should be able to explain how certain factors may be used to arrive at a certain cluster of outcomes in a given set of situations.</p>
<p style="text-align: justify; ">The requirement of accountability stems from the Right to Life provision under Article 21. As stated in the seven-judge bench in <i>Maneka Gandhi vs. Union of India</i>, any procedure established by law must be seen to be “fair, just and reasonable” and not “fanciful, oppressive or arbitrary.”</p>
<p style="text-align: justify; ">The Right to Privacy was recognised as a fundamental right by the nine-judge bench in <i>K.S. Puttaswamy (Retd.) vs. Union of India</i>. Mass surveillance can lead to the alteration of behavioural patterns which may in turn be used for the suppression of dissent by the State. Pulling vast tracts of data on all suspected criminals—as in facial recognition systems like PAIS—create a “presumption of criminality” that can have a chilling effect on democratic values.</p>
<p style="text-align: justify; ">Therefore, any use, particularly by law enforcement would need to satisfy the requirements for infringing on the right to privacy: the existence of a law, necessity—a clearly defined state objective—and proportionality between the state object and the means used restricting fundamental rights the least. Along with centralised policy instruments such as the National Strategy, all initiatives taken in pursuance of India’s AI agenda must pay heed to the democratic virtues of privacy and free speech and their interlinkages.</p>
<p style="text-align: justify; ">India needs a law to regulate the impact of Artificial Intelligence and enable its development without restricting fundamental rights. However, regulation should not adopt a “one-size-fits-all” approach that views all uses with the same level of rigidity. Regulatory intervention should be based on questions around power asymmetries and the likelihood of the use case adversely affronting human dignity captured by India’s constitutional ethos.</p>
<blockquote class="synopsis" style="text-align: justify; ">As an aspiring leader in global discourse, India can lay the rules of the road for other emerging economies not only by incubating, innovating and implementing AI powered technologies but by grounding it in a lattice of rich constitutional jurisprudence that empowers the individual.</blockquote>
<p style="text-align: justify; ">The High Level Task Force on Artificial Intelligence (AI HLEG) set up by the European Commission in June 2018 published a report on “Ethical Guidelines for Trustworthy AI” earlier this year. They feature seven core requirements which include human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. While the principles are comprehensive, this document stops short of referencing any domestic or international constitutional law that helps cement these values. The Indian Constitution can help define and concretise each of these principles and could be used as a vehicle to foster genuine social inclusion and mitigation of structural injustice through AI.</p>
<p style="text-align: justify; ">At the centre of the vision must be the inherent rights of the individual. The constitutional moment for data driven decision-making emerges therefore when we conceptualise a way through which AI can be utilised to preserve and improve the enforcement of rights while also ensuring that data does not become a further avenue for exploitation.</p>
<p style="text-align: justify; ">National vision transcends the boundaries of policy and to misuse Peter Drucker, “eats strategy for breakfast”. As an aspiring leader in global discourse, India can lay the rules of the road for other emerging economies not only by incubating, innovating and implementing AI powered technologies but by grounding it in a lattice of rich constitutional jurisprudence that empowers the individual, particularly the vulnerable in society. While the multiple policy instruments and the National Strategy are important cogs in the wheel, the long-term vision can only be framed by how the plethora of actors, interest groups and stakeholders engage with the notion of an AI-powered Indian society.</p>
<hr style="text-align: justify; " />
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision'>https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision</a>
</p>
By Arindrajit Basu | Internet Governance | Artificial Intelligence | Blog Entry | 2019-10-14

Unpacking Algorithmic Infrastructures: Mapping the Data Supply Chain in the Healthcare Industry in India
https://cis-india.org/raw/unpacking-algorithmic-infrastructures
<b>The Unpacking Algorithmic Infrastructures project, supported by a grant from the Notre Dame-IBM Tech Ethics Lab, aims to study the AI data supply chain infrastructure in healthcare in India and to critically analyse the auditing frameworks that are utilised to develop and deploy AI systems in healthcare. It will map the prevalence of AI auditing practices within the sector to arrive at an understanding of frameworks that may be developed to check for ethical considerations, such as algorithmic bias and harm, within healthcare systems, especially against marginalised and vulnerable populations.</b>
<p style="text-align: justify; ">There has been an increased interest in health data in India over the recent years, where health data policies encourage sharing of data with different entities, at the same time, there has been a growing interest in deployment of Al in healthcare from startups, hospitals, as well as multinational technology companies.</p>
<p style="text-align: justify; ">Given the invisibility of algorithmic infrastructures that underlie the digital economy and the important decisions these technologies can make about patients' health, it's important to look at how these systems are developed, how data flows within them, how these systems are tested and verified and what ethical considerations inform their deployment.</p>
<p style="text-align: justify; "><img src="https://cis-india.org/home-images/ResearchersWork.png/@@images/00a848c7-b7f7-41b4-8bd9-45f2928fd44e.png" alt="Researchers at Work" class="image-inline" title="Researchers at Work" /></p>
<p style="text-align: justify; "><strong>The </strong><strong>Unpacking Algorithmic Infrastructures</strong> project, supported by a grant from the Notre Dame-IBM Tech Ethics Lab, aims to study the Al data supply chain infrastructure in healthcare in India, and aims to critically analyse auditing frameworks that are utilised to develop and deploy AI systems in healthcare. It will map the prevalence of Al auditing practices within the sector to arrive at an understanding of frameworks that may be developed to check for ethical considerations - such as algorithmic bias and harm within healthcare systems, especially against marginalised and vulnerable populations.</p>
<h3 style="text-align: justify; ">Research Questions</h3>
<ol>
<li style="text-align: justify; ">To what extent organisations take ethical principles into account when developing AI , managing the training and testing dataset, and while deploying the AI in the healthcare sector.</li>
<li style="text-align: justify; ">What best practices for auditing can be put in place based on our critical understanding of AI data supply chains and auditing frameworks being employed in the healthcare sector.</li>
<li style="text-align: justify; ">What is a possible auditing framework that is best suited to organisations in the majority world.</li>
</ol>
<h3>Research Design and Methods</h3>
<p>For this study, we will use a comprehensive mixed methods approach. We will survey professionals working towards designing, developing and deploying AI systems for healthcare in India, across technology and healthcare organizations. We will also undertake in-depth interviews with experts who are part of key stakeholder groups.</p>
<p>We hereby invite researchers, technologists, healthcare professionals, and others working at the intersection of Artificial Intelligence and Healthcare to speak to us and help inform the study. You may contact Shweta Mohandas at <a href="mailto:shweta@cis-india.org">shweta@cis-india.org</a></p>
<hr />
<p>Research Team: Amrita Sengupta, Chetna V. M., Pallavi Bedi, Puthiya Purayil Sneha, Shweta Mohandas and Yatharth.</p>
<p>
For more details visit <a href='https://cis-india.org/raw/unpacking-algorithmic-infrastructures'>https://cis-india.org/raw/unpacking-algorithmic-infrastructures</a>
</p>
By Amrita Sengupta, Chetna V. M., Pallavi Bedi, Puthiya Purayil Sneha, Shweta Mohandas and Yatharth | Health Tech | RAW Blog | Research | Data Protection | Healthcare | Researchers at Work | Artificial Intelligence | Blog Entry | 2024-01-05

UNESCAP Google AI Meeting
https://cis-india.org/internet-governance/news/unescap-google-ai-meeting
<b>Arindrajit was a panelist at the event on AI in public service delivery hosted by UNESCAP Bangkok on August 29, 2018. The event was co-organized by the Economic and Social Commission for Asia and the Pacific (UNESCAP) and Google.</b>
<p style="text-align: justify; ">The discussion centered around the two questions (1) Is AI different from other technological advancements in the past and (2) Recommendations for policy-makers to enhance AI in Public Service Delivery.The other panelists were Dr. Urs Gasser (Berkman), Vidushi Marda ( Art.19), Malavika Jayaram (Digital Asia Hub) and Jake Lucchi ( Google) The panel was a platform to discuss some of our findings in our case studies on healthcare and agriculture, which we will receive comments on and will get published in November.<br /><br /></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/unescap-google-ai-meeting'>https://cis-india.org/internet-governance/news/unescap-google-ai-meeting</a>
</p>
By Admin | Internet Governance | Artificial Intelligence | Privacy | News Item | 2018-09-20

UNDP joins Tech Giants in Partnership on AI
https://cis-india.org/internet-governance/news/undp-august-1-2018-undp-joins-tech-giants-in-partnership-on-ai
<b>UNDP joins the Partnership on Artificial Intelligence (AI), a consortium of companies, academics, and NGOs working to ensure that AI is developed in a safe, ethical, and transparent manner. Founded in 2016 by the tech giants Amazon, DeepMind/Google, Facebook, IBM, and Microsoft, it has since been joined by industry leaders such as Accenture, Intel, the Oxford Internet Institute - University of Oxford, and eBay, as well as non-profit organizations such as UNICEF and Human Rights Watch, among many others.</b>
<p style="text-align: justify; ">This was published by <a class="external-link" href="http://www.undp.org/content/undp/en/home/news-centre/news/2018/undp-joins-tech-giants-in-partnership-on-ai.html">UNDP</a> on its website on August 1, 2018.</p>
<hr />
<p style="text-align: justify; ">Through the partnership, UNDP’s Innovation Facility will work with partners and communities to responsibly test and scale the use of AI to achieve the Sustainable Development Goals. By harnessing the power of data, we can inform risk, policy and program evaluation, we also can utilize robotics and Internet of Things (IoT) to collect data and reach the previously deemed unreachable - to leave no one behind.</p>
<p style="text-align: justify; ">UNDP’s AI portfolio is growing rapidly. Drones and remote sensing are used to improve data collection and inform decisions: in the Maldives for disaster preparedness, and in Uganda to engage refugee and host communities in jointly developing infrastructures. We partnered with IBM to automate <a href="http://www.undp.org/content/undp/en/home/blog/2018/ai-and-the-future-of-our-work.html">UNDP’s Rapid Integrated Assessment</a>, aligning national development plans and sectoral strategies with the 169 Sustainable Development Goals’ targets; and with the UNEP, UNDP has launched the <a href="http://www.undp.org/content/undp/en/home/news-centre/news/2018/un-biodiversity-lab-launched-to-revolutionize-biodiversity-plann.html">UN Biodiversity Lab</a>, powered by MapX. The spatial data platform will help countries support conservation efforts and accelerate delivery of the 2030 Agenda.</p>
<p style="text-align: justify; ">In line with UNDP’s Strategic Plan 2018-2021, innovation plays a central role in fulfilling the organization’s mission and achieving the Sustainable Development Goals. Benjamin Kumpf, UNDP’s Innovation Facility Lead states, “advances in robotics and AI have the potential to radically redefine human development pathways. The path to such redefinitions entails concrete AI experiments to increase the effectiveness of our work as well as norm-setting: we have to think beyond guidelines for ethical AI to designing accountability frameworks.”</p>
<p style="text-align: justify; ">The Partnership on AI aims to advance public understanding of AI, formulate best practices, and serve as an open platform for discussion and engagement about AI and its influences on people and society.</p>
<p style="text-align: justify; "><b>Full list of partners</b></p>
<p style="text-align: justify; ">Amazon, Apple, DeepMind, Facebook, Google, IBM, Microsoft, AAAI, ACLU, Accenture, Affectiva, AI Forum New Zealand, AI Now Institute, The Allen Institute for Artificial Intelligence (AI2), Amnesty International, Article 19, Association for Computing Machinery, Center for Democracy & Technology (CDT), Center for Human-Compatible Artificial Intelligence, Center for Information Technology Policy at Princeton University, Centre for Internet and Society, India (CIS), Leverhulme Centre for the Future of Intelligence (CFI), Cogitai, Data & Society Research Institute, Digital Asia Hub, Doteveryone, eBay, Element AI, Electronic Frontier Foundation (EFF), Fraunhofer IAO, Future of Humanity Institute, Future of Life Institute, The Future of Privacy Forum, The Hastings Center, Hong Kong University of Science and Technology Department of Electronic & Computer Engineering, Human Rights Watch, Intel, Markkula Center for Applied Ethics at Santa Clara University, McKinsey & Company, NVIDIA, Omidyar Network, OpenAI, Oxford Internet Institute - University of Oxford, Salesforce, SAP, Sony, Tufts University HRI Lab, UCL Engineering, UNDP, UNICEF, University of Washington Tech Policy Lab, Upturn, XPRIZE, Zalando</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/undp-august-1-2018-undp-joins-tech-giants-in-partnership-on-ai'>https://cis-india.org/internet-governance/news/undp-august-1-2018-undp-joins-tech-giants-in-partnership-on-ai</a>
</p>
No publisher | Admin | Internet Governance, Artificial Intelligence | 2018-08-13T15:51:48Z | News Item
Unbox Festival 2019: CIS organizes two Workshops
https://cis-india.org/internet-governance/blog/unbox-2019-festival
<b>Centre for Internet & Society organized two workshops at the Unbox Festival 2019, in Bangalore, on 15 and 17 February 2019. </b>
<h3 style="text-align: justify; ">'What is your Feminist Infrastructure Wishlist?'</h3>
<p style="text-align: justify; ">The first workshop, 'What is your Feminist Infrastructure Wishlist?', was conducted by P.P. Sneha and Saumyaa Naidu on 15 February 2019. Its objective was to explore what it means to have infrastructure that is feminist: how do we build spaces, networks, and systems that are equal, inclusive, diverse, and accessible? Participants also reflected on questions of network configurations, expertise, labour, and visibility. For reading material <a class="external-link" href="https://feministinternet.org/">click here</a>.</p>
<h3 style="text-align: justify; ">AI for Good</h3>
<p style="text-align: justify; ">Against the backdrop of AI for social good, the second workshop explored existing applications of artificial intelligence and how we interact and engage with this technology on a daily basis. A discussion led by Saumyaa Naidu and Shweta Mohandas invited participants to examine current narratives around AI and imagine how these may transform with time. Questions around how we can build an AI for the future became the starting point for tracing its implications for social impact, policy, gender, design, and privacy. For reading materials see <a class="external-link" href="https://ainowinstitute.org/AI_Now_2018_Report.pdf">AI Now Report 2018</a>, <a class="external-link" href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">Machine Bias</a>, and <a class="external-link" href="https://www.theatlantic.com/technology/archive/2016/03/why-do-so-many-digital-assistants-have-feminine-names/475884/">Why Do So Many Digital Assistants Have Feminine Names?</a></p>
<p style="text-align: justify; ">For info on Unbox Festival, <a class="external-link" href="http://unboxfestival.com/">click here</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/unbox-2019-festival'>https://cis-india.org/internet-governance/blog/unbox-2019-festival</a>
</p>
No publisher | saumyaa | Gender, Internet Governance, Artificial Intelligence | 2019-02-26T01:53:39Z | Blog Entry
Towards Algorithmic Transparency
https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency
<b>This policy brief examines the issue of transparency as a key ethical component in the development, deployment, and use of Artificial Intelligence.</b>
<p>This brief proposes a framework that seeks to overcome the challenges in preserving transparency when dealing with machine learning algorithms, and suggests solutions such as the incorporation of audits and ex ante approaches to building interpretable models right from the design stage. Read the full report <a href="https://cis-india.org/internet-governance/algorithmic-transparency-pdf" class="internal-link" title="Algorithmic Transparency PDF">here</a>.</p>
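To make the brief's "ex ante" suggestion concrete, here is a minimal illustrative sketch of an interpretable-by-design model: a shallow decision tree whose complete rule set can be exported for audit. This is not code from the CIS brief; the library choice (scikit-learn), the feature names, and the data are assumptions invented for the example.

```python
# Illustrative only: an interpretable-by-design classifier whose learned
# rules can be printed in full, supporting the kind of audit the brief suggests.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data; the feature names are hypothetical.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "age", "tenure", "usage"]

# A shallow tree stays small enough to be read end to end.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The exported rule set doubles as audit documentation.
print(export_text(model, feature_names=feature_names))
```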
<p>The Regulatory Practices Lab at CIS aims to produce regulatory policy suggestions focused on India, but with global application, in an agile and targeted manner, and to promote transparency around practices affecting digital rights. The Regulatory Practices Lab is supported by Google and Facebook.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency'>https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency</a>
</p>
No publisher | Radhika Radhakrishnan and Amber Sinha | Regulatory Practices Lab, Internet Governance, Featured, Algorithms, internet governance, Transparency, Artificial Intelligence | 2020-07-15T13:16:44Z | Blog Entry
The Wolf in Sheep's Clothing: Demanding your Data
https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data
<b>The increasing digitalization of the economy and ubiquity of the Internet, coupled with developments in Artificial Intelligence (AI) and Machine Learning (ML), have given rise to transformational business models across several sectors.</b>
<p>This piece was originally published in <a class="external-link" href="https://telecom.economictimes.indiatimes.com/tele-talk/the-wolf-in-sheep-s-clothing-demanding-your-data/4497">The Economic Times Telecom</a> on 8 September 2020.</p>
<p>The increasing digitalization of the economy and ubiquity of the <a href="https://telecom.economictimes.indiatimes.com/tag/internet">Internet</a>, coupled with developments in <a href="https://telecom.economictimes.indiatimes.com/tag/artificial+intelligence">Artificial Intelligence</a> (AI) and Machine Learning (ML), have given rise to transformational business models across several sectors. These developments have changed the very structure of existing sectors, with a few dominant firms straddling many of them. The position of these firms is entrenched by the large amounts of data they hold, their use of sophisticated algorithms that deliver highly targeted services and content, and their global nature.</p>
<p>Such data-based network businesses are generally multi-sided platforms subject to network effects and winner-takes-all phenomena, often making traditional competition regulation inappropriate. In addition, there has been concern that such companies hurt competition because they own large amounts of data collected globally, the very basis on which new services are predicated. Also, since users are reluctant to share their data across multiple platforms, new companies find it very challenging to emerge. Several of the large companies are of US origin. Regions and countries such as the EU, the UK, and India are concerned that while these companies benefit from the data of their citizens or their <a href="https://telecom.economictimes.indiatimes.com/tag/devices">devices</a>, SMEs and other companies in their own countries find it increasingly difficult to remain viable or achieve scale. With the objective of supporting domestic enterprises, including SMEs, Europe, the UK, and India are at different stages of data regulation initiatives.</p>
<p>In India, the <a href="https://telecom.economictimes.indiatimes.com/tag/personal+data+protection">Personal Data Protection</a> (PDP) Bill, 2019 deals with the framework for collecting, managing, and transferring the personal data of Indian citizens, including mandating the sharing of anonymized data of individuals and non-personal data for better targeting of services or policy making. In addition, the Report by the Committee of Experts (CoE) on Non-Personal Data (NPD) proposed a framework for regulating NPD. Since the NPD Report is the more recent development, this article analyses some aspects of it.</p>
<p>According to the CoE, non-personal data can be of two types: first, data or information that was never about an individual (e.g. weather data); second, data or information that once related to an individual (e.g. a mobile number) but has ceased to be identifiable due to the removal of certain identifiers through the process of 'anonymisation'. However, it may be possible to recover personal data from such anonymized data, so the distinction between personal and non-personal data is not clean. In any case, the PDP Bill 2019 deals with personal data. If the CoE felt that some aspect of personal data (including anonymized data) was not adequately dealt with, it should work to strengthen that Bill. The current approach of the CoE is bound to create confusion and overlapping jurisdiction. And since anonymized data is required to be shared, there are disincentives to anonymization, causing greater risk to individual privacy.</p>
<p>A new class of business based on a "<em>horizontal classification cutting across different industry sectors</em>" is defined. This refers to any business that derives "<em>new or additional economic value from data, by collecting, storing, processing, and managing data</em>" above a certain threshold of data collected/processed, to be defined by the regulatory authority outlined in the report. The CoE also recommends that "<em>Data Businesses will provide, within India, open access to meta-data and regulated access to the underlying data</em>" without any remuneration. Further, "<em>By looking at the meta-data, potential users may identify opportunities for combining data from multiple Data Businesses and/or governments to develop innovative solutions, products and services. Subsequently, data requests may be made for the detailed underlying data</em>".</p>
<p>With increasing digitalization, today almost every business is a data business. The problem with such categorization lies in the definition of thresholds. It is likely that even a small video-sharing app or an AR/VR app would store, collect, process, or transmit more data by volume than, say, a mid-sized bank. Further, with the increasing embedding of <a href="https://telecom.economictimes.indiatimes.com/tag/iot">IoT</a> in various aspects of our lives and businesses (smart manufacturing, logistics, banking, etc.), the amount of data captured by even small entities can be huge.</p>
<p>The private sector, driven by profitability, identifies innovative business models, risks capital, and finds unique ways of capturing and melding different data sets. Such innovation is necessary to sustain economic growth. The private sector would also like legal protection over these aspects of its businesses, including the unique IPR that may be embedded in its processing of data or its business processes. Mandating such onerous sharing requirements, as the CoE does, is going to kill any private initiative. Any regulatory regime must balance the need to provide a secure environment that protects incumbents' data against the goal of making data available to SMEs and other businesses.</p>
<p>Metadata provides insight into a company's databases and processes; it is a source of competitive advantage and is not without a context. Under the report, the proposed NPD Regulator would evaluate the purpose behind each demand for such disclosure. In practice, purposes are open to interpretation, and the structure of the appeal mechanism is going to stall any such sharing. Would such sharing mandates not interfere with existing Intellectual Property Rights, or with the freedom to contract? Any innovation could easily be made available to a competitor that front-ends itself with a start-up; mandating that such data be made available would not be fair. Further, how would the NPD Regulator even ensure that such data is used for the purpose (which the proposed regulator is supposed to evaluate) for which it is sought? In Europe, where such <a href="https://telecom.economictimes.indiatimes.com/tag/data+sharing">data sharing</a> mandates are being considered, the focus is on public data. For private entities, sharing is largely based on voluntary contributions. Compulsory sharing is mandated only in restricted situations where market failures are not addressed through competition law, and provided the legitimate interests of the data holder and existing legal provisions are taken into account.</p>
<p>Further, the compliance requirements for such Data Businesses are very onerous and make a mockery of the government's own "minimum government" framework. The CoE recommends requiring all Data Businesses, whether government, NGO, or private, "<em>to disclose data elements collected, stored and processed, and data-based services offered</em>". As if this were not enough, the CoE further recommends that "<em>Every Data Business must declare what they do and what data they collect, process and use, in which manner, and for what purposes (like disclosure of data elements collected, where data is stored, standards adopted to store and secure data, nature of data processing and data services provided). This is similar to disclosures required by pharma industry and in food products</em>". Such disclosures are necessary in those industries because their companies deal with critical aspects of human life. But are such requirements necessary for all activities and businesses? As long as organizations collect and process data legally, within sectoral regulation, why should such information have to be "reported"? Such bureaucratic processes and reporting requirements will only burden existing legitimate businesses and give rise to a thriving regulatory license raj.</p>
<p>Further questions arise: How is any compliance agency going to make sure that all the underlying metadata is made available in a timely manner? As companies respond to a dynamic environment, their analysis and analytical tools change, and so does the metadata. This inherent aspect of business raises the questions: At what point in time should companies make their metadata available? How will compliance be monitored?</p>
<p>Conclusion: The CoE needs to create an enabling and facilitating environment for data sharing. The incentives for different types of entities to participate and contribute must be recognized. Adequate provisions for the risks and liabilities arising out of data sharing need to be thought through. National initiatives on data sharing, even if digital, should not create an onerous reporting regime of the kind envisaged by the CoE.</p>
<p class="article-disclaimer"><em>DISCLAIMER: The views expressed are solely of the author and ETTelecom.com does not necessarily subscribe to it. ETTelecom.com shall not be responsible for any damage caused to any person/organisation directly or indirectly.</em></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data'>https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data</a>
</p>
No publisher | Rekha Jain | Internet Governance, Data Protection, Artificial Intelligence | 2020-11-10T17:44:13Z | Blog Entry
The Srikrishna Committee Data Protection Bill and Artificial Intelligence in India
https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india
<b>Artificial Intelligence in many ways is in direct conflict with traditional data protection principles and requirements including consent, purpose limitation, data minimization, retention and deletion, accountability, and transparency.</b>
<h3 style="text-align: justify; ">Privacy Considerations in AI</h3>
<p style="text-align: justify; ">Other related privacy concerns in the context of AI center around re-identification and de-anonymisation, discrimination, unfairness, inaccuracies, bias, opacity, profiling, misuse of data, and embedded power dynamics.<a href="#_ftn1" name="_ftnref1"><sup>[1]</sup></a></p>
<p style="text-align: justify; ">The need for large amounts of data to improve accuracy, the ability to process vast amounts of granular data, and the present relationship between explainability and the results of AI systems<a href="#_ftn2" name="_ftnref2"><sup><sup>[2]</sup></sup></a> have raised many concerns on both sides of the fence. On one hand, there is concern that heavy-handed or inappropriate regulation will stifle innovation: if developers can only use data for a pre-defined purpose, the prospects of AI are limited. On the other hand, individuals are concerned that privacy will be significantly undermined by AI systems that collect and process data in real time and at a personal level not previously possible. Chatbots, home assistants, wearable devices, robot caregivers, facial recognition technology, and the like can collect data from a person at an intimate level. At the same time, some have argued that AI can work towards protecting privacy by limiting the access that humans working at the respective companies have to personal data.<a href="#_ftn3" name="_ftnref3"><sup><sup>[3]</sup></sup></a></p>
<p style="text-align: justify; ">India is embracing AI. Two national roadmaps for AI were released in 2018, by the Ministry of Commerce and Industry and by NITI Aayog respectively. Both emphasized the importance of addressing privacy concerns in the context of AI and of ensuring that robust privacy legislation is enacted. In August 2018, the Srikrishna Committee released the draft Personal Data Protection Bill 2018 and an associated report that outlines and justifies a framework for privacy in India. As the development and use of AI in India continues to grow, it is important that India simultaneously moves forward with a privacy framework that addresses the privacy dimensions of AI.</p>
<p style="text-align: justify; ">In this article we attempt to analyse if and how the Srikrishna Committee's draft Bill and report have addressed AI, contrast this with developments in the EU and the passing of the GDPR, and identify solutions being explored for developing AI while upholding and safeguarding privacy.</p>
<h3 style="text-align: justify; ">The GDPR and Artificial Intelligence</h3>
<p style="text-align: justify; ">The General Data Protection Regulation became enforceable in May 2018 and establishes a framework for the processing of personal data of individuals within the European Union. The GDPR has been described by the IAPP as taking a 'risk-based' approach to data protection that pushes data controllers to engage in risk analysis and adopt 'risk measured responses'.<a href="#_ftn4" name="_ftnref4"><sup><sup>[4]</sup></sup></a> Though the GDPR does not explicitly address artificial intelligence, it has a number of provisions that address automated decision making and profiling, and a number of provisions that will impact companies using artificial intelligence in their business activities. These are outlined below:</p>
<ol style="text-align: justify; ">
<li><b>Data rights: </b>The GDPR grants individuals a number of data rights: the right to be informed, right of access, right to rectification, right to erasure, right to restrict processing, right to data portability, right to object, and rights related to automated decision making including profiling. The last of these seeks to address concerns arising out of automated decision making by giving the individual the right not to be subject to a decision based solely on automated decision making, including profiling, if the decision would produce legal effects or similarly significantly affect them. There are three exceptions to this right - if the automated decision making is: a. necessary for the performance of a contract, b. authorised by the Union or Member State, or c. based on explicit consent.<a href="#_ftn5" name="_ftnref5"><sup><sup>[5]</sup></sup></a></li>
<li><b>Transparency:</b> Under Article 14, data controllers must enable the right to opt out of automated decision making by notifying individuals of the existence of automated decision making, including profiling, and providing meaningful information about the logic involved as well as the potential consequences of such processing.<a href="#_ftn6" name="_ftnref6"><sup><sup>[6]</sup></sup></a> Importantly, this requirement has the potential to ensure that companies do not operate complete 'black box' algorithms within their business processes.</li>
<li><b>Fairness: </b>The principle of fairness under Article 5(1) will also apply to the processing of personal data by AI. The principle requires that personal data be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Recital 71 further clarifies that this includes implementing appropriate mathematical and statistical measures for profiling, ensuring that inaccuracies are corrected, and ensuring that processing does not produce discriminatory results.<a href="#_ftn7" name="_ftnref7"><sup><sup>[7]</sup></sup></a></li>
<li><b>Purpose Limitation:</b> The principle of purpose limitation (Article 5(1)(b)) requires that personal data be collected for specified, explicit, and legitimate purposes and not be further processed in a manner incompatible with those purposes. Processing for archiving purposes in the public interest, or for scientific or historical research purposes or statistical purposes, is not considered incompatible with the initial purposes. It has been noted that it is unclear whether research carried out through artificial intelligence would fall under this exception, as the GDPR does not define 'scientific purposes'.<a href="#_ftn8" name="_ftnref8"><sup><sup>[8]</sup></sup></a></li>
<li><b>Privacy by Design and Default:</b> Article 25 requires all data controllers to implement technical and organizational measures to meet the requirements of the regulation. This could include techniques like pseudonymisation. Data controllers are also required to implement appropriate technical and organizational measures to ensure that, by default, only personal data necessary for a specific purpose are processed.<a href="#_ftn9" name="_ftnref9"><sup><sup>[9]</sup></sup></a></li>
<li><b>Data Protection Impact Assessments:</b> Article 35 requires data controllers to undertake impact assessments if they are undertaking processing that is likely to result in a high risk to individuals: for example, if the data controller undertakes systematic and extensive profiling, processes special categories of data or criminal offence data on a large scale, or systematically monitors publicly accessible places on a large scale. In implementation, some jurisdictions like the UK require impact assessments under additional conditions, including if the data controller: uses new technologies, uses profiling or special category data to decide on access to services, profiles individuals on a large scale, processes biometric data, processes genetic data, matches data or combines datasets from different sources, collects personal data from a source other than the individual without providing them with a privacy notice, tracks individuals' location or behaviour, profiles children or targets marketing or online services at them, or processes data that might endanger the individual's physical health or safety in the event of a security breach.<a href="#_ftn10" name="_ftnref10"><sup><sup>[10]</sup></sup></a></li>
<li><b>Security:</b> Article 32 requires data controllers to ensure a level of security appropriate to the risk, including employing methods like encryption and pseudonymisation.</li>
</ol>
<h3 style="text-align: justify; ">Srikrishna Committee Bill and AI</h3>
<p style="text-align: justify; ">The draft Data Protection Bill and associated report by the Srikrishna Committee were published in August 2018 and recommend a privacy framework for India. The Bill contains a number of provisions that will directly impact data fiduciaries using AI and that try to account for the unintended consequences of emerging technologies like AI. These include:</p>
<ol style="text-align: justify; ">
<li><b>Definition of Harm:</b> The Bill defines harm as including bodily or mental injury, loss, distortion or theft of identity, financial loss or loss of property, loss of reputation or humiliation, loss of employment, any discriminatory treatment, any subjection to blackmail or extortion, any denial or withdrawal of a service, benefit or good resulting from an evaluative decision about the data principal, any restriction placed or suffered directly or indirectly on speech, movement or any other action arising out of a fear of being observed or surveilled, any observation or surveillance that is not reasonably expected by the data principal. The Bill also allows for categories of significant harm to be further defined by the data protection authority.</li>
</ol>
<p style="text-align: justify; ">Many of the above are harms that have been associated with artificial intelligence - specifically loss of employment, discriminatory treatment, and denial of service. Enabling the data protection authority to further define categories of significant harm could allow unexpected harms arising from the use of AI to come under the ambit of the Bill.</p>
<ul style="text-align: justify; ">
<li><b>Data Rights:</b> Like the GDPR, the Bill creates a set of data rights for the individual, including the right to confirmation and access, correction, data portability, and the right to be forgotten. At the same time, the Bill is intentionally silent on the rights and obligations that have been incorporated into the GDPR to address automated decision making: the right to object to processing,<a href="#_ftn11" name="_ftnref11"><sup><sup>[11]</sup></sup></a> the right to opt out of automated decision making,<a href="#_ftn12" name="_ftnref12"><sup><sup>[12]</sup></sup></a> and the obligation on the data controller to inform the individual about the use of automated decision making and to provide basic information regarding its logic and impact.<a href="#_ftn13" name="_ftnref13"><sup><sup>[13]</sup></sup></a> As justification, the Committee noted the following in its report: the right to restrict processing may be unnecessary in India, as it provides only interim remedies around issues such as inaccuracy of data, and the same can be achieved by a data principal approaching the DPA or the courts for a stay on processing, or by simply withdrawing consent. The objective of protecting against discrimination, bias, and opaque decisions - which the right to object to automated processing and to receive information about that processing seeks to fulfill - would, in the Indian context, be better achieved through an accountability framework requiring data fiduciaries that make evaluative decisions through automated means to set up processes that 'weed out' discrimination. At the same time, if discrimination has taken place, individuals can seek remedy through the courts.</li>
</ul>
<p style="text-align: justify; ">By taking this approach, the Bill creates a framework to address harms arising out of AI, but does not empower the individual to decide how their data is processed and remains silent on the issue of ‘black box’ algorithms.</p>
<ul style="text-align: justify; ">
<li><b>Data Quality</b>: Requires data fiduciaries to ensure that personal data that is processed is complete, accurate, not misleading, and updated with respect to the purposes for which it is processed. When taking steps to comply, data fiduciaries must consider whether the personal data is likely to be used to make a decision about the data principal, whether it is likely to be disclosed to other individuals, and whether the personal data is kept in a form that distinguishes personal data based on facts from personal data based on opinions or personal assessments.<a href="#_ftn14" name="_ftnref14"><sup><sup>[14]</sup></sup></a></li>
</ul>
<p style="text-align: justify; ">This principle, while not mandating that data fiduciaries take into account considerations such as biases in datasets, could potentially be interpreted by the data protection authority to include within its scope measures towards ensuring that data does not contain or result in bias.</p>
<ul style="text-align: justify; ">
<li><b>Principle of Privacy by Design:</b> Requires significant data fiduciaries to have in place a number of policies and measures around several aspects of privacy. These include: (a) measures to ensure that managerial, organizational, and business practices and technical systems are designed to anticipate, identify, and avoid harm to the data principal; (b) that the obligations mentioned in Chapter II are embedded in organisational and business practices; (c) that technology used in the processing of personal data is in accordance with commercially accepted or certified standards; (d) that the legitimate interests of business, including any innovation, are achieved without compromising privacy interests; (e) that privacy is protected throughout processing, from the point of collection to the deletion of personal data; (f) that the processing of personal data is carried out in a transparent manner; and (g) that the interest of the data principal is accounted for at every stage of the processing of personal data.</li>
</ul>
<p style="text-align: justify; ">A number of these (a, d, e, and g) require that the interest of the data principal be accounted for throughout the processing of personal data. This will be significant for systems driven by artificial intelligence, as a number of the harms that have arisen from the use of AI - discrimination, denial of service, loss of employment - have been brought under the definition of harm within the Bill. Placing the interest of the data principal first is also important in protecting against unintended consequences or harms that may arise from AI.<a href="#_ftn15" name="_ftnref15"><sup><sup>[15]</sup></sup></a> If enacted, it will be important to see what policies and measures emerge in the context of AI to comply with this principle, and what commercially accepted or certified standards companies rely on to comply with (c).</p>
<ul style="text-align: justify; ">
<li><b>Data Protection Impact Assessment:</b> Requires data fiduciaries to undertake a data protection impact assessment when implementing new technologies, undertaking large-scale profiling, or using sensitive personal data. Such assessments need to include a detailed description of the proposed processing operation, the purpose of the processing and the nature of the personal data being processed, an assessment of the potential harm that may be caused to the data principals whose personal data is proposed to be processed, and measures for managing, minimising, mitigating, or removing such risk of harm. If the Authority finds that the processing is likely to cause harm to the data principals, it may direct the data fiduciary to cease the processing or subject it to conditions. This requirement applies to all significant data fiduciaries and to other data fiduciaries as required by the DPA.<a href="#_ftn16" name="_ftnref16"><sup><sup>[16]</sup></sup></a></li>
</ul>
<p style="text-align: justify; ">This principle will apply to companies implementing AI systems. For AI systems, it will be important to see how much information the DPA will require under the obligation on data fiduciaries to provide detailed descriptions of the proposed processing operation and the purpose of processing.</p>
<ul style="text-align: justify; ">
<li><b>Classification of data fiduciaries as significant data fiduciaries</b>: The Authority can notify certain categories of data fiduciaries as significant data fiduciaries based on: the volume of personal data processed; the sensitivity of personal data processed; the turnover of the data fiduciary; the risk of harm resulting from any processing undertaken by the fiduciary; the use of new technologies for processing; and any other factor relevant to causing harm to any data principal. A data fiduciary that falls under the ambit of any of these conditions is required to register with the Authority. All significant data fiduciaries must undertake data protection impact assessments, maintain records as per the Bill, undergo data audits, and have in place a data protection officer.</li>
</ul>
<p style="text-align: justify; ">As per this provision, companies deploying artificial intelligence would come under the definition of a significant data fiduciary and be subject to the principles of privacy by design etc. articulated in the chapter, unless the data fiduciary comes under the definition of 'small entity' found in section 48.<a href="#_ftn17" name="_ftnref17"><sup><sup>[17]</sup></sup></a></p>
<ul style="text-align: justify; ">
<li><b>Restrictions on cross border transfer of personal data: </b>Requires all data fiduciaries to store a copy of personal data on a server or data centre located in India, and notified categories of critical personal data to be processed only on servers located in India.</li>
</ul>
<p style="text-align: justify; ">It is interesting to note that in the context of cross-border sharing of data, the Bill creates a new category of data that can be further defined beyond personal and sensitive personal data. For companies implementing artificial intelligence, this provision may prove cumbersome to comply with, as many utilize cloud storage and facilities located outside of India for the processing of large amounts of data.<a href="#_ftn18" name="_ftnref18"><sup><sup>[18]</sup></sup></a></p>
<ul style="text-align: justify; ">
<li><b>Powers and functions of the Authority</b>: The Bill lays down a number of functions of the Authority one being to monitor technological developments and commercial practices that may affect protection of personal data.</li>
</ul>
<p style="text-align: justify; ">Presumably, this will include monitoring technological developments in the field of Artificial Intelligence.<a href="#_ftn19" name="_ftnref19"><sup><sup>[19]</sup></sup></a></p>
<ul style="text-align: justify; ">
<li><b>Fair and reasonable processing: </b>Requires that any person processing personal data owes a duty to the data principal to process such personal data in a fair and reasonable manner that respects the privacy of the data principal. In its report, the Srikrishna Committee explains that the fair-and-reasonable principle is meant to address: 1. power asymmetries between data principals and data fiduciaries, recognizing that data fiduciaries have a responsibility to act in the best interest of the data principal; 2. situations where processing may be legal but not necessarily fair or in the best interest of the data principal; and 3. developing trust between the data principal and the data fiduciary.<a href="#_ftn20" name="_ftnref20"><sup><sup>[20]</sup></sup></a></li>
</ul>
<p style="text-align: justify; ">This is in contrast to the GDPR which requires processing to simultaneously meet the three conditions of fairness, lawfulness, and transparency.</p>
<ul style="text-align: justify; ">
<li><b>Purpose Limitation: </b>Personal data can only be processed for the purposes specified or any other purpose that the data principal would reasonably expect.</li>
</ul>
<p style="text-align: justify; ">As a note, the Srikrishna Committee Bill does not include ‘scientific purposes’ as an exception to the principle of purpose limitation as found in the GDPR,<a href="#_ftn21" name="_ftnref21"><sup><sup>[21]</sup></sup></a> and instead creates an exception for research, archiving, or statistical purposes.<a href="#_ftn22" name="_ftnref22"><sup><sup>[22]</sup></sup></a> The DPA has the responsibility of developing codes defining research purposes under the act.<a href="#_ftn23" name="_ftnref23"><sup><sup>[23]</sup></sup></a></p>
<ol style="text-align: justify; ">
<li><b>Security Safeguards:</b> Every data fiduciary must implement appropriate security safeguards including the use of methods such as de-identification and encryption, steps to protect the integrity of personal data, and steps necessary to prevent misuse, unauthorised access to, modification, and disclosure or destruction of personal data.<a href="#_ftn24" name="_ftnref24"><sup><sup>[24]</sup></sup></a></li>
</ol>
<p style="text-align: justify; ">Unlike the GDPR, which explicitly refers to the technique of pseudonymization, the Srikrishna Bill uses the term de-identification. The Srikrishna Report clarifies that this includes techniques like pseudonymization and masking, and further clarifies that, because of the risk of re-identification, de-identified personal data should still receive the same level of protection as personal data. The Bill further gives the DPA the authority to define appropriate levels of anonymization.<a href="#_ftn25" name="_ftnref25"><sup><sup>[25]</sup></sup></a></p>
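As a minimal sketch of one de-identification technique of the kind discussed here, the snippet below pseudonymises an identifier with a keyed hash so that the same input always maps to the same token without exposing the raw value. The key handling and the sample field are illustrative assumptions, not anything prescribed by the Bill; and, exactly as the Srikrishna Report warns, such tokens remain re-linkable by anyone holding the key, so the output still deserves personal-data-level protection.

```python
# Pseudonymisation sketch (illustrative; not prescribed by the Bill).
# A keyed hash (HMAC) maps an identifier to a stable token. Whoever holds
# the key can re-link tokens to identifiers, which is why de-identified
# data should still be protected like personal data.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # assumed key management

def pseudonymise(identifier: str) -> str:
    """Return a deterministic pseudonym for an identifier, e.g. a mobile number."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

print(pseudonymise("+91-9800000000"))  # same input, same token, raw value hidden
```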
<h3 style="text-align: justify; ">Technical perspectives of Privacy and AI</h3>
<p style="text-align: justify; ">There is an emerging body of work looking at solutions to the dilemma of maintaining privacy while employing artificial intelligence, and at ways in which artificial intelligence can support and strengthen privacy. For example, there are AI-driven platforms that leverage the technology to help businesses meet regulatory compliance with data protection laws,<a href="#_ftn26" name="_ftnref26"><sup><sup>[26]</sup></sup></a> as well as research into AI privacy-enhancing technologies.<a href="#_ftn27" name="_ftnref27"><sup><sup>[27]</sup></sup></a> Standards-setting bodies like the IEEE have undertaken work on the ethical considerations in the collection and use of personal data when designing, developing, and/or deploying AI through the standard 'Ethically Aligned Design'.<a href="#_ftn28" name="_ftnref28"><sup><sup>[28]</sup></sup></a> In the report Artificial Intelligence and Privacy, Datatilsynet, the Norwegian Data Protection Authority,<a href="#_ftn29" name="_ftnref29"><sup><sup>[29]</sup></sup></a> breaks such methods into three categories:</p>
<ol style="text-align: justify; ">
<li>Techniques for reducing the need for large amounts of training data: Such techniques can include</li>
<ol>
<li><b>Generative adversarial networks (GANs):</b> GANs are used to create synthetic data and can address the need for large volumes of labelled data without relying on real data containing personal data. GANs could potentially be useful from a research and development perspective in sectors like healthcare, where most data would qualify as sensitive personal data.</li>
<li><b>Federated Learning:</b> Federated learning allows models to be trained and improved on data from a large pool of users without directly collecting user data. A centralized model is run on each client unit and improved on local data; the changes from these improvements are shared back with the centralized server, and an average of the changes from multiple client units becomes the basis for improving the centralized model (a minimal sketch follows after this list).</li>
<li><b>Matrix Capsules</b>: Proposed by Google researcher Geoff Hinton, Matrix Capsules improve the accuracy of existing neural networks while requiring less data.<a href="#_ftn30" name="_ftnref30"><sup><sup>[30]</sup></sup></a></li>
</ol>
<li>Techniques that uphold data protection without reducing the basic data set</li>
<ol>
<li><b>Differential Privacy</b>: Differential privacy intentionally adds 'noise' to data when it is accessed. This allows aggregate results to be derived from personal data without revealing identifying information (see the sketch after this list).</li>
<li><b>Homomorphic Encryption:</b> Homomorphic encryption allows for the processing of data while it is still encrypted. This addresses the need to access and use large amounts of personal data for multiple purposes.</li>
<li><b>Transfer Learning</b>: Instead of building a new model, transfer learning builds upon existing models, applying them to new, related purposes or tasks. This has the potential to reduce the amount of training data needed.</li>
<li><b>RAIRD</b>: Developed by Statistics Norway and the Norwegian Centre for Research Data, RAIRD is a national research infrastructure that allows for access to large amounts of statistical data for research while managing statistical confidentiality. This is achieved by allowing researchers access to metadata. The metadata is used to build analyses which are then run against detailed data without giving access to actual data.<a href="#_ftn31" name="_ftnref31"><sup><sup>[31]</sup></sup></a></li>
</ol>
<li>Techniques to move beyond opaque algorithms</li>
<ol>
<li><b>Explainable AI (XAI): </b>DARPA, in collaboration with Oregon State University, is researching how to create explainable models and explanation interfaces, while ensuring a high level of learning performance, in order to enable individuals to interact with, trust, and manage artificial intelligence.<a href="#_ftn32" name="_ftnref32"><sup><sup>[32]</sup></sup></a> DARPA identifies a number of entities working on different models and interfaces for analytics and autonomy AI.<a href="#_ftn33" name="_ftnref33"><sup><sup>[33]</sup></sup></a></li>
<li><b>Local Interpretable Model Agnostic Explanations</b>: Developed to enable trust between AI models and humans by generating explainers to highlight key aspects that were important to the model and its decision - thus providing insight into the rationale behind a model.<a href="#_ftn34" name="_ftnref34"><sup><sup>[34]</sup></sup></a></li>
</ol> </ol>
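To make the differential privacy entry above concrete, here is a minimal sketch of the Laplace mechanism for a counting query: the true answer is computed, then noise calibrated to the query's sensitivity and a privacy parameter epsilon is added before release. The data and the epsilon value are invented for illustration.

```python
# Minimal differential-privacy sketch: the Laplace mechanism for a count.
# The data and epsilon are invented; a count query has sensitivity 1.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(values, threshold, epsilon):
    """Release a noisy count of values above a threshold."""
    true_count = sum(v > threshold for v in values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)  # scale = sensitivity / epsilon
    return true_count + noise

ages = [23, 37, 41, 29, 52, 61, 34]               # stand-in personal data
print(dp_count(ages, threshold=40, epsilon=0.5))  # answer released with noise
```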
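Similarly, the federated learning entry can be sketched as one round of federated averaging on a toy linear model: each client improves the central weights on its own data, and only the resulting weights, never the raw data, are averaged back at the server. This is a deliberately simplified sketch with invented data, not any production protocol.

```python
# Toy sketch of one round of federated averaging. Raw data stays on the
# clients; only locally updated model weights travel to the server.
import numpy as np

rng = np.random.default_rng(1)
global_w = np.zeros(3)  # central linear-model weights

def local_update(w, X, y, lr=0.1, steps=10):
    """A few steps of local gradient descent on a least-squares loss."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three clients, each holding an invented local dataset.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

# Each client trains locally; the server averages the returned weights.
updates = [local_update(global_w.copy(), X, y) for X, y in clients]
global_w = np.mean(updates, axis=0)
print(global_w)
```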
<h3 style="text-align: justify; ">Public Sector use of AI and Privacy</h3>
<p style="text-align: justify; ">The role of AI in public sector decision making has been gradually growing globally, across sectors such as law enforcement, education, transportation, judicial decision making, and healthcare. In India too, the use of automated processing - in electronic governance under the Digital India mission, in domestic law enforcement agencies' monitoring of social media content, and in educational schemes - is being discussed and gradually implemented. Much like the potential applications of AI across sub-sectors, the regulatory issues are also diverse.</p>
<p style="text-align: justify; ">Aside from the accountability framework discussed in the Srikrishna Committee report, the Puttaswamy judgment also provides a basis for the governance of AI with respect to privacy concerns, in limited contexts. The sources of the right to privacy articulated in the Puttaswamy judgments included 'personal liberty' under Article 21 of the Constitution. To fully appreciate how constitutional principles could apply to automated processing in India, we need to look closely at the origins of privacy under liberty. In the famous case of <i>AK Gopalan</i> there is a protracted discussion on the contents of the rights under Article 21, and even the majority opinions were divided. While Sastri J. and Mukherjea J. took the restrictive view, limiting the protections to bodily restraint and detention, Kania J. and Das J. took a broader view, extending them to include the right to sleep, play, and so on. Through <i>RC Cooper</i><a href="#_ftn35" name="_ftnref35"><sup><sup>[35]</sup></sup></a> and <i>Maneka</i><a href="#_ftn36" name="_ftnref36"><sup><sup>[36]</sup></sup></a>, the Supreme Court took steps to reverse the majority opinion in <i>Gopalan</i>, and it was established that the freedoms and rights in Part III could be addressed by more than one provision. The expansion of 'personal liberty' began in <i>Kharak Singh</i>, where unjustified interference with a person's right to live in his house was held to be violative of Article 21. The reasoning in <i>Kharak Singh</i> draws heavily from <i>Munn</i> v. <i>Illinois</i><a href="#_ftn37" name="_ftnref37"><sup><sup>[37]</sup></sup></a>, which held life to be "more than mere animal existence." Curiously, after taking this position - extrapolating to 'personal liberty' the same wide interpretation accorded to 'life' - <i>Kharak Singh</i> fails to recognise a fundamental right to privacy (analogous to the Fourth Amendment protection in the US) under Article 21. <i>Maneka</i>, which evolved the test for unenumerated rights within Part III, holds that the claimed right must be an integral part of, or of the same nature as, the named right: the claimed right must be 'in reality and substance nothing but an instance of the exercise of the named fundamental right'. The clear reading of privacy into 'personal liberty' in this judgment is effectively a correction of the inherent inconsistencies in the positions taken by the majority in <i>Kharak Singh</i>.</p>
<p style="text-align: justify; ">The other significant change in constitutional interpretation that occurred in <i>Maneka</i> was with respect to the phrase 'procedure established by law' in Article 21. In <i>Gopalan</i>, the majority held that the phrase does not import procedural due process or natural justice; once a 'procedure' was 'established by law', Article 21 could not be said to have been infringed. This position was entirely reversed in <i>Maneka</i>, whose ratio was that 'procedure established by law' must be fair, just, and reasonable, and cannot be arbitrary and fanciful. Therefore, any infringement of the right to privacy must be through a law which follows the principles of natural justice, and is not arbitrary or unfair. It follows that any instance of automated processing for public functioning, by state actors or others, must meet this standard of 'fair, just and reasonable'.</p>
<p style="text-align: justify; ">While there is a lot of focus internationally on what ethical AI must be, it is important that when we consider the use of AI by the state, we pay heed to the existing constitutional principles against which such use must be evaluated. These principles, however, extend only to limited circumstances, for the protections under Article 21 are not horizontal in nature but applicable only against the state. Whether a party is the state or not is a question that has been considered several times by the Supreme Court and must be determined by functional tests. In our submission to the Justice Srikrishna Committee, we clearly recommended that where automated decision making is used for the discharge of public functions, the data protection law must state that such actions are subject to constitutional standards: they must be 'just, fair and reasonable' and satisfy the tests for both procedural and substantive due process. To a limited extent, the committee seems to have picked up the standards of 'fair' and 'reasonable' and made them applicable to all forms of processing, whether public or private. It is as yet unclear whether fairness and reasonableness as inserted in the Bill would draw from the constitutional standard under Article 21. The report refers to the twin principles of acting in a manner that upholds the best interest of the privacy of the individual, and processing within the reasonable expectations of the individual, which do not seem to cover the fullest essence of the legal standard under Article 21.</p>
<h3 style="text-align: justify; ">Conclusion</h3>
<p style="text-align: justify; ">The Srikrishna Committee Bill attempts to create an accountability framework for the use of emerging technologies, including AI, that is focused on placing the responsibility on companies to prevent harm. Though not as robust as those found in the GDPR, protections have been enabled through requirements such as fair and reasonable processing, ensuring data quality, and implementing principles of privacy by design. At the same time, the Srikrishna Bill does not include provisions that can begin to address the consumer-facing 'black box' of AI by ensuring that individuals have information about the potential impact of decisions taken by automated means. In contrast, the GDPR has already taken important steps to tackle this by requiring companies to explain the logic and potential impact of decisions taken by automated means.</p>
<p style="text-align: justify; ">Most importantly, the Bill gives the Data Protection Authority the necessary tools to hold companies accountable for the use of AI through the requirement of data protection audits. If enacted, it remains to be seen how these audits and the principle of privacy by design will be implemented and enforced in the context of companies using AI. Though the Bill creates a Data Protection Authority consisting of members with significant experience in data protection, information technology, data management, data science, cyber and internet laws, and related subjects, these requirements could be further strengthened by including someone with a background in ethics and human rights.</p>
<p style="text-align: justify; ">One of the responsibilities of the DPA under the Srikrishna Bill will be to monitor technological developments and commercial practices that may affect the protection of personal data, and to promote measures and undertake research for innovation in the field. If enacted, we hope that AI, and solutions for enhancing privacy in the context of AI such as those described above, will be among the DPA's focus areas. It will also be important to see how the DPA develops impact assessments related to AI and what tools associated with the principle of privacy by design emerge to address AI.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; "><a href="#_ftnref1" name="_ftn1"><sup><sup>[1]</sup></sup></a> https://privacyinternational.org/topics/artificial-intelligence</p>
<p style="text-align: justify; "><a href="#_ftnref2" name="_ftn2"><sup><sup>[2]</sup></sup></a> https://www.wired.com/story/our-machines-now-have-knowledge-well-never-understand/</p>
<p style="text-align: justify; "><a href="#_ftnref3" name="_ftn3"><sup><sup>[3]</sup></sup></a> https://iapp.org/news/a/ai-offers-opportunity-to-increase-privacy-for-users/</p>
<p style="text-align: justify; "><a href="#_ftnref4" name="_ftn4"><sup><sup>[4]</sup></sup></a> https://iapp.org/media/pdf/resource_center/GDPR_Study_Maldoff.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref5" name="_ftn5"><sup><sup>[5]</sup></sup></a> https://gdpr-info.eu/art-22-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref6" name="_ftn6"><sup><sup>[6]</sup></sup></a> https://gdpr-info.eu/art-14-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref7" name="_ftn7"><sup><sup>[7]</sup></sup></a> https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref8" name="_ftn8"><sup><sup>[8]</sup></sup></a> https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref9" name="_ftn9"><sup><sup>[9]</sup></sup></a> https://gdpr-info.eu/art-25-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref10" name="_ftn10"><sup><sup>[10]</sup></sup></a> https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/</p>
<p style="text-align: justify; "><a href="#_ftnref11" name="_ftn11"><sup><sup>[11]</sup></sup></a> https://gdpr-info.eu/art-21-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref12" name="_ftn12"><sup><sup>[12]</sup></sup></a> https://gdpr-info.eu/art-22-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref13" name="_ftn13"><sup><sup>[13]</sup></sup></a> https://gdpr-info.eu/art-14-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref14" name="_ftn14"><sup><sup>[14]</sup></sup></a>Draft Data Protection Bill 2018 - Chapter II section 9</p>
<p style="text-align: justify; "><a href="#_ftnref15" name="_ftn15"><sup><sup>[15]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VII section 29</p>
<p style="text-align: justify; "><a href="#_ftnref16" name="_ftn16"><sup><sup>[16]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VII section 33</p>
<p style="text-align: justify; "><a href="#_ftnref17" name="_ftn17"><sup><sup>[17]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VII section 38</p>
<p style="text-align: justify; "><a href="#_ftnref18" name="_ftn18"><sup><sup>[18]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VIII section 40</p>
<p style="text-align: justify; "><a href="#_ftnref19" name="_ftn19"><sup><sup>[19]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter X section 60</p>
<p style="text-align: justify; "><a href="#_ftnref20" name="_ftn20"><sup><sup>[20]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter II section 4</p>
<p style="text-align: justify; "><a href="#_ftnref21" name="_ftn21"><sup><sup>[21]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter II section 5</p>
<p style="text-align: justify; "><a href="#_ftnref22" name="_ftn22"><sup><sup>[22]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter IX Section 45</p>
<p style="text-align: justify; "><a href="#_ftnref23" name="_ftn23"><sup><sup>[23]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter XIV section 97</p>
<p style="text-align: justify; "><a href="#_ftnref24" name="_ftn24"><sup><sup>[24]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VII section 31</p>
<p style="text-align: justify; "><a href="#_ftnref25" name="_ftn25"><sup><sup>[25]</sup></sup></a> Srikrishna Committee Report on Data Protection pg. 36 and 37. Available at: http://www.prsindia.org/uploads/media/Data%20Protection/Committee%20Report%20on%20Draft%20Personal%20Data%20Protection%20Bill,%202018.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref26" name="_ftn26"><sup><sup>[26]</sup></sup></a> https://www.ciosummits.com/Online_Assets_DocAuthority_Whitepaper_-_Guide_to_Intelligent_GDPR_Compliance.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref27" name="_ftn27"><sup><sup>[27]</sup></sup></a> https://jolt.law.harvard.edu/assets/articlePDFs/v31/31HarvJLTech217.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref28" name="_ftn28"><sup><sup>[28]</sup></sup></a> https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_personal_data_v2.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref29" name="_ftn29"><sup><sup>[29]</sup></sup></a> https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref30" name="_ftn30"><sup><sup>[30]</sup></sup></a> https://www.artificial-intelligence.blog/news/capsule-networks</p>
<p style="text-align: justify; "><a href="#_ftnref31" name="_ftn31"><sup><sup>[31]</sup></sup></a> http://raird.no/about/factsheet.html</p>
<p style="text-align: justify; "><a href="#_ftnref32" name="_ftn32"><sup><sup>[32]</sup></sup></a> https://www.darpa.mil/attachments/XAIProgramUpdate.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref33" name="_ftn33"><sup><sup>[33]</sup></sup></a> https://www.darpa.mil/attachments/XAIProgramUpdate.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref34" name="_ftn34"><sup><sup>[34]</sup></sup></a> https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime</p>
<p style="text-align: justify; "><a href="#_ftnref35" name="_ftn35"><sup><sup>[35]</sup></sup></a> <i>R C Cooper</i> v. <i>Union of India</i>, 1970 SCR (3) 530.</p>
<p style="text-align: justify; "><a href="#_ftnref36" name="_ftn36"><sup><sup>[36]</sup></sup></a> <i>Maneka Gandhi</i> v. <i>Union of India</i>, 1978 SCR (2) 621.</p>
<p style="text-align: justify; "><a href="#_ftnref37" name="_ftn37"><sup><sup>[37]</sup></sup></a> 94 US 113 (1877).</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india'>https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india</a>
</p>
No publisher · Amber Sinha and Elonnai Hickok · Internet Governance · Artificial Intelligence · Privacy · 2018-09-03T13:29:12Z · Blog Entry
The rise of AI in Indian healthcare industry: An innovative asset to the rescue
https://cis-india.org/internet-governance/news/media-india-group-june-27-2018-binita-punwani-rise-of-ai-in-indian-healthcare-industry
<b>The use of Artificial Intelligence (AI) is rapidly increasing with the growth of start-ups and large Information and Communications Technology (ICT) companies that offer AI solutions to healthcare challenges in India.</b>
<p class="clearfix" style="text-align: justify; ">The blog post was published by <a class="external-link" href="https://mediaindia.eu/digital/the-rise-of-ai-in-indian-healthcare-industry/">Media India Group</a> on June 27, 2018. CIS research was quoted.</p>
<hr />
<p class="clearfix" style="text-align: justify; ">There is an uneven ratio of skilled doctors to patients in our country. According to the Indian Journal of Public Health (2017 edition), India had 4.8 practicing doctors per 10,000 population. It is expected to grow to 6.9 per 10,000 people by the year 2030, but the minimum doctor to patient ratio recommended by the World Health Organisation (WHO) is 1:1000. AI is an effective measure to tackle challenges like the uneven ratio, making doctors more skilled at their jobs, catering to rural areas for a high-quality healthcare, training doctors and nurses to tackle complex procedures.</p>
<p class="clearfix" style="text-align: justify; "><b>How does AI in healthcare function?</b></p>
<p class="clearfix" style="text-align: justify; ">AI in the healthcare sector is a range of technologies that enable machines to sense, comprehend, act and learn so that they can carry out administrative and healthcare functions, be used in research and for training purposes. Some of the technologies included in the healthcare sector are natural language processing, intelligent agents, computer vision, machine learning, chatbots, voice recognition etc. These technologies can be adopted at varying levels across the healthcare ecosystem. Machine learning can be used to merge an individual’s omic (genomic, proteomic, metabolic) data with other data sources to predict the probability of developing a disease, which can then be addressed through timely intercessions such as preventative therapy.</p>
<p class="clearfix" style="text-align: justify; "><b>AI in the healthcare sector in India</b></p>
<p class="clearfix" style="text-align: justify; ">AI in the healthcare sector in India is potentially developing. According to a report by the CIS India published earlier this year, AI could help add USD 957 billion to the Indian economy by 2035. Of the USD 5.5 billion that was raised by global digital healthcare companies in July-September 2017 quarter, at least 16 Indian Healthcare IT companies received funding, the report said. State governments are also providing support to AI start-ups.</p>
<p class="clearfix" style="text-align: justify; ">AI is capable of solving various healthcare challenges in India. The technological innovation is proving to be beneficial in diagnosis procedure, monitoring of chronic conditions, assisting in robotic surgery, drug discovery etc. Among several companies that are exploring various uses of AI in the healthcare segment, Microsoft is taking a major initiative along with Apollo and other hospitals to expand its use in several segments like cardiology, eye-care, diseases like Tuberculosis, HIV etc.</p>
<p class="clearfix" style="text-align: justify; ">Healthcare start-ups are majorly engaging themselves in the use of Artificial Intelligence.</p>
<p class="clearfix" style="text-align: justify; ">A list of six healthcare start-ups that are using Artificial Intelligence in India:</p>
<ol style="text-align: justify; ">
<li>Niramai, a Bengaluru-based start-up founded in the year 2016, is using AI for pain-free breast cancer screening.</li>
<li>MUrgency, a Mumbai-based healthcare mobile application, helps connect people in need of emergency medical responses with qualified medical, safety, rescue and assistance professionals.</li>
<li>Advancells, a Noida-based start-up, provides stem cell therapy (also known as regenerative therapy), which has large potential in the field of organ transplantation.</li>
<li>Portea, a Bengaluru-based start-up, offers patients home visits from doctors, nurses, physiotherapists and technicians. Patients who are unable to visit hospitals can receive assistance from doctors and medical professionals through remote diagnostics, monitoring equipment and point-of-care devices.</li>
<li>AddressHealth, a Bengaluru-based start-up, provides primary pediatric healthcare services to school children, who are screened for hearing, vision, dental health and anthropometry, alongside a medical examination.</li>
<li>LiveHealth, a Pune-based start-up, works as a management information system (MIS) for healthcare providers, helping them track sample collection, manage patient records, run diagnostics and generate reports.</li>
</ol>
<p class="clearfix" style="text-align: justify; ">Artificial Intelligence, the next-gen innovative thing will act as an “invisible hand” in revolutionising the healthcare sector and is expected to grow in India to USD 372 billion by 2022.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/media-india-group-june-27-2018-binita-punwani-rise-of-ai-in-indian-healthcare-industry'>https://cis-india.org/internet-governance/news/media-india-group-june-27-2018-binita-punwani-rise-of-ai-in-indian-healthcare-industry</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · 2018-08-06T02:40:50Z · News Item
The AI Task Force Report - The first steps towards India’s AI framework
https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework
<b>The Task Force on Artificial Intelligence was established by the Ministry of Commerce and Industry to leverage AI for economic benefits, and provide policy recommendations on the deployment of AI for India.</b>
<p style="text-align: justify; ">The blog post was edited by Swagam Dasgupta. <a class="external-link" href="http://cis-india.org/internet-governance/files/ai-task-force-report.pdf">Download <strong>PDF</strong> here</a></p>
<hr />
<p><span style="text-align: justify; ">The Task Force’s Report, released on March 21st 2018, is a result of the combined expertise of members from different sectors</span><a name="_ftnref1"></a><span style="text-align: justify; "> and examines how AI will benefit India. It sheds light on the Task Force’s perception of AI, the sectors in which AI can be leveraged in India, the challenges endemic to India and certain ethical considerations. It concludes with a set of policy recommendations for the government to leverage AI for the next five years. While acknowledging AI as a social and economic problem solver,</span><a name="_ftnref2"></a><span style="text-align: justify; "> the Report attempts to answer three policy questions:</span></p>
<ul>
<li>What are the areas where government should play a role?</li>
<li>How can AI improve quality of life and solve problems at scale for Indian citizens?</li>
<li>What are the sectors that can generate employment and growth by the use of AI technology?</li>
</ul>
<p><span style="text-align: justify; ">This blog will look at how the Task Force answered these three policy questions. In doing so, it gives an overview of salient aspects and reflects on the strengths and weaknesses of the Report.</span></p>
<h3><span>Sectors of Relevance and Challenges</span></h3>
<p style="text-align: justify; ">In order to navigate the outlined questions, the Report looks at ten sectors that it refers to as ‘domains of relevance to India’. Furthermore, it examines the use of AI along with its major challenges, and possible solutions for each sector. These sectors include: Manufacturing, FinTech, Agriculture, Healthcare, Technology for the Differently-abled, National Security, Environment, Public Utility Services, Retail and Customer Relationship, and Education.<a name="_ftnref3"></a> While these ten domains are part of the 16 domains of focus listed in the AITF’s web page,<a name="_ftnref4"></a> it would have been useful to know the basis on which these sectors were identified. A particular strength of the identified sectors is the consideration of technology for the differently abled as well as the recognition to the development of AI systems in spoken and sign languages in the Indian context.<a name="_ftnref5"></a></p>
<p style="text-align: justify; "><span>Some of the problems endemic to India that were recognized include infrastructural barriers, managing scale and innovation, and the collection, validation and distribution of data.</span><a name="_ftnref6"></a><span> The Task Force also noted the lack of consumer awareness, and inability of technology providers to explain benefits to end users as further challenges.</span><a name="_ftnref7"></a><span> The Task Force — by putting the onus on the individual — seems to hint that the impediment to the uptake of technology is the inability of individuals to understand the benefits of the technology, rather than aspects such as poor design, opacity, or misuse of data and insights. Furthermore, although the Report recognizes the challenges associated to data in India and highlights the importance of quality and quantity of data; it overlooks the importance of data curation in creatinge reliable AI systems.</span><a name="_ftnref8"></a></p>
<p style="text-align: justify; ">Although the Report examines challenges to AI in each sector, it fails to include all challenges that require addressal. For example, the report fails to acknowledge challenges such as the lack of appropriate certification systems for AI driven health systems and technologies.<a name="_ftnref9"></a> In the manufacturing sector, the Report fails to highlight contextual challenges associated with the use of AI. This includes the deployment of autonomous vehicles compared to the use of industrial robots.<a name="_ftnref10"></a></p>
<p style="text-align: justify; ">On the use of AI in retail, the Report while examining consumer data and its respective regulatory policies, identified the issues to be related to the definition, discrimination, data breaches, digital products and safety awareness and reporting standards.<a name="_ftnref11"></a> In this, the Report is limited in its understanding of what categories of data can lead to discrimination and restricts mechanisms for transparency and accountability to data breaches. The Report could have also been more forward looking in its position on security — including security by design and security by default. Furthermore, these issues were noted only in the context of the retail sector and ideally should have been discussed across all sectors.</p>
<p style="text-align: justify; ">The challenges for utilizing AI for national security could have been examined beyond cost and capacity to include associated ethical and legal challenges such as the need for legal backing. The use of AI in national security demands clear accountability and oversight as it is a ground for legitimate state interference with fundamental rights such as privacy and freedom of expression. As such, there is a need for human rights impact assessments, as well as a need for such uses to be aligned with international human rights norms. Government initiatives that allow country wide surveillance and AI decisions based on such data should ideally be implemented only after a comprehensive privacy law is in place and India’s surveillance regime has been revisited.<a name="_ftnref12"></a></p>
<p style="text-align: justify; ">Recognizing the potential of AI for the benefit of the differently abled is one of the key takeaways from this section of the Report. Furthermore, it also brings in the need for AI inclusivity. AI in natural language generation and translation systems have the potential to help the large number of youth that are disabled or deprived.<a name="_ftnref13"></a> Therefore, AI could have a large positive impact through inclusive growth and empowerment.</p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">Although the Report examines each of the ten domains in an attempt to provide an insight into the role the government can play, there seems to be a lack of clarity in terms of the role that each department will and is playing with respect to AI. Even the section which lays down the relevant ministries for each of the ten domains failed to include key ministries and departments. For example, the Report does not identify the Ministry of Education, nor does it list the Ministry of Law for national security. The Report could have also identified government departments which would be responsible for regulation and standardization. This could include the Medical Council of India (healthcare), CII (manufacture and retail), RBI (Fintech) etc. The Report also does not recognize other developments around AI emerging out the government. For example, the Draft National Digital Communications Policy (published on May 1, 2018) seeks to empower the Department of Telecommunication to provide a roadmap for AI and robotics.<a name="_ftnref14"></a> Along similar lines, the Department of Defence Production has also created a task force earlier this year to study the use of AI to accelerate military technology and economic growth.<a name="_ftnref15"></a> The government should look at building a cohesive AI government body, or clearly delineating the role of each ministry, in order to ensure harmonization going forward.</p>
<h3>Areas in need of Government Intervention</h3>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">The Report also lists out the grand challenges where government intervention is required. This includes data collection and management and the need for widespread expertise contributing to research, innovation, and response. However, while highlighting the need for AI experts from diverse backgrounds, it fails to include experts from law and policy into the discussion.<a name="_ftnref16"></a> While identifying manufacturing, agriculture, healthcare and public utility to be places where government intervention is needed, the Report failed to examine national security beyond an important domain to India and as a sector where government intervention is needed.</p>
<p style="text-align: justify; "><strong>Participation in International Forums</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">Another relevant concern that the Report underscores is India’s scarce participation as researchers, AI developers and government engagement in global discussions around AI. The Report states that although efforts were being made by Indian universities to increase their presence in international AI conferences, they were lagging behind other nations. On the subject of participation by the government it recommends regular presence in International AI policy forums. Hence, emphasising the need for India’s active participation in global conversations around AI and international rulemaking.</p>
<h3><span>Key Enablers to AI</span></h3>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">The Report while analysing the key enablers for AI deployment in India states that positive societal attitudes will be the driving force behind the proliferation of AI.<a name="_ftnref17"></a> Although relying on positive social attitudes alone will not help in increasing the trust on AI, steps such as making algorithms that are used by public bodies public, enacting a data protection law etc. will be important in enabling trust beyond highlighting success stories.</p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; "><strong>Data and Data Marketplaces</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">While the Report identifies data as a challenge where government intervention is needed, it also points to the Aadhaar ecosystem as an enabler. It states that Aadhaar will help in the proliferation of AI in three ways: one as a creator of jobs as related to the collection and digitization of data, two as a collector of reliable data, and three as a repository of Indian data. However, since the very constitutionality of Aadhaar is yet to be determined by the Supreme Court,<a name="_ftnref18"></a> the task force should have used caution in identifying Aadhaar as a definitive solution. Especially while making statements that the Aadhaar along with the SC judgement has created adequate frameworks to protect consumer data. Additionally, the Task Force should have recognized the various concerns that have been voiced about Aadhaar, particularly in the context of the case before the Supreme Court.<a name="_ftnref19"></a></p>
<p style="text-align: justify; "><span>This section also proposes the creation of a Digital Data Marketplace. A data marketplace needs to be framed carefully so as to not create a situation where privacy becomes a right available to only those who can afford it.</span><a name="_ftnref20"></a><span> It is concerning that the discussion on data protection and privacy in the Report is limited to policies and guidelines for businesses and not centered around the individual.</span></p>
<p style="text-align: justify; "><span><strong>Innovation and Patents</strong></span></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">The Report states that the Indian startups working in the field of AI must be encouraged, and industry collaborations and funding must be taken up as a policy measure. One of the ways in which this could be achieved is by encouraging innovations, and one of the ways to do so is by adding a commercial incentive to it, such as through IP rights. Although the Report calls for a stronger IP regime that protects and incentivises innovation, it remains ambiguous as to which aspect of IP rights — patents, trade secrets and copyrights — need significant changes.<a name="_ftnref21"></a> If the Report is specifically advocating for stronger patent rights in order to match those of China and US, then it shows that the the task force fails to understand the finer aspects of Indian patent law and the history behind India’s stance on patenting. This includes the fact that Indian patent law excludes algorithms from being patented. Indian patent law, by providing a higher threshold for patenting computer related inventions (CRIs), ensures that only truly innovative patents are granted.<a name="_ftnref22"></a> Given the controversies over CRIs that have dotted the Indian patent landscape<a name="_ftnref23"></a>, the task force would have done well to provide more clarity on the ‘how’ and ‘why’ of patenting in this sector, if that is their intent with this suggestion.</p>
<h3><span>Ethical AI framework</span></h3>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; "><strong>Responsible AI</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">In terms of establishing an ethical AI framework, the Task Force suggests measures such as making AI explainable, transparent, and auditable for biases. The Report addresses the fact that currently with the increase in human and AI interaction there is a need to have new standards set for the deployment of AI as well as industrial standards for robots. However, the Report does not go into details of how AI could cause further bias based on various identifiers such as gender and caste, as well as the myriad concerns around privacy and security. This is especially a concern given that the Report envisions widespread use of AI in all major sectors. In this way, the Report looks at data as both a challenge and an enabler, but fails to dedicate time towards explaining the various ethical considerations behind the collection and use of data in the context of privacy, security and surveillance as well as account for unintended consequences. In laying out the ethical considerations associated with AI, the report does not make a distinction between the use of AI by the public sector and private sector. As the government is responsible for ensuring the rights of citizens and holds more power than the citizenry, the public sector needs to be more accountable in their use of AI. This is especially so in cases where AI is proposed to be used for sovereign functions such as national security.</p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; "><strong>Privacy and Data</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">The Report also recognises the significance of the implementation of the Aadhaar Act<a name="_ftnref24"></a>, the privacy judgement<a name="_ftnref25"></a> and the proposed data protection laws<a name="_ftnref26"></a>, on the development and use of AI for India. Yet, the Report does not seem to recognize the importance of a robust and multi-faceted privacy framework as it assumes that the Aadhaar Act and the Supreme Court Judgement on privacy and potential privacy law have already created a basis for safe and secure utilization and sharing of customer data.<a name="_ftnref27"></a> Although the Report has tried to be an expansive examination of various aspects of AI for India, it unfortunately has not looked in depth at the current issues and debates around AI privacy and ethics and makes policy recommendations without appearing to fully reflect on the implementation and potential impact of the same. Similar to the discussion paper by the Niti Aayog,<a name="_ftnref28"></a> this Report does not consider the emerging principles of data protection such as right to explanation and right to opt-out of automated processing, which directly relate to AI.<a name="_ftnref29"></a> Furthermore, there is a lack of discussion on issues such as data minimisation and purpose limitation which some big data and AI proponents argue against.<a name="_ftnref30"></a></p>
<p style="text-align: justify; "><span><strong>Liability</strong></span></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">On the question of liability, the Report only states that specific liability mechanisms need to be worked out for certain categories of machines. The Report does not address the questions of liability that should be applicable to all AI systems, and on whom the duty of care lies, not only in case of robots but also in the case of automated decision making etc. Thus, there is a need for further thinking on mechanisms for determining liability and how these could apply to different types of AI (deep learning models and other machine learning models) and AI systems.</p>
<p style="text-align: justify; "><strong>AI and Employment </strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">On the topic of jobs and employment, the Report states that AI will create more jobs than it takes as a result of an increase in the number of companies and avenues created by AI technologies. Additionally, the Report provides examples of jobs where AI could replace the human (autonomous drivers, industrial robots etc,) but does not go as far as envisioning what jobs could be created directly from this replacement. Though the Report recognizes emerging forms of work such as crowdsourcing platforms like Mturk<a name="_ftnref31"></a>, it fails to examine the impact of such models of work on workers and traditional labour market structures and processes.<a name="_ftnref32"></a> Going forward, it will be important that the government and the private sector undertake the necessary steps to ensure that fair, protected, and fulfilling jobs are created simultaneously with the adoption of AI. This will include revisiting national and organizational skilling programmes, labor laws, social benefit schemes, relevant economic policies, and exploring best practices with respect to the adoption and integration of AI in work.</p>
<p style="text-align: justify; "><strong>Education and Re-skilling</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">The task force emphasised the need for a change in the education curriculum as well as the need to reskill the labour force to ensure an AI ready future. This level of reskilling will be a massive effort, and a thorough review and audit of existing skilling programmes in India is needed before new skilling programmes are established and financed. The Report also clarifies that the statistics used were based on a study on the IT component of the industry, and that a similar study was required to analyse AI’s effect on the automation component.<a name="_ftnref33"></a> Going forward, there is the need for a comprehensive study of the labour intensive sectors and formal and informal sectors to develop evidence based policy responses.</p>
<p style="text-align: justify; "><strong>Policy Recommendations </strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">The Task Force<sub>,</sub> in its policy recommendations, notes that the successful adoption of AI in India will depend on three factors: people, process and technology. However, it does not explain these three factors any further.</p>
<p style="text-align: justify; "><strong>National Artificial Intelligence Mission</strong></p>
<p style="text-align: justify; ">The most significant suggestion made in the Report is for the establishment of the National Artificial Intelligence Mission (N-AIM) — a centralised nodal agency for coordinating and facilitating research, collaboration and providing economic impetuous to AI startups.<a name="_ftnref34"></a> The mission with a budget allocation of Rs 1,200 crore over five years aims, among other things, to look at various ways to encourage AI research and deployment.<a name="_ftnref35"></a> Some of the suggestions include targeting and prototyping AI systems and setting up of a generic AI test bed. These suggestions seems to draw inspiration from other countries such as the US DARPA Challenge<a name="_ftnref36"></a> and Japan’s sandbox for self driving trucks.<a name="_ftnref37"></a> The establishment of N-AIM is a welcome step to encourage both AI research and development on a national scale. The availability of public funds will encourage more AI research and development.<a name="_ftnref38"></a>Additionally, government engagement in AI projects has thus far been fragmented<a name="_ftnref39"></a>and a centralised body will presumably bring about better coordination and harmonization. Some of the initiatives such as Capture the flag competition<a name="_ftnref40"></a> that seeks to centre around the provision for real datasets to catalyze innovation will need to be implemented with appropriate safeguards in place.</p>
<p style="text-align: justify; "><strong>Other recommendations</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">There are other suggestions that are problematic — particularly that of funding “an inter-disciplinary large data integration center in pilot mode to develop an autonomous AI Machine that can work on multiple data streams in real time and provide relevant information and predictions to public across all domains.”<a name="_ftnref41"></a> Before such a project is developed and implemented there are a number of factors where legal clarity is required; a few being: data collection and use, accuracy and quality of the AI system. There is also a need to ensure that bias and discrimination have been accounted for and fairness, responsibility and liability have been defined with consideration that this will be a government driven AI system. Additionally, such systems should be transparent by design and should include redress mechanisms for potential harms that may arise. This can be through the presence of a human in the loop, or the existence of a kill switch. These should be addressed through ethical principles, standards, and regulatory frameworks.</p>
<p style="text-align: justify; ">The recommendations propose establishing operation standards for data storage and privacy, communication standards for autonomous systems, and standards to allow for interoperability between AI based systems. A significant lacuna in this list is the development of safety, accuracy, and quality standards for AI algorithms and systems.</p>
<p style="text-align: justify; ">Similarly, although the proposed public private partnership model for research and startups is a good idea, this initiative should be undertaken only after questions such as the implications of liability, ownership of IP and data, and the exclusion of critical sectors are thought through.</p>
<p style="text-align: justify; ">Furthermore, the suggestion to ‘fund a national level survey on identification of cluster of clean annotated data necessary for building effective AI systems’<a name="_ftnref42"></a> needs to recognize the existing initiatives around open data or use this as a starting place. The Report does not clarify if this survey would involve identifying data.</p>
<p style="text-align: justify; "><strong>Conclusion</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">The inconspicuous release of the Report as well as the lack of a call for public comments<a name="_ftnref43"></a> results in the fact that the Report does not incorporate or reflect on the sentiments of the public or draw upon the expertise that exists in India on the topic or policies around emerging technologies, which will have a pervasive and wide effect on society. The need for multi stakeholder engagement and input cannot be understated. Nonetheless, the Report of the Task Force is a welcome step towards understanding the movement towards an definitive AI policy. The task force has attempted answering the three policy questions keeping people, process and technology in mind. However, it could have provided greater details about these indices. The Report, which is meant for a wider audience, would have done well to provide greater detail, while also providing clarity on technical terms. On a definitional plane, a list of technologies that the task force perceived as AI for this Report, could have also helped keep it grounded on possible and plausible 5 year recommendations.</p>
<p style="text-align: justify; "><span>Compared to the recent Niti Aayog Discussion Paper</span><a name="_ftnref44"></a><span>, this Report misses out on a detailed explanation on AI and ethics, however, it does spend some considerable amount of time on education and the use of AI for the differently abled. Additionally, the Report’s statement on the democratization of development and equal access as well as assigning ownership and framing transparent rules for usage of the infrastructure is a positive step towards making AI inclusive. Overall, the Report is a progressive step towards laying down India’s path forward in the field of Artificial Intelligence. The emphasis on India’s involvement in International rulemaking gives India an opportunity to be a leader of best practice in international forums by adopting forward looking and human rights respecting practices. Whether India will also become a strong contender in the AI race, with policies favouring the development of a socio-economically beneficial, and ethical-AI backed industries and services is yet to be seen.</span></p>
<p style="text-align: justify; "><a name="_ftn1"></a><span> The Task Force consists of 18 members in total. Of these, 11 members are from the field of AI technology both research and industry, three from the civil services, one from healthcare research, one with and Intellectual property law background, and two from a finance background. The specializations of the members are not limited to one area as the members have experience or education in various areas relevant to AI. </span><a href="https://www.aitf.org.in/">https://www.aitf.org.in//</a><span> There is a notable lack of members from Civil Society. It may also be noted that only 2 of the 18 members are women</span></p>
<p style="text-align: justify; "><a name="_ftn2"></a> The Report on the Artificial Intelligence Task Force, Pg. 1,<span>http://dipp.nic.in/sites/default/files/Report_of_Task_Force_on_ArtificialIntelligence_20March2018_2.pdf</span></p>
<p style="text-align: justify; "><a name="_ftn3"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn4"></a> The Artificial Intelligence Task Force https://www.aitf.org.in/</p>
<p style="text-align: justify; "><a name="_ftn5"></a> The Report on the Artificial Intelligence Task Force, Pg. 8</p>
<p style="text-align: justify; "><a name="_ftn6"></a> The Report on the Artificial Intelligence Task Force, Pg. 9,10.</p>
<p style="text-align: justify; "><a name="_ftn7"></a> The Report on the Artificial Intelligence Task Force, Pg. 9</p>
<p style="text-align: justify; "><a name="_ftn8"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn9"></a> Artificial Intelligence in the Healthcare Industry in India https://cis-india.org/internet-governance/files/ai-and-healtchare-report</p>
<p style="text-align: justify; "><a name="_ftn10"></a>Artificial Intelligence in the Manufacturing and Services Sector https://cis-india.org/internet-governance/files/AIManufacturingandServices_Report _02.pdf</p>
<p style="text-align: justify; "><a name="_ftn11"></a> The Report on the Artificial Intelligence Task Force, Pg. 21.</p>
<p style="text-align: justify; "><a name="_ftn12"></a> Submission to the Committee of Experts on a Data Protection Framework for India, Centre for Internet and Society https://cis-india.org/internet-governance/files/data-protection-submission</p>
<p style="text-align: justify; "><a name="_ftn13"></a> The Report on the Artificial Intelligence Task Force, Pg. 22</p>
<p style="text-align: justify; "><a name="_ftn14"></a> Draft National Digital Communications Policy-2018, http://www.dot.gov.in/relatedlinks/draft-national-digital-communications-policy-2018</p>
<p style="text-align: justify; "><a name="_ftn15"></a> Task force set up to study AI application in military,https://indianexpress.com/article/technology/tech-news-technology/task-force-set-up-to-study-ai-application-in-military-5049568/</p>
<p style="text-align: justify; "><a name="_ftn16"></a>It is not just technical experts that are needed, ethical, technical, and legal experts as well as domain experts need to be part of the decision making process.</p>
<p style="text-align: justify; "><a name="_ftn17"></a> The Report on the Artificial Intelligence Task Force, Pg. 31</p>
<p style="text-align: justify; "><a name="_ftn18"></a>Constitutional validity of Aadhaar: the arguments in Supreme Court so far, http://www.thehindu.com/news/national/constitutional-validity-of-aadhaar-the-arguments-in-supreme-court-so-far/article22752084.ece</p>
<p style="text-align: justify; "><a name="_ftn19"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn20"></a> CIS Submission to TRAI Consultation on Free Data http://trai.gov.in/Comments_FreeData/Companies_n_Organizations/Center_For_Internet_and_Society.pdf</p>
<p style="text-align: justify; "><a name="_ftn21"></a> The Report on the Artificial Intelligence Task Force, Pg. 30</p>
<p style="text-align: justify; "><a name="_ftn22"></a> Section 3(k) of the patent act describes that a mere mathematical or business method or a computer programme or algorithm cannot be patented.</p>
<p style="text-align: justify; "><a name="_ftn23"></a>Patent Office Reboots CRI Guidelines Yet Again: Removes “novel hardware” Requirement</p>
<p style="text-align: justify; ">https://spicyip.com/2017/07/patent-office-reboots-cri-guidelines-yet-again-removes-novel-hardware-requirement.html</p>
<p style="text-align: justify; "><a name="_ftn24"></a> The Report on the Artificial Intelligence Task Force, Pg. 37</p>
<p style="text-align: justify; "><a name="_ftn25"></a>The Report on the Artificial Intelligence Task Force, Pg. 7</p>
<p style="text-align: justify; "><a name="_ftn26"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn27"></a> The Report on the Artificial Intelligence Task Force, Pg. 8</p>
<p style="text-align: justify; "><a name="_ftn28"></a> National Strategy for Artificial Intelligence: <a href="http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf">http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf</a></p>
<p style="text-align: justify; "><a name="_ftn29"></a> Meaningful information and the right to explanation,Andrew D Selbst Julia Powles, International Data Privacy Law, Volume 7, Issue 4, 1 November 2017, Pages 233–242</p>
<p style="text-align: justify; "><a name="_ftn30"></a> The Principle of Purpose Limitation and Big Data, https://www.researchgate.net/publication/319467399_The_Principle_of_Purpose_Limitation_and_Big_Data</p>
<p style="text-align: justify; "><a name="_ftn31"></a> M-Turk https://www.mturk.com/</p>
<p style="text-align: justify; "><a name="_ftn32"></a> For example a lesser threshold of minimum wages, no job secuirity etc, https://blogs.scientificamerican.com/guilty-planet/httpblogsscientificamericancomguilty-planet20110707the-pros-cons-of-amazon-mechanical-turk-for-scientific-surveys/</p>
<p style="text-align: justify; "><a name="_ftn33"></a> The Report on the Artificial Intelligence Task Force, Pg. 41</p>
<p style="text-align: justify; "><a name="_ftn34"></a> Report of Artificial Intelligence Task Force Pg, 46, 47</p>
<p style="text-align: justify; "><a name="_ftn35"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn36"></a>The DARPAChallenge https://www.darpa.mil/program/darpa-robotics-challenge</p>
<p style="text-align: justify; "><a name="_ftn37"></a>Japan may set regulatory sandboxes to test drones and self driving vehicles http://techwireasia.com/2017/10/japan-may-set-regulatory-sandboxes-test-drones-self-driving-vehicles/</p>
<p style="text-align: justify; "><a name="_ftn38"></a> Mariana Mazzucato in her 2013 book The Entrepreneurial State, argued that it was the government that drives technological innovation. In her book she stated that high-risk discovery and development were made possible by government spending, which the private enterprises capitalised once the difficult work was done.</p>
<p style="text-align: justify; "><a name="_ftn39"></a><a href="https://tech.economictimes.indiatimes.com/news/technology/govt-of-karnataka-launches-centre-of-excellence-for-data-science-and-artificial-intelligence/61689977">https://tech.economictimes.indiatimes.com/news/technology/govt-of-karnataka-launches-centre-of-excellence-for-data-science-and-artificial-intelligence/61689977</a>,https://analyticsindiamag.com/amaravati-world-centre-for-ai-data/</p>
<p style="text-align: justify; "><a name="_ftn40"></a> The Report on the Artificial Intelligence Task Force, Pg. 47</p>
<p style="text-align: justify; "><a name="_ftn41"></a> Report of Artificial Intelligence Task Force Pg. 49</p>
<p style="text-align: justify; "><a name="_ftn42"></a> The Report on the Artificial Intelligence Task Force, Pg. 47</p>
<p style="text-align: justify; "><a name="_ftn43"></a> The AI task force website has a provision for public comments although it is only for the vision and mission and the domains mentioned in the website.</p>
<p style="text-align: justify; "><a name="_ftn44"></a>National Strategy for Artificial Intelligence: <a href="http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf">http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework'>https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework</a>
</p>
No publisher · Elonnai Hickok, Shweta Mohandas and Swaraj Paul Barooah · Internet Governance · Artificial Intelligence · Privacy · 2018-06-27T14:32:56Z · Blog Entry
Technology Foresight Group Tandem Research's AI policy lab on the theme AI and Environment
https://cis-india.org/internet-governance/news/technology-foresight-group-tandem-researchs-ai-policy-lab-on-the-theme-ai-and-environment
<b>Shweta Mohandas attended a roundtable discussion on artificial intelligence and environment held at Tandem Research's office in Goa on October 5, 2018. She also made the framing intervention for the first session by addressing the question - What are the likely ethical conundrums, and plausible unintended consequences of the use of AI for sustainability?</b>
<dl style="text-align: justify; ">
<p>Conversations at the lab clustered around four main themes:</p>
<p><b>AI in the Anthropocene</b><br />What are the most critical sustainability challenges in India – and can AI be useful in addressing them? What are the likely ethical conundrums, and plausible unintended consequences of the use of AI for sustainability?<br /><br /><b>Conservation after nature</b><br />What AI interventions are possible to foster better conservation, and can AI-driven citizen science initiatives improve people’s relationship with the natural world? Can AI help imagine a more dynamic and proximate co-existence with other species, after nature?<br /><br /><b>Water ecosystems</b><br />Can AI help us imagine new paradigms of water control and infrastructure that are more dynamic and ‘mirror’ the complexity of natural water systems? Will AI lead to decentralization and empowerment of water users, or will it result in centralized models and a loss of power and agency for water users?<br /><br /><b>Future Cities</b><br />Can AI systems be used to foster sustainability practices around mobility, energy, and waste, and help better plan development zones and create early warning systems? What systems can be built to encourage citizen participation in solving sustainability problems and to increase the transparency and accountability of municipal governments?</p>
</div>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/technology-foresight-group-tandem-researchs-ai-policy-lab-on-the-theme-ai-and-environment'>https://cis-india.org/internet-governance/news/technology-foresight-group-tandem-researchs-ai-policy-lab-on-the-theme-ai-and-environment</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · Privacy · 2018-10-31T01:10:34Z · News Item
Talks at National University of Juridical Sciences Today
https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today
<b>Arindrajit Basu delivered two lectures at the National University of Juridical Sciences on September 18, 2019. </b>
<p style="text-align: justify; ">The first one was part of a symposium being conducted by the soon to be set up Intellectual Property and Technology Law Centre. I spoke on "Conceptualising India's Digital Policy Vision" The other speaker today was Mr. Supratim Chakraborty (Partner, Khaitan&Co.) Tomorrow's speakers are Prof. Mahendra Kumar Bhandan and Nikhil Narendran (Partner, Trilegal)</p>
<p style="text-align: justify; "><b>Abstract</b></p>
<p style="text-align: justify; ">The past year has seen vigorous activity on the domestic data governance policy front in India. Across key issues including intermediary liability, data localisation and e-commerce, the government has rolled out a patchwork of regulatory policies that has resulted in battle lines being drawn by governments, industry and civil society actors both in India and across the globe. The Data Protection Bill is set to be tabled in the next session of Parliament amidst supposed disagreement among policy-makers on key provisions, including data localization. The draft e-commerce policy and Chapter 4 of the Economic Survey refer to the concepts of ‘community data’ and ‘data as public good’ respectively. Artifiicial Intelligence is also the new buzz word among policy-making circles and industry players alike.<br /><br />The implementation of each of these concepts have important implications for individual privacy, the monetisation of data by (foreign tech companies) and the harnessing of-as the e-commerce policy puts it-India’s data for India’s development. Meanwhile, at international forums such as the G20, India has partnered up with its BRICS allies to emphasize the notion of ‘data sovereignty’ or the right of each country to govern data within its jurisdiction without external interference.<br />In his talk, Basu unpacked each of these policies and followed up with a discussion on what these developments meant for Indian citizens and for India’s role in the multilateral global order.</p>
<p style="text-align: justify; ">The second one was on 'Constitutionalizing Artificial Intelligence' conducted by the Constitutional Law Society. Here, I drew from some preliminary findings from a paper I am working on with Elonnai and Amber.</p>
<p style="text-align: justify; "><b>Abstract</b></p>
<p style="text-align: justify; ">The use of big data and algorithmic decision-making has been touted world over as a means of augmenting human capacities, removing bureaucratic fetters and benefiting society. Yet, with concerns arising around bias, fairness and a lack of algorithmic accountability, an entirely new domain of discourse on data justice has emerged - underscoring the idea that algorithms not only have the potential to exacerbate entrenched structural inequality but could also create and modulate new forms of injustice for the vulnerable sections of society.</p>
<p style="text-align: justify; "><span>There is a need for a reflexive turn in the debate on data justice that adequately considers the broader narrative and entrenched inequality in the ecosystem. </span><span>Transformative constitutionalism is a new brand of scholarship in comparative constitutional law which celebrates the crucial role of the state and the judiciary in bringing about emancipatory change and rooting out structural inequality.</span></p>
<p style="text-align: justify; ">Originally conceptualized as a Global South concept designed as a counter-model to the individual rights-driven model of Northern Constitutions, scholars have now identified emancipatory provisions in several western constitutions such as Germany. India’s constitution is one such example. The origins of constitutional order in India were designed to “bring the alien and powerful machine like that of the state under the control of human will” and to eliminate the inequality of “status, facilities and opportunities.” <br /><br />What is the relevance of India's constitutional ethos in the regulation of modern day data driven decision-making? How can policy-makers use constitutional tenets to mitigate structural injustice and transform the bearings of 21st century Indian society?</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today'>https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today</a>
</p>
No publisher · Admin · Industry 4.0 · Internet Governance · Artificial Intelligence · 2019-09-20T14:45:35Z · News Item
Speculative Futures Lab on Artificial Intelligence in Media, Entertainment, and Gaming
https://cis-india.org/internet-governance/news/speculative-futures-lab-on-artificial-intelligence-in-media-entertainment-and-gaming
<b>Pranav Manjesh Bidare attended the event organised by Quicksand between November 16 and 18, 2018 in Bangalore as a panelist.</b>
<p style="text-align: justify; ">Pranav was a panelist in the session discussing "Ethics of AI in the Creative spaces" on November 17, alongside Urvashi Aneja, and Abishek Reddy from Tandem Research. For more info <a class="external-link" href="http://cis-india.org/internet-governance/files/Quicksand%20AI%20Futures%20Lab.pdf">see this</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/speculative-futures-lab-on-artificial-intelligence-in-media-entertainment-and-gaming'>https://cis-india.org/internet-governance/news/speculative-futures-lab-on-artificial-intelligence-in-media-entertainment-and-gaming</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · 2018-12-05T03:12:58Z · News Item
Society 5.0 and Artificial Intelligence with a Human Face
https://cis-india.org/internet-governance/news/society-5-0-and-artificial-intelligence-with-a-human-face
<b>On 10 May 2019 Radhika Radhakrishnan attended a stakeholder's roundtable consultation on "Society 5.0 and Artificial Intelligence with a Human Face", organized by the Indian Council for Research on International Economic Relations (ICRIER) at India Habitat Centre, New Delhi. The event aimed to chart a roadmap for India’s participation at the G20, under the Japanese Presidency.</b>
<p style="text-align: justify; ">The agenda can be <a class="external-link" href="http://icrier.org/newsevents/seminar-details/?sid=460">found here</a>. Radhika's inputs were primarily focused on the feminist and gender implications of publicly deployed AI models, challenges and opportunities for academic AI-focused research in the Global South, recommendations for AI capacity building and skilling in the Global South, and regulation of black-box AI.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/society-5-0-and-artificial-intelligence-with-a-human-face'>https://cis-india.org/internet-governance/news/society-5-0-and-artificial-intelligence-with-a-human-face</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · 2019-05-14T14:51:56Z · News Item