The Centre for Internet and Society
https://cis-india.org
The Srikrishna Committee Data Protection Bill and Artificial Intelligence in India
https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india
<b>Artificial Intelligence in many ways is in direct conflict with traditional data protection principles and requirements including consent, purpose limitation, data minimization, retention and deletion, accountability, and transparency.</b>
<h3 style="text-align: justify; ">Privacy Considerations in AI</h3>
<p style="text-align: justify; ">Other related privacy concerns in the context of AI center around re-identification and de-anonymisation, discrimination, unfairness, inaccuracy, bias, opacity, profiling, misuse of data, and embedded power dynamics.<a href="#_ftn1" name="_ftnref1"><sup>[1]</sup></a></p>
<p style="text-align: justify; ">The need for large amounts of data to improve accuracy, the ability to process vast amounts of granular data, and the present relationship between explainability and the results of AI systems<a href="#_ftn2" name="_ftnref2"><sup><sup>[2]</sup></sup></a> have raised many concerns on both sides of the fence. On one hand, there is concern that heavy-handed or inappropriate regulation will stifle innovation: if developers can only use data for pre-defined purposes, the prospects of AI are limited. On the other hand, individuals are concerned that privacy will be significantly undermined by AI systems that collect and process data in real time and at a personal level not previously possible. Chatbots, home assistants, wearable devices, robot caregivers, facial recognition technology, etc. have the ability to collect data from a person at an intimate level. At the same time, some have argued that AI can work towards protecting privacy by limiting the access that humans working at the respective companies have to personal data.<a href="#_ftn3" name="_ftnref3"><sup><sup>[3]</sup></sup></a></p>
<p style="text-align: justify; ">India is embracing AI. Two national roadmaps for AI were released in 2018, by the Ministry of Commerce and Industry and by Niti Aayog respectively. Both roadmaps emphasized the importance of addressing privacy concerns in the context of AI and of ensuring that robust privacy legislation is enacted. In August 2018, the Srikrishna Committee released a draft Personal Data Protection Bill 2018 and an associated report that outlines and justifies a framework for privacy in India. As the development and use of AI in India continues to grow, it is important that India simultaneously moves forward with a privacy framework that addresses the privacy dimensions of AI.</p>
<p style="text-align: justify; ">In this article we attempt to analyse if and how the Srikrishna Committee draft Bill and report have addressed AI, contrast this with developments in the EU and the passing of the GDPR, and identify solutions being explored for developing AI while upholding and safeguarding privacy.</p>
<h3 style="text-align: justify; ">The GDPR and Artificial Intelligence</h3>
<p style="text-align: justify; ">The General Data Protection Regulation became enforceable in May 2018 and establishes a framework for the processing of personal data of individuals within the European Union. The GDPR has been described by the IAPP as taking a ‘risk based’ approach to data protection that pushes data controllers to engage in risk analysis and adopt ‘risk measured responses’.<a href="#_ftn4" name="_ftnref4"><sup><sup>[4]</sup></sup></a> Though the GDPR does not explicitly address artificial intelligence, it does have a number of provisions that address automated decision making and profiling, and a number of provisions that will impact companies using artificial intelligence in their business activities. These are outlined below:</p>
<ol style="text-align: justify; ">
<li><b>Data rights: </b> The GDPR grants individuals a number of data rights: the right to be informed, right of access, right to rectification, right to erasure, right to restrict processing, right to data portability, right to object, and rights related to automated decision making including profiling. The last of these - rights related to automated decision making - seeks to address concerns arising out of automated decision making by giving the individual the right not to be subject to a decision based solely on automated processing, including profiling, where the decision would produce legal effects or similarly significantly affect them. There are three exceptions to this right - where the automated decision making is: (a) necessary for the performance of a contract, (b) authorised by Union or Member State law, or (c) based on explicit consent.<a href="#_ftn5" name="_ftnref5"><sup><sup>[5]</sup></sup></a> </li>
<li><b>Transparency:</b> Under Article 14, data controllers must enable the right to opt out of automated decision making by notifying individuals of the existence of automated decision making, including profiling, and providing meaningful information about the logic involved as well as the potential consequences of such processing.<a href="#_ftn6" name="_ftnref6"><sup><sup>[6]</sup></sup></a> Importantly, this requirement has the potential to ensure that companies do not operate completely ‘black box’ algorithms within their business processes.</li>
<li><b>Fairness: </b>The principle of fairness found under Article 5(1) will also apply to the processing of personal data by AI. The principle requires that personal data be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Recital 71 further clarifies that this includes implementing appropriate mathematical and statistical measures for profiling, correcting inaccuracies, and ensuring that processing does not produce discriminatory results.<a href="#_ftn7" name="_ftnref7"><sup><sup>[7]</sup></sup></a> </li>
<li><b>Purpose Limitation:</b> The principle of purpose limitation (Article 5(1)(b)) requires that personal data be collected for specified, explicit, and legitimate purposes and not be further processed in a manner incompatible with those purposes. Processing for archiving purposes in the public interest, scientific or historical research purposes, or statistical purposes is not considered incompatible with the initial purposes. It has been noted that it is unclear whether research carried out through artificial intelligence would fall under this exception, as the GDPR does not define ‘scientific purposes’.<a href="#_ftn8" name="_ftnref8"><sup><sup>[8]</sup></sup></a> </li>
<li><b>Privacy by Design and Default:</b> Article 25 requires all data controllers to implement technical and organizational measures to meet the requirements of the regulation. This could include techniques like pseudonymisation. Data controllers also are required to implement appropriate technical and organizational measures for ensuring that by default only personal data which are necessary for a specific purpose are processed.<a href="#_ftn9" name="_ftnref9"><sup><sup>[9]</sup></sup></a></li>
<li><b>Data Protection Impact Assessments:</b> Article 35 requires data controllers to undertake impact assessments if they are undertaking processing that is likely to result in a high risk to individuals. This includes where the data controller undertakes systematic and extensive profiling, processes special categories of data or criminal offence data on a large scale, or systematically monitors publicly accessible places on a large scale. In implementation, some jurisdictions like the UK require impact assessments under additional conditions, including where the data controller: uses new technologies; uses profiling or special category data to decide on access to services; profiles individuals on a large scale; processes biometric data; processes genetic data; matches data or combines datasets from different sources; collects personal data from a source other than the individual without providing them with a privacy notice; tracks individuals’ location or behaviour; profiles children or targets marketing or online services at them; or processes data that might endanger the individual’s physical health or safety in the event of a security breach.<a href="#_ftn10" name="_ftnref10"><sup><sup>[10]</sup></sup></a></li>
<li><b>Security:</b> Article 32 requires data controllers to ensure a level of security appropriate to the risk, including employing methods like encryption and pseudonymization. </li>
</ol>
<h3 style="text-align: justify; ">Srikrishna Committee Bill and AI</h3>
<p style="text-align: justify; ">The Draft Data Protection Bill and associated report by the Srikrishna Committee were published in August 2018 and recommend a privacy framework for India. The Bill contains a number of provisions that will directly impact data fiduciaries using AI and that try to account for the unintended consequences of emerging technologies like AI. These include:</p>
<ol style="text-align: justify; ">
<li><b>Definition of Harm:</b> The Bill defines harm as including: bodily or mental injury; loss, distortion or theft of identity; financial loss or loss of property; loss of reputation or humiliation; loss of employment; any discriminatory treatment; any subjection to blackmail or extortion; any denial or withdrawal of a service, benefit or good resulting from an evaluative decision about the data principal; any restriction placed or suffered directly or indirectly on speech, movement or any other action arising out of a fear of being observed or surveilled; and any observation or surveillance that is not reasonably expected by the data principal. The Bill also allows categories of significant harm to be further defined by the data protection authority.</li>
</ol>
<p style="text-align: justify; ">Many of the above are harms that have been associated with artificial intelligence - specifically loss of employment, discriminatory treatment, and denial of service. Enabling the data protection authority to further define categories of significant harm could allow unexpected harms arising from the use of AI to come under the ambit of the Bill.</p>
<ul style="text-align: justify; ">
<li><b>Data Rights:</b> Like the GDPR, the Bill creates a set of data rights for the individual, including the right to confirmation and access, correction, data portability, and the right to be forgotten. At the same time, the Bill is intentionally silent on the rights and obligations in the GDPR that address automated decision making: the right to object to processing,<a href="#_ftn11" name="_ftnref11"><sup><sup>[11]</sup></sup></a> the right to opt out of automated decision making<a href="#_ftn12" name="_ftnref12"><sup><sup>[12]</sup></sup></a>, and the obligation on the data controller to inform the individual about the use of automated decision making and to provide basic information regarding its logic and impact.<a href="#_ftn13" name="_ftnref13"><sup><sup>[13]</sup></sup></a> As justification, the Committee noted the following in its report: the right to restrict processing may be unnecessary in India, as it provides only interim remedies around issues such as inaccuracy of data, and the same result can be achieved by a data principal approaching the DPA or the courts for a stay on processing, or by simply withdrawing consent. The objective of protecting against discrimination, bias, and opaque decisions - which the right to object to automated processing and to receive information about the processing of data seeks to fulfil - would, in the Indian context, be better achieved through an accountability framework requiring data fiduciaries that make evaluative decisions through automated means to set up processes that ‘weed out’ discrimination. At the same time, if discrimination has taken place, individuals can seek remedy through the courts.</li>
</ul>
<p style="text-align: justify; ">By taking this approach, the Bill creates a framework to address harms arising out of AI, but does not empower the individual to decide how their data is processed and remains silent on the issue of ‘black box’ algorithms.</p>
<ul style="text-align: justify; ">
<li><b>Data Quality</b>: Requires data fiduciaries to ensure that personal data processed is complete, accurate, not misleading, and updated with respect to the purposes for which it is processed. When taking steps to comply, data fiduciaries must take into consideration whether the personal data is likely to be used to make a decision about the data principal, whether it is likely to be disclosed to other individuals, and whether it is kept in a form that distinguishes personal data based on facts from personal data based on opinions or personal assessments.<a href="#_ftn14" name="_ftnref14"><sup><sup>[14]</sup></sup></a></li>
</ul>
<p style="text-align: justify; ">This principle, while not mandating that data fiduciaries take into account considerations such as biases in datasets, could potentially be interpreted by the data protection authority to include within its scope means of ensuring that data does not contain or result in bias.</p>
<ul style="text-align: justify; ">
<li><b>Principle of Privacy by Design:</b> Requires significant data fiduciaries to have in place a number of policies and measures around several aspects of privacy. These include: (a) measures to ensure that managerial, organizational, business practices and technical systems are designed in a manner to anticipate, identify, and avoid harm to the data principal; (b) that the obligations mentioned in Chapter II are embedded in organisational and business practices; (c) that technology used in the processing of personal data is in accordance with commercially accepted or certified standards; (d) that the legitimate interests of business, including any innovation, are achieved without compromising privacy interests; (e) that privacy is protected throughout processing, from the point of collection to the deletion of personal data; (f) that processing of personal data is carried out in a transparent manner; and (g) that the interest of the data principal is accounted for at every stage of processing of personal data.</li>
</ul>
<p style="text-align: justify; ">A number of these (a, d, e, and g) require that the interest of the data principal is accounted for throughout the processing of personal data. This will be significant for systems driven by artificial intelligence, as a number of the harms that have arisen from the use of AI - discrimination, denial of service, and loss of employment - have been brought under the definition of harm in the Bill. Placing the interest of the data principal first is also important in protecting against unintended consequences or harms that may arise from AI.<a href="#_ftn15" name="_ftnref15"><sup><sup>[15]</sup></sup></a> If enacted, it will be important to see what policies and measures emerge in the context of AI to comply with this principle, and what commercially accepted or certified standards companies rely on to comply with (c).</p>
<ul style="text-align: justify; ">
<li><b>Data Protection Impact Assessment:</b> Requires data fiduciaries to undertake a data protection impact assessment when implementing new technologies, undertaking large scale profiling, or using sensitive personal data. Such assessments need to include a detailed description of the proposed processing operation, the purpose of the processing and the nature of personal data being processed, an assessment of the potential harm that may be caused to the data principals whose personal data is proposed to be processed, and measures for managing, minimising, mitigating or removing such risk of harm. If the Authority finds that the processing is likely to cause harm to the data principals, it may direct the data fiduciary to cease such processing or to carry it out only subject to conditions. This requirement applies to all significant data fiduciaries and to all other data fiduciaries as required by the DPA.<a href="#_ftn16" name="_ftnref16"><sup><sup>[16]</sup></sup></a></li>
</ul>
<p style="text-align: justify; ">This principle will apply to companies implementing AI systems. For AI systems, it will be important to see how much information the DPA will require under the requirement that data fiduciaries provide detailed descriptions of the proposed processing operation and the purpose of processing.</p>
<ul style="text-align: justify; ">
<li><b>Classification of data fiduciaries as significant data fiduciaries</b>: The Authority has the ability to notify certain categories of data fiduciaries as significant data fiduciaries based on: the volume of personal data processed; the sensitivity of personal data processed; the turnover of the data fiduciary; the risk of harm resulting from any processing being undertaken by the fiduciary; the use of new technologies for processing; and any other factor relevant to causing harm to any data principal. If a data fiduciary falls under the ambit of any of these conditions, it is required to register with the Authority. All significant data fiduciaries must undertake data protection impact assessments, maintain records as per the Bill, undergo data audits, and have in place a data protection officer.</li>
</ul>
<p style="text-align: justify; ">As per this provision, companies deploying artificial intelligence would come under the definition of a significant data fiduciary and be subject to the principles of privacy by design etc. articulated in the chapter. The exception will be where the data fiduciary comes under the definition of ‘small entity’ found in section 48.<a href="#_ftn17" name="_ftnref17"><sup><sup>[17]</sup></sup></a></p>
<ul style="text-align: justify; ">
<li><b>Restrictions on cross border transfer of personal data: </b>Requires that all data fiduciaries store a copy of personal data on a server or data centre located in India, and that notified categories of critical personal data be processed only on servers located in India.</li>
</ul>
<p style="text-align: justify; ">It is interesting to note that, in the context of cross border sharing of data, the Bill creates a new category of data that can be further defined beyond personal and sensitive personal data. For companies implementing artificial intelligence, this provision may prove cumbersome to comply with, as many utilize cloud storage and facilities located outside of India to process large amounts of data.<a href="#_ftn18" name="_ftnref18"><sup><sup>[18]</sup></sup></a></p>
<ul style="text-align: justify; ">
<li><b>Powers and functions of the Authority</b>: The Bill lays down a number of functions of the Authority one being to monitor technological developments and commercial practices that may affect protection of personal data.</li>
</ul>
<p style="text-align: justify; ">Presumably, this will include monitoring technological developments in the field of artificial intelligence.<a href="#_ftn19" name="_ftnref19"><sup><sup>[19]</sup></sup></a></p>
<ul style="text-align: justify; ">
<li><b>Fair and reasonable processing: </b>Requires that any person processing personal data owes a duty to the data principal to process such personal data in a fair and reasonable manner that respects the privacy of the data principal. In its report, the Srikrishna Committee explains that the principle of fair and reasonable processing is meant to address: (1) power asymmetries between data subjects and data fiduciaries, recognizing that data fiduciaries have a responsibility to act in the best interest of the data principal; (2) situations where processing may be legal but not necessarily fair or in the best interest of the data principal; and (3) developing trust between the data principal and the data fiduciary.<a href="#_ftn20" name="_ftnref20"><sup><sup>[20]</sup></sup></a></li>
</ul>
<p style="text-align: justify; ">This is in contrast to the GDPR, which requires processing to simultaneously meet the three conditions of fairness, lawfulness, and transparency.</p>
<ul style="text-align: justify; ">
<li><b>Purpose Limitation: </b>Personal data can only be processed for the purposes specified or any other purpose that the data principal would reasonably expect.</li>
</ul>
<p style="text-align: justify; ">As a note, the Srikrishna Committee Bill does not include ‘scientific purposes’ as an exception to the principle of purpose limitation as found in the GDPR,<a href="#_ftn21" name="_ftnref21"><sup><sup>[21]</sup></sup></a> and instead creates an exception for research, archiving, or statistical purposes.<a href="#_ftn22" name="_ftnref22"><sup><sup>[22]</sup></sup></a> The DPA has the responsibility of developing codes defining research purposes under the act.<a href="#_ftn23" name="_ftnref23"><sup><sup>[23]</sup></sup></a></p>
<ol style="text-align: justify; ">
<li><b>Security Safeguards:</b> Every data fiduciary must implement appropriate security safeguards including the use of methods such as de-identification and encryption, steps to protect the integrity of personal data, and steps necessary to prevent misuse, unauthorised access to, modification, and disclosure or destruction of personal data.<a href="#_ftn24" name="_ftnref24"><sup><sup>[24]</sup></sup></a></li>
</ol>
<p style="text-align: justify; ">Unlike the GDPR, which explicitly refers to the technique of pseudonymization, the Srikrishna Bill uses the term de-identification. The Srikrishna Report clarifies that this includes techniques like pseudonymization and masking, and further clarifies that, because of the risk of re-identification, de-identified personal data should still receive the same level of protection as personal data. The Bill further gives the DPA the authority to define appropriate levels of anonymization. <a href="#_ftn25" name="_ftnref25"><sup><sup>[25]</sup></sup></a></p>
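<p style="text-align: justify; ">As an illustration of the kind of de-identification technique contemplated here, pseudonymization is often implemented as a keyed hash. The following is a minimal sketch, not drawn from the Bill or Report; the identifier, key, and function names are invented for illustration:</p>

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a stable pseudonym.

    The same (identifier, key) pair always yields the same token, so
    records can still be linked for analysis, but the mapping cannot be
    reversed without the key. Because re-identification is possible if
    the key leaks, de-identified data still warrants protection.
    """
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

# Hypothetical key, stored separately from the pseudonymized dataset.
key = b"data-fiduciary-secret"
token = pseudonymize("customer-id:12345", key)
```

<p style="text-align: justify; ">Because the token is deterministic for a given key, analytics across records remain possible, which is the property that distinguishes pseudonymization from full anonymization.</p>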
<h3 style="text-align: justify; ">Technical perspectives of Privacy and AI</h3>
<p style="text-align: justify; ">There is an emerging body of work looking at solutions to the dilemma of maintaining privacy while employing artificial intelligence, and at ways in which artificial intelligence can support and strengthen privacy. For example, there are AI driven platforms that leverage the technology to help a business meet regulatory compliance with data protection laws<a href="#_ftn26" name="_ftnref26"><sup><sup>[26]</sup></sup></a>, as well as research into AI privacy enhancing technologies.<a href="#_ftn27" name="_ftnref27"><sup><sup>[27]</sup></sup></a> Standards setting bodies like IEEE have undertaken work on the ethical considerations in the collection and use of personal data when designing, developing, and/or deploying AI through the standard ‘Ethically Aligned Design’.<a href="#_ftn28" name="_ftnref28"><sup><sup>[28]</sup></sup></a> In the article Artificial Intelligence and Privacy, Datatilsynet - the Norwegian Data Protection Authority<a href="#_ftn29" name="_ftnref29"><sup><sup>[29]</sup></sup></a> - breaks such methods into three categories:</p>
<ol style="text-align: justify; ">
<li>Techniques for reducing the need for large amounts of training data: Such techniques can include</li>
<ol>
<li><b>Generative adversarial networks (GANs):</b> GANs are used to create synthetic data and can address the need for large volumes of labelled data without relying on real data containing personal information. GANs could potentially be useful from a research and development perspective in sectors like healthcare, where most data would qualify as sensitive personal data.</li>
<li><b>Federated Learning:</b> Federated learning allows models to be trained and improved on data from a large pool of users without directly using user data. This is achieved by distributing a centralized model to client devices, where it is improved on local data; only the resulting changes are shared back with the centralized server. An average of the changes from multiple individual client units becomes the basis for improving the centralized model.</li>
<li><b>Matrix Capsules</b>: Proposed by Google researcher Geoffrey Hinton, matrix capsules improve the accuracy of existing neural networks while requiring less data.<a href="#_ftn30" name="_ftnref30"><sup><sup>[30]</sup></sup></a></li>
</ol>
<li>Techniques that uphold data protection without reducing the basic data set</li>
<ol>
<li><b>Differential Privacy</b>: Differential privacy intentionally adds ‘noise’ to data when it is accessed. This allows personal data to be accessed without revealing identifying information.</li>
<li><b>Homomorphic Encryption:</b> Homomorphic encryption allows for the processing of data while it is still encrypted. This addresses the need to access and use large amounts of personal data for multiple purposes.</li>
<li><b>Transfer Learning</b>: Instead of building a new model from scratch, transfer learning builds upon existing models, applying them to new, related purposes or tasks. This has the potential to reduce the amount of training data needed. </li>
<li><b>RAIRD</b>: Developed by Statistics Norway and the Norwegian Centre for Research Data, RAIRD is a national research infrastructure that allows for access to large amounts of statistical data for research while managing statistical confidentiality. This is achieved by allowing researchers access to metadata. The metadata is used to build analyses which are then run against detailed data without giving access to actual data.<a href="#_ftn31" name="_ftnref31"><sup><sup>[31]</sup></sup></a></li>
</ol>
<li>Techniques to move beyond opaque algorithms</li>
<ol>
<li><b>Explainable AI (XAI): </b>DARPA, in collaboration with Oregon State University, is researching how to create explainable models and explanation interfaces while maintaining a high level of learning performance, in order to enable individuals to interact with, trust, and manage artificial intelligence.<a href="#_ftn32" name="_ftnref32"><sup><sup>[32]</sup></sup></a> DARPA identifies a number of entities working on different models and interfaces for analytics and autonomy AI.<a href="#_ftn33" name="_ftnref33"><sup><sup>[33]</sup></sup></a></li>
<li><b>Local Interpretable Model-Agnostic Explanations (LIME)</b>: Developed to enable trust between AI models and humans by generating explanations that highlight the aspects of an input that were important to the model's decision, thus providing insight into the rationale behind a model.<a href="#_ftn34" name="_ftnref34"><sup><sup>[34]</sup></sup></a></li>
</ol> </ol>
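<p style="text-align: justify; ">The federated learning approach described above can be sketched in a few lines. This is a minimal illustration of federated averaging under simplified assumptions (a one-parameter linear model, plain gradient descent, no secure aggregation); it is not drawn from any specific system, and the client datasets are invented:</p>

```python
# Minimal federated averaging sketch: each client improves a shared
# model on its own data; only the parameter updates leave the device.

def local_update(w, xs, ys, lr=0.01):
    """One gradient-descent step on a client's private data (model y = w*x)."""
    grad = 2 * sum(x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
    return -lr * grad  # the delta is all that is sent to the server

# Hypothetical private datasets held on two client devices (true slope = 2).
clients = [
    ([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]),
    ([4.0, 5.0],      [8.0, 10.0]),
]

w = 0.0  # centralized model parameter
for _ in range(50):
    deltas = [local_update(w, xs, ys) for xs, ys in clients]
    w += sum(deltas) / len(deltas)  # server averages the client updates
```

<p style="text-align: justify; ">After a few rounds the shared parameter approaches the true slope even though the raw data points never leave the clients.</p>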
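<p style="text-align: justify; ">Differential privacy, listed above, can be illustrated with the classic Laplace mechanism: noise calibrated to the query's sensitivity and a privacy budget epsilon is added to each released statistic. A minimal sketch; the dataset and parameter values are invented for illustration:</p>

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw one sample from a Laplace(0, scale) distribution (inverse CDF)."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon, rng):
    """Release a count with differential privacy.

    A counting query has sensitivity 1 (one person changes the count by
    at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
ages = [23, 35, 41, 29, 52, 61, 38]  # invented records
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng)
```

<p style="text-align: justify; ">Smaller values of epsilon give stronger privacy but noisier answers; the analyst sees a useful aggregate while any single individual's presence is masked.</p>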
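<p style="text-align: justify; ">The homomorphic encryption idea can be demonstrated with a toy version of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can compute on data it cannot read. The primes below are far too small to be secure; this is a didactic sketch only:</p>

```python
import math

# Toy Paillier keypair with tiny primes (insecure; illustration only).
p, q = 17, 19
n = p * q                                        # public modulus
n2 = n * n
g = n + 1                                        # standard generator choice
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # private key lambda

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # private decryption helper

def encrypt(m, r):
    """Encrypt m < n with randomness r coprime to n."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: the product of ciphertexts decrypts to the sum.
c1 = encrypt(42, 23)
c2 = encrypt(77, 29)
total = decrypt((c1 * c2) % n2)  # recovers 42 + 77 without decrypting c1, c2 individually
```

<p style="text-align: justify; ">Real deployments use key sizes of thousands of bits and vetted libraries; the point here is only the structure of the homomorphic property.</p>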
<h3 style="text-align: justify; ">Public Sector use of AI and Privacy</h3>
<p style="text-align: justify; ">The role of AI in public sector decision making has been gradually growing globally across sectors such as law enforcement, education, transportation, judicial decision making and healthcare. In India too, use of automated processing in electronic governance under the Digital India mission, domestic law enforcement agencies monitoring social media content and educational schemes is being discussed and gradually implemented. Much like the potential applications of AI across sub-sectors, the nature of regulatory issues are also diverse.</p>
<p style="text-align: justify; ">Aside from the accountability framework discussed in the Srikrishna Committee report, the Puttaswamy judgment also provides a basis for governance of AI with respect to its privacy concerns, in limited contexts. The sources of the right to privacy as articulated in the Puttaswamy judgments included ‘personal liberty’ under Article 21 of the Constitution. In order to fully appreciate how constitutional principles could apply to automated processing in India, we need to look closely at the origins of privacy under liberty. In the famous case of <i>AK Gopalan</i> there is a protracted discussion on the contents of the rights under Article 21. Even amongst the majority, opinion was divided: while Sastri J. and Mukherjea J. took the restrictive view, limiting the protections to bodily restraint and detention, Kania J. and Das J. took a broader view that included the right to sleep, play, etc. Through <i>RC Cooper</i><a href="#_ftn35" name="_ftnref35"><sup><sup>[35]</sup></sup></a> and <i>Maneka</i><a href="#_ftn36" name="_ftnref36"><sup><sup>[36]</sup></sup></a>, the Supreme Court took steps to reverse the majority opinion in <i>Gopalan</i>, and it was established that the freedoms and rights in Part III could be addressed by more than one provision. The expansion of ‘personal liberty’ began in <i>Kharak Singh</i>, where unjustified interference with a person’s right to live in his house was held to be violative of Article 21. The reasoning in <i>Kharak Singh</i> draws heavily from <i>Munn</i> v. <i>Illinois</i><a href="#_ftn37" name="_ftnref37"><sup><sup>[37]</sup></sup></a>, which held life to be “more than mere animal existence.” Curiously, after taking this position, <i>Kharak Singh</i> fails to recognise a fundamental right to privacy (analogous to the Fourth Amendment protection in the US) under Article 21. 
The position taken in <i>Kharak Singh</i> was to extrapolate to ‘personal liberty’ the same method of wide interpretation as was accorded to ‘life’. <i>Maneka</i>, which evolved the test for unenumerated rights within Part III, says that the claimed right must be an integral part of, or of the same nature as, the named right: the claimed right must be ‘in reality and substance nothing but an instance of the exercise of the named fundamental right’. The clear reading of privacy into ‘personal liberty’ in this judgment is effectively a correction of the inherent inconsistencies in the positions taken by the majority in <i>Kharak Singh</i>.</p>
<p style="text-align: justify; ">The other significant change in constitutional interpretation that occurred in Maneka was with respect to the phrase ‘procedure established by law’ in Article 21. In Gopalan, the majority held that the phrase ‘procedure established by law’ does not mean procedural due process or natural justice. What this meant was that, once a ‘procedure’ was ‘established by law’, Article 21 could not be said to have been infringed. This position was entirely reversed in Maneka. The ratio in Maneka said that ‘procedure established by law’ must be fair, just and reasonable, and cannot be arbitrary and fanciful. Therefore, any infringement of the right to privacy must be through a law which follows the principles of natural justice, and is not arbitrary or unfair. It follows that any instances of automated processing for public functioning by state actors or others, must meet this standard of ‘fair, just and reasonable’.</p>
<p style="text-align: justify; ">While there is a lot of focus internationally on what ethical AI must be, it is important that, when we consider use of AI by the state, we pay heed to the existing constitutional principles which determine how AI must be evaluated against these standards. These principles, however, extend only to limited circumstances, for the protections under Article 21 are not horizontal in nature but applicable only against the state. Whether a party is the state or not is a question that has been considered several times by the Supreme Court and must be determined by functional tests. In our submission to the Justice Srikrishna Committee, we clearly recommended that, where automated decision making is used for the discharge of public functions, the data protection law must state that such actions are subject to the constitutional standards, are ‘just, fair and reasonable’, and satisfy the tests for both procedural and substantive due process. To a limited extent, the committee seems to have picked up the standards of ‘fair’ and ‘reasonable’ and made them applicable to all forms of processing, whether public or private. It is as yet unclear whether fairness and reasonableness as inserted in the Bill would draw from the constitutional standard under Article 21. The report makes a reference to the twin principles of acting in a manner that upholds the best interest of the privacy of the individual, and processing within the reasonable expectations of the individual, which do not seem to cover the fullest essence of the legal standard under Article 21.</p>
<h3 style="text-align: justify; ">Conclusion</h3>
<p style="text-align: justify; ">The Srikrishna Committee Bill attempts to create an accountability framework for the use of emerging technologies, including AI, that places the responsibility on companies to prevent harm. Though not as robust as those found in the GDPR, protections have been enabled through requirements such as fair and reasonable processing, ensuring data quality, and implementing principles of privacy by design. At the same time, the Srikrishna Bill does not include provisions that can begin to address the consumer-facing ‘black box’ of AI by ensuring that individuals have information about the potential impact of decisions taken by automated means. In contrast, the GDPR has already taken important steps to tackle this by requiring companies to explain the logic and potential impact of such decisions.</p>
<p style="text-align: justify; ">Most importantly, the Bill gives the Data Protection Authority the necessary tools to hold companies accountable for the use of AI through the requirement of data protection audits. If the Bill is enacted, it remains to be seen how these audits and the principle of privacy by design will be implemented and enforced in the context of companies using AI. Though the Bill creates a Data Protection Authority consisting of members who have significant experience in data protection, information technology, data management, data science, cyber and internet laws, and related subjects, these requirements could be further strengthened by including members with backgrounds in ethics and human rights.</p>
<p style="text-align: justify; ">One of the responsibilities of the DPA under the Srikrishna Bill will be to monitor technological developments and commercial practices that may affect the protection of personal data, and to promote measures and undertake research for innovation in the field of personal data protection. If the Bill is enacted, we hope that AI, and solutions for enhancing privacy in the context of AI such as those described above, will be among the DPA's focus areas. It will also be important to see how the DPA develops impact assessments related to AI and what tools associated with the principle of privacy by design emerge to address AI.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; "><a href="#_ftnref1" name="_ftn1"><sup><sup>[1]</sup></sup></a> https://privacyinternational.org/topics/artificial-intelligence</p>
<p style="text-align: justify; "><a href="#_ftnref2" name="_ftn2"><sup><sup>[2]</sup></sup></a> https://www.wired.com/story/our-machines-now-have-knowledge-well-never-understand/</p>
<p style="text-align: justify; "><a href="#_ftnref3" name="_ftn3"><sup><sup>[3]</sup></sup></a> https://iapp.org/news/a/ai-offers-opportunity-to-increase-privacy-for-users/</p>
<p style="text-align: justify; "><a href="#_ftnref4" name="_ftn4"><sup><sup>[4]</sup></sup></a> https://iapp.org/media/pdf/resource_center/GDPR_Study_Maldoff.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref5" name="_ftn5"><sup><sup>[5]</sup></sup></a> https://gdpr-info.eu/art-22-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref6" name="_ftn6"><sup><sup>[6]</sup></sup></a> https://gdpr-info.eu/art-14-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref7" name="_ftn7"><sup><sup>[7]</sup></sup></a> https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref8" name="_ftn8"><sup><sup>[8]</sup></sup></a> https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref9" name="_ftn9"><sup><sup>[9]</sup></sup></a> https://gdpr-info.eu/art-25-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref10" name="_ftn10"><sup><sup>[10]</sup></sup></a> https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/</p>
<p style="text-align: justify; "><a href="#_ftnref11" name="_ftn11"><sup><sup>[11]</sup></sup></a> https://gdpr-info.eu/art-21-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref12" name="_ftn12"><sup><sup>[12]</sup></sup></a> https://gdpr-info.eu/art-22-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref13" name="_ftn13"><sup><sup>[13]</sup></sup></a> https://gdpr-info.eu/art-14-gdpr/</p>
<p style="text-align: justify; "><a href="#_ftnref14" name="_ftn14"><sup><sup>[14]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter II section 9</p>
<p style="text-align: justify; "><a href="#_ftnref15" name="_ftn15"><sup><sup>[15]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VII section 29</p>
<p style="text-align: justify; "><a href="#_ftnref16" name="_ftn16"><sup><sup>[16]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VII section 33</p>
<p style="text-align: justify; "><a href="#_ftnref17" name="_ftn17"><sup><sup>[17]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VII section 38</p>
<p style="text-align: justify; "><a href="#_ftnref18" name="_ftn18"><sup><sup>[18]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VIII section 40</p>
<p style="text-align: justify; "><a href="#_ftnref19" name="_ftn19"><sup><sup>[19]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter X section 60</p>
<p style="text-align: justify; "><a href="#_ftnref20" name="_ftn20"><sup><sup>[20]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter II section 4</p>
<p style="text-align: justify; "><a href="#_ftnref21" name="_ftn21"><sup><sup>[21]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter II section 5</p>
<p style="text-align: justify; "><a href="#_ftnref22" name="_ftn22"><sup><sup>[22]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter IX Section 45</p>
<p style="text-align: justify; "><a href="#_ftnref23" name="_ftn23"><sup><sup>[23]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter XIV section 97</p>
<p style="text-align: justify; "><a href="#_ftnref24" name="_ftn24"><sup><sup>[24]</sup></sup></a> Draft Data Protection Bill 2018 - Chapter VII section 31</p>
<p style="text-align: justify; "><a href="#_ftnref25" name="_ftn25"><sup><sup>[25]</sup></sup></a> Srikrishna Committee Report on Data Protection pg. 36 and 37. Available at: http://www.prsindia.org/uploads/media/Data%20Protection/Committee%20Report%20on%20Draft%20Personal%20Data%20Protection%20Bill,%202018.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref26" name="_ftn26"><sup><sup>[26]</sup></sup></a> https://www.ciosummits.com/Online_Assets_DocAuthority_Whitepaper_-_Guide_to_Intelligent_GDPR_Compliance.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref27" name="_ftn27"><sup><sup>[27]</sup></sup></a> https://jolt.law.harvard.edu/assets/articlePDFs/v31/31HarvJLTech217.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref28" name="_ftn28"><sup><sup>[28]</sup></sup></a> https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_personal_data_v2.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref29" name="_ftn29"><sup><sup>[29]</sup></sup></a> https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref30" name="_ftn30"><sup><sup>[30]</sup></sup></a> https://www.artificial-intelligence.blog/news/capsule-networks</p>
<p style="text-align: justify; "><a href="#_ftnref31" name="_ftn31"><sup><sup>[31]</sup></sup></a> http://raird.no/about/factsheet.html</p>
<p style="text-align: justify; "><a href="#_ftnref32" name="_ftn32"><sup><sup>[32]</sup></sup></a> https://www.darpa.mil/attachments/XAIProgramUpdate.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref33" name="_ftn33"><sup><sup>[33]</sup></sup></a> https://www.darpa.mil/attachments/XAIProgramUpdate.pdf</p>
<p style="text-align: justify; "><a href="#_ftnref34" name="_ftn34"><sup><sup>[34]</sup></sup></a> https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime</p>
<p style="text-align: justify; "><a href="#_ftnref35" name="_ftn35"><sup><sup>[35]</sup></sup></a> <i>R C Cooper</i> v. <i>Union of India</i>, 1970 SCR (3) 530.</p>
<p style="text-align: justify; "><a href="#_ftnref36" name="_ftn36"><sup><sup>[36]</sup></sup></a> <i>Maneka Gandhi</i> v. <i>Union of India</i>, 1978 SCR (2) 621.</p>
<p style="text-align: justify; "><a href="#_ftnref37" name="_ftn37"><sup><sup>[37]</sup></sup></a> 94 US 113 (1877).</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india'>https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india</a>
</p>
No publisher | Amber Sinha and Elonnai Hickok | Internet Governance | Artificial Intelligence | Privacy | 2018-09-03T13:29:12Z | Blog Entry
UNESCAP Google AI Meeting
https://cis-india.org/internet-governance/news/unescap-google-ai-meeting
<b>Arindrajit was a panelist at an event on AI in public service delivery hosted by UNESCAP Bangkok on August 29, 2018. The event was co-organized by the Economic and Social Commission for Asia and the Pacific and Google.</b>
<p style="text-align: justify; ">The discussion centered around two questions: (1) Is AI different from other technological advancements in the past? and (2) What are the recommendations for policy-makers to enhance AI in public service delivery? The other panelists were Dr. Urs Gasser (Berkman), Vidushi Marda (Article 19), Malavika Jayaram (Digital Asia Hub), and Jake Lucchi (Google). The panel was a platform to discuss some of the findings from our case studies on healthcare and agriculture, on which we will receive comments and which will be published in November.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/unescap-google-ai-meeting'>https://cis-india.org/internet-governance/news/unescap-google-ai-meeting</a>
</p>
No publisher | Admin | Internet Governance | Artificial Intelligence | Privacy | 2018-09-20T15:47:42Z | News Item
UNDP joins Tech Giants in Partnership on AI
https://cis-india.org/internet-governance/news/undp-august-1-2018-undp-joins-tech-giants-in-partnership-on-ai
<b>UNDP joins the Partnership on Artificial Intelligence (AI), a consortium of companies, academics, and NGOs working to ensure that AI is developed in a safe, ethical, and transparent manner. Founded in 2016 by the tech giants Amazon, DeepMind/Google, Facebook, IBM, and Microsoft, it has since been joined by industry leaders such as Accenture, Intel, the Oxford Internet Institute - University of Oxford, and eBay, as well as non-profit organizations such as UNICEF, Human Rights Watch, and many more.</b>
<p style="text-align: justify; ">This was published by <a class="external-link" href="http://www.undp.org/content/undp/en/home/news-centre/news/2018/undp-joins-tech-giants-in-partnership-on-ai.html">UNDP</a> on its website on August 1, 2018.</p>
<hr />
<p style="text-align: justify; ">Through the partnership, UNDP’s Innovation Facility will work with partners and communities to responsibly test and scale the use of AI to achieve the Sustainable Development Goals. By harnessing the power of data, we can inform risk, policy, and program evaluation; we can also use robotics and the Internet of Things (IoT) to collect data and reach those previously deemed unreachable, to leave no one behind.</p>
<p style="text-align: justify; ">UNDP’s AI portfolio is growing rapidly. Drones and remote sensing are used to improve data collection and inform decisions: in the Maldives for disaster preparedness, and in Uganda to engage refugee and host communities in jointly developing infrastructure. We partnered with IBM to automate <a href="http://www.undp.org/content/undp/en/home/blog/2018/ai-and-the-future-of-our-work.html">UNDP’s Rapid Integrated Assessment</a>, aligning national development plans and sectoral strategies with the Sustainable Development Goals’ 169 targets; and with UNEP, UNDP has launched the <a href="http://www.undp.org/content/undp/en/home/news-centre/news/2018/un-biodiversity-lab-launched-to-revolutionize-biodiversity-plann.html">UN Biodiversity Lab</a>, powered by MapX. The spatial data platform will help countries support conservation efforts and accelerate delivery of the 2030 Agenda.</p>
<p style="text-align: justify; ">In line with UNDP’s Strategic Plan 2018-2021, innovation plays a central role in fulfilling the organization’s mission and achieving the Sustainable Development Goals. Benjamin Kumpf, UNDP’s Innovation Facility Lead, states: “advances in robotics and AI have the potential to radically redefine human development pathways. The path to such redefinitions entails concrete AI experiments to increase the effectiveness of our work as well as norm-setting: we have to think beyond guidelines for ethical AI to designing accountability frameworks.”</p>
<p style="text-align: justify; ">The Partnership on AI aims to advance public understanding of AI, formulate best practices, and serve as an open platform for discussion and engagement about AI and its influences on people and society.</p>
<p style="text-align: justify; "><b>Full list of partners</b></p>
<p style="text-align: justify; ">Amazon, Apple, DeepMind, Facebook, Google, IBM, Microsoft, AAAI, ACLU, Accenture, Affectiva, AI Forum New Zealand, AI Now Institute, The Allen Institute for Artificial Intelligence (AI2), Amnesty International, Article 19, Association for Computing Machinery, Center for Democracy & Technology (CDT), Center for Human-Compatible Artificial Intelligence, Center for Information Technology Policy at Princeton University, Centre for Internet and Society, India (CIS), Leverhulme Centre for the Future of Intelligence (CFI), Cogitai, Data & Society Research Institute, Digital Asia Hub, Doteveryone, eBay, Element AI, Electronic Frontier Foundation (EFF), Fraunhofer IAO, The Future of Humanity Institute, Future of Life Institute, The Future of Privacy Forum, The Hastings Center, Hong Kong University of Science and Technology Department of Electronic & Computer Engineering, Human Rights Watch, Intel, Markkula Center for Applied Ethics at Santa Clara University, McKinsey & Company, NVIDIA, Omidyar Network, OpenAI, Oxford Internet Institute - University of Oxford, Salesforce, SAP, Sony, Tufts University HRI Lab, UCL Engineering, UNDP, UNICEF, University of Washington Tech Policy Lab, Upturn, XPRIZE, Zalando</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/undp-august-1-2018-undp-joins-tech-giants-in-partnership-on-ai'>https://cis-india.org/internet-governance/news/undp-august-1-2018-undp-joins-tech-giants-in-partnership-on-ai</a>
</p>
No publisher | Admin | Internet Governance | Artificial Intelligence | 2018-08-13T15:51:48Z | News Item
Ethical Data Design Practices in the AI (Artificial Intelligence) Age
https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age
<b>Shweta Mohandas was a panelist at a discussion on Ethical Data Design Practices in the AI (Artificial Intelligence) Age, organised by Startup Grind, Bangalore, on July 28, 2018 at NUMA Bangalore.</b>
<h2>Agenda</h2>
<p><b>Ethical Data Design Practices in the AI Age</b></p>
<p dir="ltr" style="text-align: justify; ">The panel discussion is intended to explore the challenges we face when designing the user experiences of the complex behavioral agents that increasingly run our lives.</p>
<p dir="ltr">Discussion centred around how to:</p>
<ul>
<li>Understand current thinking by the AI community on ethics and morality in computing and the challenges it presents. </li>
<li>Explore examples of the ethical choices that products make now and will make in the near future.</li>
<li>Learn how designers might approach designing experiences that face moral dilemmas.</li>
</ul>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age'>https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age</a>
</p>
No publisher | Admin | Internet Governance | Artificial Intelligence | Privacy | 2018-08-01T23:14:21Z | News Item
The rise of AI in Indian healthcare industry: An innovative asset to the rescue
https://cis-india.org/internet-governance/news/media-india-group-june-27-2018-binita-punwani-rise-of-ai-in-indian-healthcare-industry
<b>The use of Artificial Intelligence (AI) is rapidly increasing with the growth of start-ups and large Information and Communications Technology (ICT) companies that offer AI healthcare solutions for healthcare challenges in India.</b>
<p class="clearfix" style="text-align: justify; ">The blog post was published by <a class="external-link" href="https://mediaindia.eu/digital/the-rise-of-ai-in-indian-healthcare-industry/">Media India Group</a> on June 27, 2018. CIS research was quoted.</p>
<hr />
<p class="clearfix" style="text-align: justify; ">There is an uneven ratio of skilled doctors to patients in India. According to the Indian Journal of Public Health (2017 edition), India had 4.8 practicing doctors per 10,000 people. This is expected to grow to 6.9 per 10,000 by 2030, yet the minimum doctor-to-patient ratio recommended by the World Health Organisation (WHO) is 1:1000. AI can help tackle challenges such as this uneven ratio by making doctors more effective at their jobs, extending high-quality healthcare to rural areas, and helping train doctors and nurses in complex procedures.</p>
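<p class="clearfix" style="text-align: justify; ">As a quick sanity check, the per-10,000 rates quoted above can be converted into the 1:N form used by the WHO recommendation (a back-of-the-envelope sketch using only the figures in this article):</p>

```python
# Convert "doctors per 10,000 people" into "1 doctor per N people"
# to compare against the WHO-recommended 1:1000 ratio.

def people_per_doctor(per_10k: float) -> float:
    """Population served per doctor, given doctors per 10,000 people."""
    return 10_000 / per_10k

india_2017 = people_per_doctor(4.8)  # about 1 doctor per 2083 people
india_2030 = people_per_doctor(6.9)  # about 1 doctor per 1449 people (projected)
who_minimum = 1000                   # WHO recommendation: 1 doctor per 1000 people

print(f"2017: 1 doctor per {india_2017:.0f} people")
print(f"2030 (projected): 1 doctor per {india_2030:.0f} people")
print(f"WHO minimum: 1 doctor per {who_minimum} people")
```

<p class="clearfix" style="text-align: justify; ">Even at the projected 2030 rate, each doctor would still serve well over a thousand people, above the WHO threshold.</p>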
<p class="clearfix" style="text-align: justify; "><b>How does AI in healthcare function?</b></p>
<p class="clearfix" style="text-align: justify; ">AI in the healthcare sector is a range of technologies that enable machines to sense, comprehend, act, and learn, so that they can carry out administrative and healthcare functions and be used for research and training purposes. Some of the technologies applied in the healthcare sector are natural language processing, intelligent agents, computer vision, machine learning, chatbots, voice recognition, etc. These technologies can be adopted at varying levels across the healthcare ecosystem. Machine learning can be used to merge an individual’s omic (genomic, proteomic, metabolic) data with other data sources to predict the probability of developing a disease, which can then be addressed through timely interventions such as preventative therapy.</p>
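<p class="clearfix" style="text-align: justify; ">The "merge and predict" idea described above can be sketched as a logistic risk score computed over features merged from different data sources. This is a purely illustrative toy: the feature names and weights are invented for the example and are not drawn from any real clinical model.</p>

```python
import math

def disease_risk(features: dict, weights: dict, bias: float) -> float:
    """Logistic-regression-style risk score: sigmoid of a weighted feature sum."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

# Hypothetical merged record: a genomic marker, a metabolic reading,
# and a lifestyle factor, all drawn from different data sources.
patient = {"genomic_marker": 1.0, "metabolic_score": 0.4, "smoker": 1.0}
weights = {"genomic_marker": 1.2, "metabolic_score": 0.8, "smoker": 0.9}  # made up

risk = disease_risk(patient, weights, bias=-2.0)
print(f"Estimated disease probability: {risk:.2f}")
```

<p class="clearfix" style="text-align: justify; ">A real system would learn such weights from training data rather than fix them by hand, but the structure, merging heterogeneous features into a single probability, is the same.</p>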
<p class="clearfix" style="text-align: justify; "><b>AI in the healthcare sector in India</b></p>
<p class="clearfix" style="text-align: justify; ">AI in the healthcare sector in India is still developing but holds significant potential. According to a report published by CIS earlier this year, AI could help add USD 957 billion to the Indian economy by 2035. Of the USD 5.5 billion raised by global digital healthcare companies in the July-September 2017 quarter, at least 16 Indian healthcare IT companies received funding, the report said. State governments are also providing support to AI start-ups.</p>
<p class="clearfix" style="text-align: justify; ">AI is capable of addressing various healthcare challenges in India. The technology is proving beneficial in diagnostic procedures, the monitoring of chronic conditions, robot-assisted surgery, drug discovery, etc. Among the several companies exploring uses of AI in healthcare, Microsoft is taking a major initiative along with Apollo and other hospitals to expand its use in segments like cardiology and eye care, and for diseases like tuberculosis and HIV.</p>
<p class="clearfix" style="text-align: justify; ">Healthcare start-ups, in particular, are increasingly adopting Artificial Intelligence.</p>
<p class="clearfix" style="text-align: justify; ">A list of six healthcare start-ups that are using Artificial Intelligence in India:</p>
<ol style="text-align: justify; ">
<li>Niramai, a Bengaluru-based start-up founded in the year 2016, is using AI for pain-free breast cancer screening.</li>
<li>MUrgency, a Mumbai-based healthcare mobile application, helps connect people in need of emergency medical responses with qualified medical, safety, rescue and assistance professionals.</li>
<li>Advancells, a Noida-based start-up, provides stem cell therapy (also known as regenerative therapy), which has large potential in the field of organ transplantation.</li>
<li>Portea, a Bengaluru-based start-up, offers home visits from doctors, nurses, physiotherapists and technicians. Patients who are unable to visit hospitals can receive assistance from doctors and medical professionals through remote diagnostic and monitoring equipment and point-of-care devices.</li>
<li>AddressHealth, a Bengaluru-based start-up, provides primary pediatric healthcare services to school children, who are screened for hearing, vision, dental health and anthropometry, alongside a medical consultation.</li>
<li>LiveHealth, a Pune-based start-up, works as a management information system (MIS) for healthcare providers, helping collect samples, manage patient records, and generate diagnostic reports.</li>
</ol>
<p class="clearfix" style="text-align: justify; ">Artificial Intelligence, the next generation of innovation, will act as an “invisible hand” in revolutionising the healthcare sector, which is expected to grow in India to USD 372 billion by 2022.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/media-india-group-june-27-2018-binita-punwani-rise-of-ai-in-indian-healthcare-industry'>https://cis-india.org/internet-governance/news/media-india-group-june-27-2018-binita-punwani-rise-of-ai-in-indian-healthcare-industry</a>
</p>
No publisher | Admin | Internet Governance | Artificial Intelligence | 2018-08-06T02:40:50Z | News Item
The AI Task Force Report - The first steps towards India’s AI framework
https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework
<b>The Task Force on Artificial Intelligence was established by the Ministry of Commerce and Industry to leverage AI for economic benefits, and provide policy recommendations on the deployment of AI for India.</b>
<p style="text-align: justify; ">The blog post was edited by Swagam Dasgupta. <a class="external-link" href="http://cis-india.org/internet-governance/files/ai-task-force-report.pdf">Download <strong>PDF</strong> here</a></p>
<hr />
<p style="text-align: justify; ">The Task Force’s Report, released on March 21, 2018, is a result of the combined expertise of members from different sectors<a name="_ftnref1"></a> and examines how AI will benefit India. It sheds light on the Task Force’s perception of AI, the sectors in which AI can be leveraged in India, the challenges endemic to India, and certain ethical considerations. It concludes with a set of policy recommendations for the government to leverage AI for the next five years. While acknowledging AI as a social and economic problem solver,<a name="_ftnref2"></a> the Report attempts to answer three policy questions:</p>
<ul>
<li>What are the areas where government should play a role?</li>
<li>How can AI improve quality of life and solve problems at scale for Indian citizens?</li>
<li>What are the sectors that can generate employment and growth by the use of AI technology?</li>
</ul>
<p style="text-align: justify; ">This blog will look at how the Task Force answered these three policy questions. In doing so, it gives an overview of salient aspects and reflects on the strengths and weaknesses of the Report.</p>
<h3>Sectors of Relevance and Challenges</h3>
<p style="text-align: justify; ">In order to navigate the outlined questions, the Report looks at ten sectors that it refers to as ‘domains of relevance to India’. Furthermore, it examines the use of AI, along with its major challenges and possible solutions, for each sector. These sectors are: Manufacturing, FinTech, Agriculture, Healthcare, Technology for the Differently-abled, National Security, Environment, Public Utility Services, Retail and Customer Relationship, and Education.<a name="_ftnref3"></a> While these ten domains are part of the 16 domains of focus listed on the AITF’s web page,<a name="_ftnref4"></a> it would have been useful to know the basis on which they were identified. A particular strength of the identified sectors is the consideration of technology for the differently abled, as well as the recognition of the development of AI systems for spoken and sign languages in the Indian context.<a name="_ftnref5"></a></p>
<p style="text-align: justify; ">Some of the problems endemic to India that were recognized include infrastructural barriers; managing scale and innovation; and the collection, validation, and distribution of data.<a name="_ftnref6"></a> The Task Force also noted the lack of consumer awareness and the inability of technology providers to explain benefits to end users as further challenges.<a name="_ftnref7"></a> The Task Force, by putting the onus on the individual, seems to hint that the impediment to the uptake of technology is individuals' inability to understand its benefits, rather than aspects such as poor design, opacity, or misuse of data and insights. Furthermore, although the Report recognizes the challenges associated with data in India and highlights the importance of the quality and quantity of data, it overlooks the importance of data curation in creating reliable AI systems.<a name="_ftnref8"></a></p>
<p style="text-align: justify; ">Although the Report examines challenges to AI in each sector, it does not cover all the challenges that need to be addressed. For example, it fails to acknowledge the lack of appropriate certification systems for AI-driven health systems and technologies.<a name="_ftnref9"></a> In the manufacturing sector, the Report fails to highlight contextual challenges associated with the use of AI, such as those that distinguish the deployment of autonomous vehicles from the use of industrial robots.<a name="_ftnref10"></a></p>
<p style="text-align: justify; ">On the use of AI in retail, the Report, while examining consumer data and the policies regulating it, identified issues relating to definitions, discrimination, data breaches, digital products, safety awareness, and reporting standards.<a name="_ftnref11"></a> In this, the Report is limited in its understanding of which categories of data can lead to discrimination, and it restricts mechanisms for transparency and accountability to data breaches. The Report could also have been more forward looking in its position on security, including security by design and security by default. Furthermore, these issues were noted only in the context of the retail sector and ideally should have been discussed across all sectors.</p>
<p style="text-align: justify; ">The challenges for utilizing AI for national security could have been examined beyond cost and capacity to include associated ethical and legal challenges such as the need for legal backing. The use of AI in national security demands clear accountability and oversight as it is a ground for legitimate state interference with fundamental rights such as privacy and freedom of expression. As such, there is a need for human rights impact assessments, as well as a need for such uses to be aligned with international human rights norms. Government initiatives that allow country wide surveillance and AI decisions based on such data should ideally be implemented only after a comprehensive privacy law is in place and India’s surveillance regime has been revisited.<a name="_ftnref12"></a></p>
<p style="text-align: justify; ">Recognizing the potential of AI for the benefit of the differently abled is one of the key takeaways from this section of the Report. Furthermore, it also brings in the need for AI inclusivity. AI-driven natural language generation and translation systems have the potential to help the large number of youth who are disabled or deprived.<a name="_ftnref13"></a> AI could therefore have a large positive impact through inclusive growth and empowerment.</p>
<p style="text-align: justify; ">Although the Report examines each of the ten domains in an attempt to provide an insight into the role the government can play, there seems to be a lack of clarity on the role that each department will play, and is playing, with respect to AI. Even the section which lays down the relevant ministries for each of the ten domains fails to include key ministries and departments. For example, the Report does not identify the Ministry of Education, nor does it list the Ministry of Law for national security. The Report could also have identified the government departments that would be responsible for regulation and standardization, such as the Medical Council of India (healthcare), the CII (manufacturing and retail), and the RBI (FinTech). The Report also does not recognize other developments around AI emerging out of the government. For example, the Draft National Digital Communications Policy (published on May 1, 2018) seeks to empower the Department of Telecommunications to provide a roadmap for AI and robotics.<a name="_ftnref14"></a> Along similar lines, the Department of Defence Production created a task force earlier this year to study the use of AI to accelerate military technology and economic growth.<a name="_ftnref15"></a> The government should look at building a cohesive government body for AI, or clearly delineating the role of each ministry, in order to ensure harmonization going forward.</p>
<h3>Areas in need of Government Intervention</h3>
<p style="text-align: justify; ">The Report also lists the grand challenges where government intervention is required. These include data collection and management, and the need for widespread expertise contributing to research, innovation, and response. However, while highlighting the need for AI experts from diverse backgrounds, it fails to include experts from law and policy in the discussion.<a name="_ftnref16"></a> And while identifying manufacturing, agriculture, healthcare, and public utilities as areas where government intervention is needed, the Report does not treat national security as anything more than an important domain for India, and does not identify it as a sector where government intervention is needed.</p>
<p style="text-align: justify; "><strong>Participation in International Forums</strong></p>
<p style="text-align: justify; ">Another relevant concern the Report underscores is India's limited participation in global discussions around AI, whether as researchers, AI developers, or government. The Report states that although Indian universities are making efforts to increase their presence at international AI conferences, they lag behind other nations. On participation by the government, it recommends a regular presence in international AI policy forums, emphasising the need for India's active participation in global conversations around AI and in international rulemaking.</p>
<h3>Key Enablers to AI</h3>
<p style="text-align: justify; ">While analysing the key enablers for AI deployment in India, the Report states that positive societal attitudes will be the driving force behind the proliferation of AI.<a name="_ftnref17"></a> However, relying on positive societal attitudes alone will not increase trust in AI; steps such as making the algorithms used by public bodies public and enacting a data protection law will be important in enabling trust, beyond merely highlighting success stories.</p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; "><strong>Data and Data Marketplaces</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">While the Report identifies data as a challenge where government intervention is needed, it also points to the Aadhaar ecosystem as an enabler. It states that Aadhaar will help in the proliferation of AI in three ways: one as a creator of jobs as related to the collection and digitization of data, two as a collector of reliable data, and three as a repository of Indian data. However, since the very constitutionality of Aadhaar is yet to be determined by the Supreme Court,<a name="_ftnref18"></a> the task force should have used caution in identifying Aadhaar as a definitive solution. Especially while making statements that the Aadhaar along with the SC judgement has created adequate frameworks to protect consumer data. Additionally, the Task Force should have recognized the various concerns that have been voiced about Aadhaar, particularly in the context of the case before the Supreme Court.<a name="_ftnref19"></a></p>
<p style="text-align: justify; "><span>This section also proposes the creation of a Digital Data Marketplace. A data marketplace needs to be framed carefully so as to not create a situation where privacy becomes a right available to only those who can afford it.</span><a name="_ftnref20"></a><span> It is concerning that the discussion on data protection and privacy in the Report is limited to policies and guidelines for businesses and not centered around the individual.</span></p>
<p style="text-align: justify; "><span><strong>Innovation and Patents</strong></span></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">The Report states that the Indian startups working in the field of AI must be encouraged, and industry collaborations and funding must be taken up as a policy measure. One of the ways in which this could be achieved is by encouraging innovations, and one of the ways to do so is by adding a commercial incentive to it, such as through IP rights. Although the Report calls for a stronger IP regime that protects and incentivises innovation, it remains ambiguous as to which aspect of IP rights — patents, trade secrets and copyrights — need significant changes.<a name="_ftnref21"></a> If the Report is specifically advocating for stronger patent rights in order to match those of China and US, then it shows that the the task force fails to understand the finer aspects of Indian patent law and the history behind India’s stance on patenting. This includes the fact that Indian patent law excludes algorithms from being patented. Indian patent law, by providing a higher threshold for patenting computer related inventions (CRIs), ensures that only truly innovative patents are granted.<a name="_ftnref22"></a> Given the controversies over CRIs that have dotted the Indian patent landscape<a name="_ftnref23"></a>, the task force would have done well to provide more clarity on the ‘how’ and ‘why’ of patenting in this sector, if that is their intent with this suggestion.</p>
<h3><span>Ethical AI framework</span></h3>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; "><strong>Responsible AI</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">In terms of establishing an ethical AI framework, the Task Force suggests measures such as making AI explainable, transparent, and auditable for biases. The Report addresses the fact that currently with the increase in human and AI interaction there is a need to have new standards set for the deployment of AI as well as industrial standards for robots. However, the Report does not go into details of how AI could cause further bias based on various identifiers such as gender and caste, as well as the myriad concerns around privacy and security. This is especially a concern given that the Report envisions widespread use of AI in all major sectors. In this way, the Report looks at data as both a challenge and an enabler, but fails to dedicate time towards explaining the various ethical considerations behind the collection and use of data in the context of privacy, security and surveillance as well as account for unintended consequences. In laying out the ethical considerations associated with AI, the report does not make a distinction between the use of AI by the public sector and private sector. As the government is responsible for ensuring the rights of citizens and holds more power than the citizenry, the public sector needs to be more accountable in their use of AI. This is especially so in cases where AI is proposed to be used for sovereign functions such as national security.</p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; "><strong>Privacy and Data</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">The Report also recognises the significance of the implementation of the Aadhaar Act<a name="_ftnref24"></a>, the privacy judgement<a name="_ftnref25"></a> and the proposed data protection laws<a name="_ftnref26"></a>, on the development and use of AI for India. Yet, the Report does not seem to recognize the importance of a robust and multi-faceted privacy framework as it assumes that the Aadhaar Act and the Supreme Court Judgement on privacy and potential privacy law have already created a basis for safe and secure utilization and sharing of customer data.<a name="_ftnref27"></a> Although the Report has tried to be an expansive examination of various aspects of AI for India, it unfortunately has not looked in depth at the current issues and debates around AI privacy and ethics and makes policy recommendations without appearing to fully reflect on the implementation and potential impact of the same. Similar to the discussion paper by the Niti Aayog,<a name="_ftnref28"></a> this Report does not consider the emerging principles of data protection such as right to explanation and right to opt-out of automated processing, which directly relate to AI.<a name="_ftnref29"></a> Furthermore, there is a lack of discussion on issues such as data minimisation and purpose limitation which some big data and AI proponents argue against.<a name="_ftnref30"></a></p>
<p style="text-align: justify; "><span><strong>Liability</strong></span></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">On the question of liability, the Report only states that specific liability mechanisms need to be worked out for certain categories of machines. The Report does not address the questions of liability that should be applicable to all AI systems, and on whom the duty of care lies, not only in case of robots but also in the case of automated decision making etc. Thus, there is a need for further thinking on mechanisms for determining liability and how these could apply to different types of AI (deep learning models and other machine learning models) and AI systems.</p>
<p style="text-align: justify; "><strong>AI and Employment </strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">On the topic of jobs and employment, the Report states that AI will create more jobs than it takes as a result of an increase in the number of companies and avenues created by AI technologies. Additionally, the Report provides examples of jobs where AI could replace the human (autonomous drivers, industrial robots etc,) but does not go as far as envisioning what jobs could be created directly from this replacement. Though the Report recognizes emerging forms of work such as crowdsourcing platforms like Mturk<a name="_ftnref31"></a>, it fails to examine the impact of such models of work on workers and traditional labour market structures and processes.<a name="_ftnref32"></a> Going forward, it will be important that the government and the private sector undertake the necessary steps to ensure that fair, protected, and fulfilling jobs are created simultaneously with the adoption of AI. This will include revisiting national and organizational skilling programmes, labor laws, social benefit schemes, relevant economic policies, and exploring best practices with respect to the adoption and integration of AI in work.</p>
<p style="text-align: justify; "><strong>Education and Re-skilling</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">The task force emphasised the need for a change in the education curriculum as well as the need to reskill the labour force to ensure an AI ready future. This level of reskilling will be a massive effort, and a thorough review and audit of existing skilling programmes in India is needed before new skilling programmes are established and financed. The Report also clarifies that the statistics used were based on a study on the IT component of the industry, and that a similar study was required to analyse AI’s effect on the automation component.<a name="_ftnref33"></a> Going forward, there is the need for a comprehensive study of the labour intensive sectors and formal and informal sectors to develop evidence based policy responses.</p>
<p style="text-align: justify; "><strong>Policy Recommendations </strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">The Task Force<sub>,</sub> in its policy recommendations, notes that the successful adoption of AI in India will depend on three factors: people, process and technology. However, it does not explain these three factors any further.</p>
<p style="text-align: justify; "><strong>National Artificial Intelligence Mission</strong></p>
<p style="text-align: justify; ">The most significant suggestion made in the Report is for the establishment of the National Artificial Intelligence Mission (N-AIM) — a centralised nodal agency for coordinating and facilitating research, collaboration and providing economic impetuous to AI startups.<a name="_ftnref34"></a> The mission with a budget allocation of Rs 1,200 crore over five years aims, among other things, to look at various ways to encourage AI research and deployment.<a name="_ftnref35"></a> Some of the suggestions include targeting and prototyping AI systems and setting up of a generic AI test bed. These suggestions seems to draw inspiration from other countries such as the US DARPA Challenge<a name="_ftnref36"></a> and Japan’s sandbox for self driving trucks.<a name="_ftnref37"></a> The establishment of N-AIM is a welcome step to encourage both AI research and development on a national scale. The availability of public funds will encourage more AI research and development.<a name="_ftnref38"></a>Additionally, government engagement in AI projects has thus far been fragmented<a name="_ftnref39"></a>and a centralised body will presumably bring about better coordination and harmonization. Some of the initiatives such as Capture the flag competition<a name="_ftnref40"></a> that seeks to centre around the provision for real datasets to catalyze innovation will need to be implemented with appropriate safeguards in place.</p>
<p style="text-align: justify; "><strong>Other recommendations</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">There are other suggestions that are problematic — particularly that of funding “an inter-disciplinary large data integration center in pilot mode to develop an autonomous AI Machine that can work on multiple data streams in real time and provide relevant information and predictions to public across all domains.”<a name="_ftnref41"></a> Before such a project is developed and implemented there are a number of factors where legal clarity is required; a few being: data collection and use, accuracy and quality of the AI system. There is also a need to ensure that bias and discrimination have been accounted for and fairness, responsibility and liability have been defined with consideration that this will be a government driven AI system. Additionally, such systems should be transparent by design and should include redress mechanisms for potential harms that may arise. This can be through the presence of a human in the loop, or the existence of a kill switch. These should be addressed through ethical principles, standards, and regulatory frameworks.</p>
<p style="text-align: justify; ">The recommendations propose establishing operation standards for data storage and privacy, communication standards for autonomous systems, and standards to allow for interoperability between AI based systems. A significant lacuna in this list is the development of safety, accuracy, and quality standards for AI algorithms and systems.</p>
<p style="text-align: justify; ">Similarly, although the proposed public private partnership model for research and startups is a good idea, this initiative should be undertaken only after questions such as the implications of liability, ownership of IP and data, and the exclusion of critical sectors are thought through.</p>
<p style="text-align: justify; ">Furthermore, the suggestion to ‘fund a national level survey on identification of cluster of clean annotated data necessary for building effective AI systems’<a name="_ftnref42"></a> needs to recognize the existing initiatives around open data or use this as a starting place. The Report does not clarify if this survey would involve identifying data.</p>
<p style="text-align: justify; "><strong>Conclusion</strong></p>
<p style="text-align: justify; "><strong> </strong></p>
<p style="text-align: justify; ">The inconspicuous release of the Report as well as the lack of a call for public comments<a name="_ftnref43"></a> results in the fact that the Report does not incorporate or reflect on the sentiments of the public or draw upon the expertise that exists in India on the topic or policies around emerging technologies, which will have a pervasive and wide effect on society. The need for multi stakeholder engagement and input cannot be understated. Nonetheless, the Report of the Task Force is a welcome step towards understanding the movement towards an definitive AI policy. The task force has attempted answering the three policy questions keeping people, process and technology in mind. However, it could have provided greater details about these indices. The Report, which is meant for a wider audience, would have done well to provide greater detail, while also providing clarity on technical terms. On a definitional plane, a list of technologies that the task force perceived as AI for this Report, could have also helped keep it grounded on possible and plausible 5 year recommendations.</p>
<p style="text-align: justify; "><span>Compared to the recent Niti Aayog Discussion Paper</span><a name="_ftnref44"></a><span>, this Report misses out on a detailed explanation on AI and ethics, however, it does spend some considerable amount of time on education and the use of AI for the differently abled. Additionally, the Report’s statement on the democratization of development and equal access as well as assigning ownership and framing transparent rules for usage of the infrastructure is a positive step towards making AI inclusive. Overall, the Report is a progressive step towards laying down India’s path forward in the field of Artificial Intelligence. The emphasis on India’s involvement in International rulemaking gives India an opportunity to be a leader of best practice in international forums by adopting forward looking and human rights respecting practices. Whether India will also become a strong contender in the AI race, with policies favouring the development of a socio-economically beneficial, and ethical-AI backed industries and services is yet to be seen.</span></p>
<p> </p>
<p style="text-align: justify; "><a name="_ftn1"></a><span> The Task Force consists of 18 members in total. Of these, 11 members are from the field of AI technology both research and industry, three from the civil services, one from healthcare research, one with and Intellectual property law background, and two from a finance background. The specializations of the members are not limited to one area as the members have experience or education in various areas relevant to AI. </span><a href="https://www.aitf.org.in/">https://www.aitf.org.in//</a><span> There is a notable lack of members from Civil Society. It may also be noted that only 2 of the 18 members are women</span></p>
<p style="text-align: justify; "><a name="_ftn2"></a> The Report on the Artificial Intelligence Task Force, Pg. 1,<span>http://dipp.nic.in/sites/default/files/Report_of_Task_Force_on_ArtificialIntelligence_20March2018_2.pdf</span></p>
<p style="text-align: justify; "><a name="_ftn3"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn4"></a> The Artificial Intelligence Task Force https://www.aitf.org.in/</p>
<p style="text-align: justify; "><a name="_ftn5"></a> The Report on the Artificial Intelligence Task Force, Pg. 8</p>
<p style="text-align: justify; "><a name="_ftn6"></a> The Report on the Artificial Intelligence Task Force, Pg. 9,10.</p>
<p style="text-align: justify; "><a name="_ftn7"></a> The Report on the Artificial Intelligence Task Force, Pg. 9</p>
<p style="text-align: justify; "><a name="_ftn8"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn9"></a> Artificial Intelligence in the Healthcare Industry in India https://cis-india.org/internet-governance/files/ai-and-healtchare-report</p>
<p style="text-align: justify; "><a name="_ftn10"></a>Artificial Intelligence in the Manufacturing and Services Sector https://cis-india.org/internet-governance/files/AIManufacturingandServices_Report _02.pdf</p>
<p style="text-align: justify; "><a name="_ftn11"></a> The Report on the Artificial Intelligence Task Force, Pg. 21.</p>
<p style="text-align: justify; "><a name="_ftn12"></a> Submission to the Committee of Experts on a Data Protection Framework for India, Centre for Internet and Society https://cis-india.org/internet-governance/files/data-protection-submission</p>
<p style="text-align: justify; "><a name="_ftn13"></a> The Report on the Artificial Intelligence Task Force, Pg. 22</p>
<p style="text-align: justify; "><a name="_ftn14"></a> Draft National Digital Communications Policy-2018, http://www.dot.gov.in/relatedlinks/draft-national-digital-communications-policy-2018</p>
<p style="text-align: justify; "><a name="_ftn15"></a> Task force set up to study AI application in military,https://indianexpress.com/article/technology/tech-news-technology/task-force-set-up-to-study-ai-application-in-military-5049568/</p>
<p style="text-align: justify; "><a name="_ftn16"></a>It is not just technical experts that are needed, ethical, technical, and legal experts as well as domain experts need to be part of the decision making process.</p>
<p style="text-align: justify; "><a name="_ftn17"></a> The Report on the Artificial Intelligence Task Force, Pg. 31</p>
<p style="text-align: justify; "><a name="_ftn18"></a>Constitutional validity of Aadhaar: the arguments in Supreme Court so far, http://www.thehindu.com/news/national/constitutional-validity-of-aadhaar-the-arguments-in-supreme-court-so-far/article22752084.ece</p>
<p style="text-align: justify; "><a name="_ftn19"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn20"></a> CIS Submission to TRAI Consultation on Free Data http://trai.gov.in/Comments_FreeData/Companies_n_Organizations/Center_For_Internet_and_Society.pdf</p>
<p style="text-align: justify; "><a name="_ftn21"></a> The Report on the Artificial Intelligence Task Force, Pg. 30</p>
<p style="text-align: justify; "><a name="_ftn22"></a> Section 3(k) of the patent act describes that a mere mathematical or business method or a computer programme or algorithm cannot be patented.</p>
<p style="text-align: justify; "><a name="_ftn23"></a>Patent Office Reboots CRI Guidelines Yet Again: Removes “novel hardware” Requirement</p>
<p style="text-align: justify; ">https://spicyip.com/2017/07/patent-office-reboots-cri-guidelines-yet-again-removes-novel-hardware-requirement.html</p>
<p style="text-align: justify; "><a name="_ftn24"></a> The Report on the Artificial Intelligence Task Force, Pg. 37</p>
<p style="text-align: justify; "><a name="_ftn25"></a>The Report on the Artificial Intelligence Task Force, Pg. 7</p>
<p style="text-align: justify; "><a name="_ftn26"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn27"></a> The Report on the Artificial Intelligence Task Force, Pg. 8</p>
<p style="text-align: justify; "><a name="_ftn28"></a> National Strategy for Artificial Intelligence: <a href="http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf">http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf</a></p>
<p style="text-align: justify; "><a name="_ftn29"></a> Meaningful information and the right to explanation,Andrew D Selbst Julia Powles, International Data Privacy Law, Volume 7, Issue 4, 1 November 2017, Pages 233–242</p>
<p style="text-align: justify; "><a name="_ftn30"></a> The Principle of Purpose Limitation and Big Data, https://www.researchgate.net/publication/319467399_The_Principle_of_Purpose_Limitation_and_Big_Data</p>
<p style="text-align: justify; "><a name="_ftn31"></a> M-Turk https://www.mturk.com/</p>
<p style="text-align: justify; "><a name="_ftn32"></a> For example a lesser threshold of minimum wages, no job secuirity etc, https://blogs.scientificamerican.com/guilty-planet/httpblogsscientificamericancomguilty-planet20110707the-pros-cons-of-amazon-mechanical-turk-for-scientific-surveys/</p>
<p style="text-align: justify; "><a name="_ftn33"></a> The Report on the Artificial Intelligence Task Force, Pg. 41</p>
<p style="text-align: justify; "><a name="_ftn34"></a> Report of Artificial Intelligence Task Force Pg, 46, 47</p>
<p style="text-align: justify; "><a name="_ftn35"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn36"></a>The DARPAChallenge https://www.darpa.mil/program/darpa-robotics-challenge</p>
<p style="text-align: justify; "><a name="_ftn37"></a>Japan may set regulatory sandboxes to test drones and self driving vehicles http://techwireasia.com/2017/10/japan-may-set-regulatory-sandboxes-test-drones-self-driving-vehicles/</p>
<p style="text-align: justify; "><a name="_ftn38"></a> Mariana Mazzucato in her 2013 book The Entrepreneurial State, argued that it was the government that drives technological innovation. In her book she stated that high-risk discovery and development were made possible by government spending, which the private enterprises capitalised once the difficult work was done.</p>
<p style="text-align: justify; "><a name="_ftn39"></a><a href="https://tech.economictimes.indiatimes.com/news/technology/govt-of-karnataka-launches-centre-of-excellence-for-data-science-and-artificial-intelligence/61689977">https://tech.economictimes.indiatimes.com/news/technology/govt-of-karnataka-launches-centre-of-excellence-for-data-science-and-artificial-intelligence/61689977</a>,https://analyticsindiamag.com/amaravati-world-centre-for-ai-data/</p>
<p style="text-align: justify; "><a name="_ftn40"></a> The Report on the Artificial Intelligence Task Force, Pg. 47</p>
<p style="text-align: justify; "><a name="_ftn41"></a> Report of Artificial Intelligence Task Force Pg. 49</p>
<p style="text-align: justify; "><a name="_ftn42"></a> The Report on the Artificial Intelligence Task Force, Pg. 47</p>
<p style="text-align: justify; "><a name="_ftn43"></a> The AI task force website has a provision for public comments although it is only for the vision and mission and the domains mentioned in the website.</p>
<p style="text-align: justify; "><a name="_ftn44"></a>National Strategy for Artificial Intelligence: <a href="http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf">http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework'>https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework</a>
</p>
No publisherElonnai Hickok, Shweta Mohandas and Swaraj Paul BarooahInternet GovernanceArtificial IntelligencePrivacy2018-06-27T14:32:56ZBlog EntryNITI Aayog Discussion Paper: An aspirational step towards India’s AI policy
https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy
<b>The National Strategy for Artificial Intelligence — a discussion paper on India’s path forward in AI, is a welcome step towards a comprehensive document that reflects the government's AI ambitions. The 115-page discussion paper attempts to be an all encompassing document looking at a host of AI related issues including privacy, security, ethics, fairness, transparency and accountability.</b>
<p style="text-align: justify; "><a class="external-link" href="http://cis-india.org/internet-governance/files/niti-aayog-discussion-paper"><strong>Download the Report</strong></a></p>
<hr />
<p style="text-align: justify; "><span>The 115-page discussion paper attempts to be an all encompassing document looking at a host of AI related issues including privacy, security, ethics, fairness, transparency and accountability. The paper identifies five focus areas where AI could have a positive impact in India.</span><span> It also focuses on reskilling as a response to the potential problem of job loss due the future large-scale adoption of AI in the job market.</span><span> This blog is a follow up to the comments made by CIS on Twitter</span><span> on the paper and seeks to reflect on the National Strategy as a well researched AI roadmap for India. In doing so, it identifies areas that can be strengthened and built upon.</span></p>
<p><strong>Identified Focus Areas for AI Intervention</strong></p>
<p style="text-align: justify; "><span>The paper identifies five focus areas—Healthcare, Agriculture, Education, Smart Cities and Infrastructure, Smart Mobility and Transportation, which Niti Aayog believes will benefit most from the use of AI in bringing about social welfare for the people of India.</span><span> Although these sectors are essential in the development of a nation, the failure to include manufacturing and services sectors is an oversight. Focussing on manufacturing is fundamental not only in terms of economic development and user base, but also regarding questions of safety and the impact of AI on jobs and economic security. The same holds true for the service sector particularly since AI products are being made for the use of consumers, not just businesses. Use of AI in the services sector also raises critical questions about user privacy and ethics. Another sector the paper fails to include is defense, this is worrying since India is chairing the Group of Governmental Experts </span><span>on Lethal Autonomous Weapons Systems (LAWS) in 2018.</span><span> Across sectors, the report fails to look at how AI could be utilised to ensure accessibility and inclusion for the disabled. This is surprising, as aid for the differently abled and accessibility technology was one of the 10 domains identified in the Task Force Report on AI published earlier this year. </span><span>This should have been a focus point in the paper as it aims to identify applications with maximum social impact and inclusion.</span></p>
<p style="text-align: justify; "><span>In its vision for the use of AI in smart cities, the</span><span> paper suggests the adoption of a sophisticated surveillance system as well as the use of social media intelligence platforms to check and monitor people’s movement both online and offline to maintain public safety.</span><span> This is at variance with constitutional standards of due process and criminal law principles of reasonable ground and reasonable suspicion. Further, use of such methods will pose issues of judicial inscrutability. From a rights perspective, state surveillance can directly interfere with fundamental rights including privacy, freedom of expression, and freedom of assembly. Privacy organizations around the world have raised concerns regarding the increased public surveillance through the use of AI.</span><span> Though the paper recognized the impact on privacy that such uses would have, it failed to set a strong and forward looking position on the issue - such as advocating that such surveillance must be lawful and inline with international human rights norms.</span></p>
<p><span><strong>Harnessing the Power of AI and Accelerating Research</strong></span></p>
<p style="text-align: justify; "><span>One of the ways suggested for the proliferation of AI in India was to increase research, both core and applied, to bring about innovation that can be commercialised.</span><span> In order to attain this goal the paper proposes a two-tier integrated approach: the establishment of COREs (Centres of Research Excellence in Artificial Intelligence) and ICTAI (International Centre for Transformational Artificial Intelligence).</span><span> However the roadmap to increase research in AI fails to acknowledge the principles of public funded research such as free and open source software (FOSS), open standards and open data. The report also blames the current Indian Intellectual Property regime for being “unattractive” and averse to incentivising research and adoption of AI.</span><span> Section 3(k) of Patents Act exempts algorithms from being patented, and the Computer Related Inventions (CRI) Guidelines have faced much controversy over the patentability of mere software without a novel hardware component.</span><span> The paper provides no concrete answers to the question of whether it should be permissible to patent algorithms, and if yes, to to what extent. Furthermore, there needs to be a standard either in the CRI Guidelines or the Patent Act, that distinguishes between AI algorithms and non-AI algorithms. Additionally, given that there is no historical precedence on the requirement of patent rights to incentivise creation of AI, innovative investment protection mechanisms that have lesser negative externalities, such as compensatory liability regimes</span><span> would be more desirable. The report further failed to look at the issue holistically and recognize that facilitating rampant patenting can form a barrier to smaller companies from using or developing AI. 
This is important to be cognizant of given the central role of startups to the AI ecosystem in India and because it can work against the larger goal of inclusion articulated by the report.</span></p>
<p><span><strong>Ethics, Privacy, Security and Safety</strong></span></p>
<p style="text-align: justify; "><span>In a positive step forward, the paper addresses a broader range of ethical issues concerning AI including transparency, fairness, privacy and security and safety in more detail when compared to the earlier report of the Task Force.</span><span> Yet despite a dedicated section covering these issues, a number of concerns still remain unanswered.</span></p>
<p><span><strong>Transparency</strong></span></p>
<p style="text-align: justify; "><span>The section on transparency and opening the Black Box has several lacunae.</span><span> First, AI that is used by the government, to an acceptable extent, must be available in the public domain for audit, if not under Free and Open Source Software (FOSS). This should hold true in particular for uses that impinge on fundamental rights. Second, if the AI is utilised in the private sector, there currently exists a right to reverse engineer within the Indian Copyright Act,</span><span> which is not accounted for in the paper. Furthermore, if the AI was involved both in the commission of a crime or the violation of human rights, or in the investigations of such transgressions, questions with regard to judicial scrutability of the AI remain. In addition to explainability, the source code must be made circumstantially available, since explainable AI</span><span> alone cannot solve all the problems of transparency. In addition to availability of source code and explainability, a greater discussion is needed about the tradeoff between a complex and potentially more accurate AI system (with more layers and nodes) vs. an AI system which is potentially not as accurate but is able to provide a human readable explanation.</span><span> It is interesting to note that transparency within human-AI interaction is absent in the paper. Key questions on transparency, such as whether an AI should disclose its identity to a human have not been answered.</span></p>
<p><span><strong>Fairness</strong></span></p>
<p style="text-align: justify; "><span>With regard to fairness, the paper notes that AI can amplify bias in data and create unfair outcomes.</span><span> However, it neither suggests detailed or satisfactory solutions nor deals with biased historical data in the Indian context. More specifically, there is no mention of regulatory tools for tackling the problem of fairness, such as:</span></p>
<ul>
<li><span>Self-certification</span></li>
<li><span>Certification by a self-regulatory body</span></li>
<li><span>Discrimination impact assessments</span></li>
<li><span>Investigations by the privacy regulator </span></li>
</ul>
<p><span>Such tools will need to proactively ensure</span><span> inclusion, diversity, and equity in composition and decisions.</span></p>
<p style="text-align: justify; "><span>Additionally, with reference to correcting bias in AI, it should be noted that the technocratic view that systems will self-correct as they continue to be trained on larger amounts of data does not fully recognize the importance of data quality and data curation, and is inconsistent with fundamental rights. Policy objectives of AI innovation must be technologically nuanced and cannot come at the cost of intermediary denial of rights and services.</span></p>
<p style="text-align: justify; "><span>Further, the paper does not deal with the existence of multiple definitions and principles of fairness, or with the fact that building definitions into AI systems often involves choosing one definition over another. For instance, it can be argued that the set of AI ethical principles articulated by Google</span><span> is more consequentialist in nature, involving a cost-benefit analysis, whereas a human rights approach may be more deontological. In this regard, there is a need for interdisciplinary research involving computer scientists, statisticians, ethicists and lawyers.</span></p>
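<p style="text-align: justify; ">The tension between competing fairness definitions can be made concrete with a small sketch. The snippet below (hypothetical toy data, not drawn from the paper) computes two widely discussed metrics, demographic parity and equal opportunity, for the same set of predictions; the two gaps differ, so tuning a system to close one does not automatically close the other.</p>

```python
# Illustrative sketch: two common formalisations of fairness can disagree
# on the very same predictions. All data here is a hypothetical toy example.

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between groups A and B."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(preds, labels, groups):
    """Absolute difference in true-positive rates between groups A and B."""
    def tpr(g):
        positives = [p for p, l, grp in zip(preds, labels, groups) if grp == g and l == 1]
        return sum(positives) / len(positives)
    return abs(tpr("A") - tpr("B"))

preds  = [1, 1, 0, 0, 1, 0, 0, 0]   # model decisions
labels = [1, 0, 0, 0, 1, 1, 0, 0]   # ground-truth outcomes
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Positive-prediction rates: A = 0.5, B = 0.25 -> parity gap 0.25
# True-positive rates:       A = 1.0, B = 0.5  -> opportunity gap 0.5
print(demographic_parity_gap(preds, groups))
print(equal_opportunity_gap(preds, labels, groups))
```

A regulator or auditor therefore has to choose which definition to enforce; the choice itself embeds the consequentialist-versus-deontological question raised above.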
<p><span><strong>Privacy</strong></span></p>
<p style="text-align: justify; "><span>Though the paper underscores the importance of privacy and the need for privacy legislation in India, it limits the potential privacy concerns arising from AI to collection, inappropriate use of data, personal discrimination, unfair gain from insights derived from consumer data (the proposed solution being to explain to consumers the value they gain from this), and unfair competitive advantage from collecting mass amounts of data (which is not directly a privacy issue).</span><span> In this way the paper fails to discuss the full implications AI might have for privacy, and fails to address the data rights necessary to enable the right to privacy in a society where AI is pervasive. It does not engage with emerging data protection principles, such as the right to explanation and the right to opt out of automated processing, which directly relate to AI. Further, there is no discussion of issues such as data minimisation and purpose limitation, which some big data and AI proponents argue against. To that extent, there is a lack of appreciation of the difficult policy questions concerning privacy and AI. The paper is also completely silent on redress and remedy. Further, the paper endorses the seven data protection principles postulated by the Justice Srikrishna Committee.</span><span> However, CIS has pointed out that these principles are generic and not specific to data protection.</span><span> Moreover, the law chapter of IEEE’s ‘</span><em><span>Global Initiative on Ethics of Autonomous and Intelligent Systems’</span></em><span> has been ignored in favor of the chapter on ‘</span><em><span>Personal Data and Individual Access Control in Ethically Aligned Design</span></em><span>’</span><span> as the recommended international standard.</span><span> Ideally, both chapters should be recommended for a holistic approach to the issue of ethics and privacy with respect to AI. </span></p>
<p><span><strong>AI Regulation and Sectoral Standards</strong></span></p>
<p style="text-align: justify; "><span>The discussion paper’s approach towards sectoral regulation advocates collaboration with industry to formulate regulatory frameworks for each sector. However, the paper is silent on the possibility of reviewing existing sectoral regulation to determine whether it requires amending. We believe this is an important option to consider, since amending existing regulation and standards often takes less time than formulating and implementing new regulatory frameworks.</span><span> Furthermore, although the paper’s emphasis on awareness is welcome, awareness must complement regulation and be driven by all stakeholders, especially given India’s limited regulatory budget. Over-reliance on industry self-regulation, by itself, is not advisable: robust industry governance bodies are absent in India, and self-regulation raises questions about the strength and enforceability of such practices. The privacy debate in India has recognized this, and reports such as the Report of the Group of Experts on Privacy recommend a co-regulatory framework in which industry develops binding standards that are in line with the national privacy law and that are approved and enforced by the Privacy Commissioner.</span><span> That said, the UN Guiding Principles on Business and Human Rights and their “protect, respect, and remedy” framework should guide any self-regulatory action.</span></p>
<p><span><strong>Security and Safety of AI Systems</strong></span></p>
<p style="text-align: justify; "><span>In terms of security and safety of AI systems, the paper seeks to shift the discussion of accountability from one primarily about liability to one about the explainability of AI.</span><span> Furthermore, there is no recommendation of immunities or incentives for whistleblowers or researchers who report privacy breaches and vulnerabilities. The paper also does not recognize that certain uses of AI, such as in healthcare and autonomous transportation, are more critical than others because of their potential to harm humans. A key component of accountability in these sectors will be the evolution of appropriate testing and quality assurance standards. Only then should safe harbours be discussed as an extension of the negligence test for damages caused by AI software. Additionally, the paper fails to recommend kill switches, which should be mandatory for all kinetic AI systems.</span><span> Finally, there is no mention of a mandatory human-in-the-loop for all systems where there are significant risks to safety and human rights. Autonomous AI is viewed only as an economic boost; its potential risks are not explored sufficiently. A welcome recommendation would be for all autonomous AI to go through human rights impact assessments.</span></p>
<p><span><strong>Research and Education</strong></span></p>
<p style="text-align: justify; "><span>As a government think-tank, the NITI Aayog could have dealt in detail with the AI policies of the government and examined how different arms of the government aim to leverage AI and tackle the problems arising from its use. Instead of merely tabulating the government’s role in each area, and especially in research, the paper could also have listed the areas where each department could play a role in the AI ecosystem through regulation, education, funding research, and so on. In terms of the recommendations for introducing AI curricula in schools and colleges,</span><span> the government could also ensure that ethics and rights are part of the curriculum, especially in technical institutions. A possible course of action could include corporations paying for a pan-Indian AI education campaign. This would also require the government to formulate an academic curriculum updated to include rights and ethics. </span></p>
<p><span><strong>Data Standards and Data Sharing</strong></span></p>
<p style="text-align: justify; "><span>Given the amount of data the Government of India collects through its numerous schemes, it has the potential to be the largest aggregator of data specific to India. However, the paper does not treat the use of this data with enough gravity. For example, it recommends corporate data sharing for “social good” and making government datasets from the social sector publicly available.</span><span> Yet this section does not mention privacy-enhancing technologies and standards such as pseudonymization, anonymization standards, and differential privacy. Additionally, there should be provisions allowing the government to prevent the formation of monopolies by regulating companies that hoard user data. The open data standards could also be made applicable to private companies, so that they too share their data in compliance with the privacy-enhancing technologies mentioned above. The paper also acknowledges that AI marketplaces require monitoring and maintenance of quality. It recognises the need for “continuous scrutiny of products, sellers and buyers”</span><span>, and proposes that the government enable these regulations in a manner that allows private players to set up the marketplace. This is a welcome suggestion, but the legal and ethical framework of the AI marketplace requires further discussion and clarification.</span></p>
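<p style="text-align: justify; ">To illustrate one of the privacy-enhancing techniques mentioned above, the sketch below shows the Laplace mechanism of differential privacy applied to a counting query over a released dataset. The data, function names, and epsilon value are hypothetical illustrations; this is a minimal sketch of the idea, not a production-ready implementation.</p>

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# All values and names here are hypothetical illustrations.
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponential variates with rate 1/scale
    # is Laplace(0, scale)-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so the noise scale is 1/epsilon.
    Smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: counting scheme beneficiaries aged 30 or over.
ages = [23, 35, 41, 29, 52, 38]
noisy = private_count(ages, lambda a: a >= 30, epsilon=0.5)
```

A government dataset released with such a mechanism allows aggregate analysis while bounding what any query reveals about a single individual, which is precisely the kind of safeguard this section of the paper omits.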
<p><span><strong>An AI Garage for Emerging Economies</strong></span></p>
<p style="text-align: justify; "><span>The discussion paper also qualifies India as an “ideal test-bed”</span><span> for trying out AI-related solutions. This is problematic, since questions of AI regulation in India have yet to be legally clarified and defined, and India does not have a comprehensive privacy law. Without a strong ethical and regulatory framework, the use of new and possibly untested technologies in India could lead to unintended and possibly harmful outcomes. The government's ambition to position India as a leader amongst developing countries on AI-related issues should not be achieved by using Indians as test subjects for technologies whose effects are unknown.</span></p>
<p><span><strong>Conclusion</strong></span></p>
<p style="text-align: justify; "><span>In conclusion, NITI Aayog’s discussion paper represents a welcome step towards a comprehensive AI strategy for India. However, the trend of inconspicuously releasing reports (this and the AI Task Force) as well as the lack of a call for public comments, seems to be the wrong way to foster discussion on emerging technologies that will be as pervasive as AI. </span></p>
<p style="text-align: justify; "><span>Blanket recommendations are provided without examining their viability in each sector.</span><span> Furthermore, the discussion paper does not sufficiently explore, and at times completely omits, key areas. It barely touches upon societal, cultural and sectoral challenges to the adoption of AI, research that CIS is currently in the process of undertaking.</span><span> Future reports on Indian AI strategy should pay more attention to the country’s unique legal context and to possible defense applications, and take the opportunity to establish a forward-looking, human-rights-respecting, and holistic position in global discourse and developments. Reports should also consider infrastructure investment an important prerequisite for AI development and deployment. Digitised data and connectivity, as well as more basic infrastructure such as rural electricity and well-maintained roads, require more funding if AI is to be leveraged for inclusive economic growth. Despite these concerns, the discussion paper is an aspirational step toward India’s AI strategy. </span></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy'>https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy</a>
</p>
No publisher · Sunil Abraham, Elonnai Hickok, Amber Sinha, Swaraj Barooah, Shweta Mohandas, Pranav M Bidare, Swagam Dasgupta, Vishnu Ramachandran and Senthil Kumar · Internet Governance · Artificial Intelligence · 2018-06-13T13:08:47Z · Blog Entry
Artificial Intelligence for Growth: Leveraging AI and Robotics for India's Economic Transformation
https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation
<b>Amber Sinha took part in the second international conference organized by ASSOCHAM at Hotel Shangri-La in New Delhi on April 27, 2018.</b>
<h3>Keynote Address</h3>
<p>12.15 p.m. - 12.30 p.m.: Shri Gopalakrishnan S., Joint Secretary, Ministry of Electronics and IT, Government of India</p>
<h3>Special Address</h3>
<p style="text-align: justify; ">12.30 p.m. - 12.45 p.m.: Dr. Pushpak Bhattacharyya, Director and Professor, Computer Science and Engg, IIT Patna and Chairman, BIS Committee for Standardisation in Artificial Intelligence</p>
<h2 style="text-align: justify; ">Panel Discussion</h2>
<h3>Session Moderator</h3>
<p>12.45 p.m. - 1.40 p.m.: Shri Sudipta Ghosh, India Leader, Data and Analytics, PwC</p>
<h3>Panelists</h3>
<ul>
<li>Shri Amber Sinha, Senior Programme Manager, Centre for Internet and Society</li>
<li>Shri Utpal Chakraborty, Lead Architect - AI, L&amp;T Infotech</li>
<li>Shri Atul Rai, CEO &amp; Co-Founder, Staqu Technologies</li>
<li>Shri Prabhat Manocha, IBM</li>
</ul>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation'>https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · Privacy · 2018-05-05T09:08:07Z · News Item
Artificial Intelligence in Governance: A Report of the Roundtable held in New Delhi
https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi
<b>This report provides an overview of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance, conducted at the India Islamic Cultural Centre in New Delhi on March 16, 2018. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context. The event was attended by participants from academia, civil society, the legal sector, the finance sector, and the government.</b>
<p><span>Event Report: </span><a class="external-link" href="https://cis-india.org/internet-governance/files/ai-in-governance">Download</a><span> (PDF)</span></p>
<hr />
<p style="text-align: justify; ">This report provides a summary of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance (hereinafter referred to as ‘the Roundtable’). The Roundtable took place at the India Islamic Cultural Centre in New Delhi on March 16, 2018 and included participation from academia, civil society, law, finance, and government. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context.</p>
<p style="text-align: justify; ">The Roundtable began with a presentation by Amber Sinha (Centre for Internet and Society - CIS) providing an overview of the CIS’s research objectives and findings thus far. During this presentation, he defined both AI and the scope of CIS’s research, outlining the areas of law enforcement, defense, education, judicial decision making, and the discharging of administrative functions as the main areas of concerns for the study. The presentation then outlined the key AI deployments and implementations that have been identified by the research in each of these areas. Lastly, the presentation raised some of the ethical and legal concerns related to this phenomenon.</p>
<p style="text-align: justify; "><span>The presentation was followed by the Roundtable discussion, which covered various topics regarding the usages, challenges, ethical considerations and implications of AI in the sector. This report identifies a number of key themes evident throughout these discussions. These themes include: (1) the meaning and scope of AI, (2) AI’s sectoral applications, (3) human involvement in automated decision making, (4) the social and power relations surrounding AI, (5) regulatory approaches to AI, and (6) challenges to adopting AI. These themes are explored further below.</span></p>
<h3><span>Meaning and Scope of AI</span></h3>
<p dir="ltr" style="text-align: justify; "><span>One of the first tasks recommended by the group of participants was to define the meaning and scope of AI and the way those terms are used and adopted today. These concerns included the need to establish a distinction between the use of algorithms, machine learning, automation and artificial intelligence. Several participants believed that establishing consensus around these terms was essential before proceeding towards a stage of developing regulatory frameworks around them.</span></p>
<p dir="ltr" style="text-align: justify; "><span>The general consensus was that AI as we understand it does not necessarily extend to complete independence in automated decision making; it refers instead to the varying levels of machine learning (ML) and the automation of certain processes that has already been achieved. Several concerns that emerged during the discussion centred on questions of autonomy and transparency in ML and algorithmic processing. Stakeholders recommended that, over and above the debates about humans in the loop,[1] on the loop,[2] and out of the loop,[3] there were several other gaps in AI and its usage in industry today that need to be considered before building a roadmap for future usage. Key issues like information asymmetries, communication lags, a lack of transparency, the increasing mystification of the coding process and the centralization of power all need to be examined and analysed in the course of developing regulatory frameworks.</span></p>
<p dir="ltr" style="text-align: justify; "><span>Takeaway Point: The group brought out the need for standardization of terminology as well as the establishment of globally replicable standards for the usage, control and proliferation of AI. The discussion also raised the problems with universal applicability of norms. One participant noted the lack of normative frameworks around the usage and proliferation of AI; another responded by alluding to the Asilomar AI principles,[4] a set of 23 principles aimed at directing and shaping future AI research. The discussion brought out further issues regarding the enforceability and universal applicability of the principles, as well as their global relevance. Participants recommended the development of a shorter, more universally applicable regulatory framework that could also address various contextual limitations.</span></p>
<h3><span>AI Sectoral Applications</span></h3>
<p><span>Participants mentioned a number of both current and potential applications of AI technologies, referencing the defence sector, the financial sector, and the agriculture sector. There are several developments taking place on the Indian military front with the Committee on AI and National Security being established by the Ministry of Defence. Through the course of the discussion it was also stated that the Indian Armed Forces were very interested in the possibilities of using AI for their own strategic and tactical purposes. From a technological standpoint, however, there has been limited progress in India in researching and developing AI. </span></p>
<p><span>While India does deploy some Unmanned Aerial Vehicles (UAVs), they are mostly bought from Israel, and often are not autonomous. It was also pointed out that contrary to reportage in the media, the defence establishment in India is extremely cautious about the adoption of autonomous weapons systems, and that the autonomous technology being rolled out by the CAIR is not yet considered trustworthy enough for deployment.</span></p>
<p style="text-align: justify; "><span>Discussions further revealed that the few technologies with a relative degree of autonomy are primarily loitering munitions, used to target radar installations for reconnaissance purposes. One participant mentioned that while most militaries are interested in deploying AI, it is primarily from an Intelligence, Surveillance and Reconnaissance (ISR) perspective. The one exception to this generalization is China, where the military ethos and command structure would work better with increased reliance on independent AI systems. One major AI system rolled out by the US is Project Maven, which is primarily an ISR system. The aim of using these systems is to improve decision making and enhance data analysis, particularly since battlefields generate a lot of data that otherwise goes unused.</span></p>
<p style="text-align: justify; "><span>Another sector discussed was the securities market, where algorithms were used from an analytical and data collection perspective. A participant noted that machine learning was being used for processes like credit and trade scoring, all with humans on the loop. The participant further suggested that while trade scoring was increasingly automated, the predictive nature of such technologies remained self-limiting, in that statistical models, collected data and pattern analysis were used to predict future trends. The participant questioned whether these algorithms could be considered AI in the truest sense of the term, since they primarily performed statistical functions and data analysis.</span></p>
<p style="text-align: justify; "><span>One participant also recommended the application of AI to sectors like agriculture, with the intention of gradually acclimatizing users to the technology itself. Respondents stated that while AI technologies were being used in the agricultural space, it was primarily for data collection and analysis rather than prediction. It was mentioned that a challenge to broad adoption of AI in this sector is that the core problems of adopting AI as a methodology – namely information asymmetries, excessive data collection, limited control/centralization and the obfuscatory nature of code – would remain unaddressed. Lastly, participants suggested that within the Indian framework not much was being done beyond addressing farmers’ queries and analysing the data from those queries.</span></p>
<p style="text-align: justify; "><span>Takeaway Point: The discussion drew attention to the various sectors where AI was currently being used -- such as the military space, agricultural development and the securities market -- as well as potential spaces of application -- such as healthcare and manual scavenging. The key challenges that emerged were information asymmetries with respect to the usage of these technologies as well as limited capacity in terms of technological advancement.</span></p>
<h3><span>Human Involvement with Automated Decision Making</span></h3>
<p style="text-align: justify; "><span>Large parts of discussions throughout the Roundtable event were preoccupied with automated decision making and specifically, the involvement of humans (human on and in the loop) or lack thereof (human out of the loop) in this process. These discussions often took place with considerations of AI for prescriptive and descriptive uses.</span></p>
<p style="text-align: justify; "><span>Participants expressed that human involvement was not needed when AI was used descriptively, such as for determining relationships between variables in large data sets. Many agreed on the superior ability of ML and similar AI technologies to describe large and unorganized datasets. It was the prescriptive uses of AI where participants saw the need for human involvement, with many questioning whether the technology should make more consequential decisions by itself.</span></p>
<p style="text-align: justify; "><span>The need for human involvement in automated decision making was further justified by references to various instances of algorithmic bias in the American context. One participant, for example, brought up the use of algorithmic decision making by a school board in the United States for human resource practices (hiring, firing, etc.) based on the standardized test scores of students. In that instance, the practice resulted in the termination of teachers primarily from low-income neighbourhoods.[5] The main challenge participants identified for human-on-the-loop automated decision making is capacity, as significant training would be required for sectors to have employees actively involved in the automated decision making workflow.</span></p>
<p style="text-align: justify; "><span>An example from the healthcare field was brought up by one participant arguing for a human in the loop in prescriptive scenarios. The participant suggested that AI technology, when given x-ray or MRI data for example, should be limited to pointing out correlations between diseases and patients’ scans. Analysis of such correlations should be reserved for the medical expertise of doctors, who would then determine whether any instances of causality can be identified from this data and whether it is appropriate for diagnosing patients.</span></p>
<p style="text-align: justify; "><span>It was emphasized that, despite a preference for human on/in the loop in automated decision making, there is a need to be cognisant of techno-solutionism, given the human tendency to over-rely on technology when making decisions. A need for command and control structures and protocols was emphasized for various governance sectors, in order to avoid potentially disastrous results through a system of checks and balances. It was noted that the defense sector has already developed such protocols, having established a chain of command through its long history of algorithmic decision making (e.g. the Aegis Combat System, in use by the US Navy since the 1980s).</span></p>
<p style="text-align: justify; "><span>One key reason why militaries prefer human-in-the-loop and human-on-the-loop systems over out-of-the-loop systems is the protocol associated with human action on the battlefield. International Humanitarian Law has clear indicators of what constitutes a war crime and who is to be held responsible; developing such a framework for AI systems would be challenging, as it would be difficult to determine which party ought to be held accountable in the case of a transgression or mistake.</span></p>
<p style="text-align: justify; "><span>Takeaway Point: Many participants reiterated that neither AI technology nor India’s regulatory framework is at a point where AI can be trusted to make significant decisions alone, especially when such decisions evaluate humans directly. It was recommended that human-out-of-the-loop decision making be reserved for descriptive practices, whereas human-on-the-loop and human-in-the-loop decision making should be used for prescriptive practices. Lastly, it was suggested that appropriate protocols be put in place to direct those involved in the automated decision making workflow, particularly when the process involves judgements and complex decision making in sectors such as jurisprudence and the military.</span></p>
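<p style="text-align: justify; ">The descriptive/prescriptive split discussed above can be sketched as a simple routing rule: descriptive outputs pass through automatically, while prescriptive decisions are queued for a human reviewer. The class, names and payloads below are hypothetical illustrations of the protocol idea, not a system described at the Roundtable.</p>

```python
# Hypothetical sketch of a human-on-the-loop routing protocol:
# descriptive findings are auto-approved, prescriptive decisions wait
# for human sign-off. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class DecisionRouter:
    review_queue: list = field(default_factory=list)

    def route(self, kind, payload):
        if kind == "descriptive":  # e.g. correlations found in a dataset
            return ("auto_approved", payload)
        # Prescriptive, e.g. a diagnosis or a personnel recommendation:
        # hold it until a human reviewer acts on the queue.
        self.review_queue.append(payload)
        return ("pending_human_review", payload)

router = DecisionRouter()
status1, _ = router.route("descriptive", {"finding": "correlation between X and Y"})
status2, _ = router.route("prescriptive", {"recommendation": "flag case for action"})
```

The design choice mirrors the takeaway: the machine never finalises a decision that evaluates a human directly; it only stages it, leaving accountability with the reviewer who clears the queue.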
<h3><span>The Social and Power Relations Surrounding AI</span></h3>
<p style="text-align: justify; ">Some participants emphasized the need to contextualize discussions of AI and governance within larger themes of poverty, global capital and power/social relations. Their concerns were that the use of AI technologies would only create and reinforce existing power structures and should instead be utilized towards ameliorating such issues. Manual scavenging, for example, was identified as an area where AI could be used to good effect if coupled with larger socio-political policy changes. There are several hierarchies that could potentially be reinforced through this process and all these failings needed to be examined thoroughly before such a system was adopted and incorporated within the real world.</p>
<p style="text-align: justify; ">Furthermore, the discussion revealed that the objectivity attributed to AI and ML tends to gloss over the implicit biases that exist in the minds of their creators and can work themselves into the code. Fears of technology recreating a more exclusionary system were not unfounded, as participants pointed out that the knowledge base of the user would determine whether technology was used as a tool of centralization or of democratization.</p>
<p style="text-align: justify; ">One participant also questioned the concept of governance itself, contrasting the Indian government’s usage of the term in the 1950s (as it appears in the Directive Principles) with that of the World Bank in the 1990s.</p>
<p style="text-align: justify; "><span>Takeaway Point: Discussions of the implementation and deployment of AI within the governance landscape should attempt to take into consideration larger power relations and concepts of equity.</span></p>
<h3><span>Regulatory Approaches to AI</span></h3>
<p style="text-align: justify; "><span>Many recognized the need for AI-specific regulations across Indian sectors, including governance. These regulations, participants stated, should draw on notions of accountability, algorithmic transparency and efficiency. It was also stated that such regulations should account for the variations across the different legs of the governance sector, especially defence. One participant, pointing to the larger trend towards automation, recommended establishing certain fundamental guidelines to direct the applicability of AI in general, and drew attention to the need for a robust evaluation system for various sectors (the criminal justice system, the securities market, etc.) as a check on algorithmic biases. Another emphasized the need for regulations ensuring better quality data, so that it is machine readable and processable by various AI systems.</span></p>
<p style="text-align: justify; "><span>Another key point that emerged was the importance of examining how specific algorithms perform processes like identification or detection. A participant recommended examining the ways in which machines identify humans and what categories and biases could infiltrate machine judgement. They reiterated that if a new element were introduced into the system, the pre-existing variables would be affected as well. The participant further suggested that it would be useful to look at these systems in terms of the couplings that get created, in order to determine what kinds of relations are fostered within the system.</span></p>
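<p style="text-align: justify; "><span>As an illustration of the kind of evaluation the participant describes, the sketch below (a hypothetical example, not drawn from the roundtable; the data, labels, and group names are all invented) compares a classifier's error rates across demographic groups — one simple way a check on algorithmic bias might be operationalised:</span></p>

```python
# Hypothetical bias audit: compare a classifier's error rate per group.
# The records and group labels below are invented for illustration only.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual); returns {group: error rate}."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented audit data: the classifier errs far more often on group "B".
audit = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
         ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 1)]
rates = error_rates_by_group(audit)
print(rates)  # → {'A': 0.25, 'B': 0.75}
```

<p style="text-align: justify; "><span>A large gap between groups, as in this toy audit, is exactly the sort of signal a sectoral evaluation system could be required to surface before deployment.</span></p>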
<p style="text-align: justify; "><span>The roundtable saw some debate regarding the most appropriate approach to developing such regulations. Some participants argued for a harms-based approach, particularly for determining whether regulations are needed at all for specific sectors (as opposed to guidelines, best practices, etc.). The need to be cognisant of both individual and structural harms was emphasized, mindful of the possibility of algorithmic biases affecting traditionally marginalized groups.</span></p>
<p style="text-align: justify; "><span>Others saw value in a harms-based approach only insofar as it could help outline appropriate penalties in the event of a violation, arguing instead for a rights-based approach, which allows greater room for technological change. A number of participants reiterated that an approach mindful of emerging AI technologies was crucial to any regulatory framework. The need for a regulatory space that allowed for technological experimentation without the fear of constitutional violation was also communicated.</span></p>
<p style="text-align: justify; "><span>Takeaway Point: The need for an AI-specific regulatory framework cognisant of differentiations across sectors in India was emphasized. There is some debate about the most appropriate approach for such a framework: a harms-based approach was identified by many as providing the best perspective on regulatory need and penalties, while some identified a rights-based approach as providing the most flexibility for a rapidly evolving technological landscape.</span></p>
<h3><span>Challenges to Adopting AI</span></h3>
<p style="text-align: justify; "><span>Of all the concerns regarding the adoption of algorithms, ML, and AI, the two key points of resistance that emerged centred on issues of accountability and transparency. Participants suggested that predictability would be a key concern within an AI system, and that in the absence of predictable outcomes, establishing redressal mechanisms would pose key challenges as well.</span></p>
<p style="text-align: justify; ">A discussion was also initiated regarding the problems involved in attributing responsibility within the AI chain, as well as the need to demystify the use of AI in daily life. Surveying the current landscape, participants noted that the use of AI is at present limited to the automation of certain tasks and processes in certain sectors, where algorithmic processing is primarily used as a tool for data collection and analysis rather than as an independent decision-making tool.</p>
<p style="text-align: justify; ">One suggestion that emerged during the discussion was whether a gradual, sector-by-sector adoption of AI might be more beneficial, as it would provide breathing room to test the system and establish trust between developers, providers, and consumers. This prompted a debate about the controllers and the consumers of AI and how the gap between the two would need to be negotiated. The debate also brought up larger concerns regarding the mystification of AI as a process and the complications of translating code into communicable points of intervention.</p>
<p style="text-align: justify; ">Another major issue that emerged was the attribution of responsibility in the case of mistakes. In the legal process as it currently exists, human imperfections notwithstanding, it is possible to attribute blame for a decision to the particular actors who took it. Similarly, in the defence sector it is possible to trace the chain of command and identify key points of failure; in the case of AI-based judgements, however, it would be difficult to place responsibility or blame. This observation led to a debate regarding accountability in the AI chain, which was inconclusive as to whether an error should be attributed to the developer, the distributor, or the consumer.</p>
<p style="text-align: justify; ">A suggestion offered to counter the information asymmetry and reduce the mystification of computational methods was to make the algorithm and its processes transparent. This sparked a debate, however: participants stated that while such transparency ought to be sought after and aspired to, it would be accompanied by certain threats to the system. A key challenge pointed out was that if the algorithm were made transparent and its details shared, there would be several ways to manipulate, translate, and misuse it.</p>
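<p style="text-align: justify; ">The manipulation concern can be made concrete with a toy example (entirely hypothetical; the scoring rule, weights, and threshold below are invented, not any real lender's model): once a decision rule is fully disclosed, an easily changed input can be tuned until the rule flips in one's favour, without any change in underlying risk:</p>

```python
# Toy, invented "transparent" scoring rule: publishing it reveals exactly
# which input to game. Not a real credit model.
def score(income, num_accounts):
    return 0.5 * income + 10 * num_accounts

THRESHOLD = 100

applicant = {"income": 100, "num_accounts": 2}
assert score(**applicant) < THRESHOLD    # score = 70 -> rejected

# Knowing the rule, the applicant opens empty accounts purely to lift the score.
applicant["num_accounts"] = 6
assert score(**applicant) >= THRESHOLD   # score = 110 -> accepted
```

<p style="text-align: justify; ">The applicant's real circumstances are unchanged; only the gamed input moved. This is the trade-off participants identified between transparency and robustness to manipulation.</p>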
<p style="text-align: justify; ">Another question that emerged concerned the distribution of AI technologies and the centralization of the proliferation process, particularly in terms of service provision. One participant suggested that, given the limited research being undertaken and the paucity of resources, a small number of companies would end up holding the best technology, resources, and people. They further suggested that these technologies might be rolled out as a service on a contractual basis, in which case it would be important to track how the service was controlled and delivered. Models of transference would become central points of negotiation, with alternatives among procurement-based, lease-based, and ownership-based models of service delivery. Participants suggested that this was going to be a key factor in determining how to approach these issues from a legal and policy standpoint.</p>
<p style="text-align: justify; ">Takeaway Point: The two key points of resistance that emerged during the course of discussion were accountability and transparency. Participants pointed out the various challenges involved in attributing blame within the AI chain, and spoke about the complexities of opening up AI code, thereby leaving it vulnerable to manipulation. Other challenges briefly touched upon were information asymmetry, excessive data collection, centralization of power in the hands of controllers, and complicated service distribution models.</p>
<h3 style="text-align: justify; ">Conclusion</h3>
<p style="text-align: justify; ">The Roundtable provided some insight into larger debates regarding the deployment and applications of AI in the governance sector of India. The need for a regulatory framework as well as globally replicable standards surrounding AI was emphasized, particularly one mindful of the particular needs of differing fields of the governance sector (especially defence). Furthermore, a need for human-on/in-the-loop practices with regard to automated decision-making was highlighted for prescriptive instances, particularly when such decisions directly evaluate humans. Contextualising AI within its sociopolitical parameters was another key recommendation, as it would help filter out the biases that might work their way into the code and affect the performance of the algorithm. Further, it is necessary to examine the involvement and influence of the private sector in the deployment of AI for governance, which often translates into the delivery of technological services from private actors to public bodies in the discharge of public functions. This has clear implications for requirements of transparency and procedural fairness even in private-sector delivery of these services. Defining the meaning and scope of AI while working to demystify algorithms themselves would serve to strengthen regulatory frameworks as well as make AI more accessible to the user or consumer.</p>
<hr />
<p style="text-align: justify; ">[1]. Automated decision making model where final decisions are made by a human operator</p>
<p style="text-align: justify; ">[2]. Automated decision making model where decisions can be made without human involvement but a human can override the system.</p>
<p style="text-align: justify; ">[3]. A completely autonomous decision making model requiring no human involvement</p>
<p style="text-align: justify; ">[4]. https://futureoflife.org/ai-principles/</p>
<p style="text-align: justify; ">[5]. The participant was drawing this example from Cathy O’Neil’s Weapons of Math Destruction, (Penguin,2016), at 4-13.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi'>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi</a>
</p>
Saman Goudarzi and Natallia Khaniejo | Internet Governance | Artificial Intelligence | Privacy | 2018-05-03T15:49:40Z | Blog Entry

AI and Manufacturing and Services in India: Looking Forward
https://cis-india.org/internet-governance/blog/ai-and-manufacturing-and-services-in-india-looking-forward
<b>This Report provides an overview of the proceedings of the Roundtable on Artificial Intelligence (AI) in Manufacturing and Services: Looking Forward (hereinafter referred to as ‘the Roundtable’), conducted at The Energy and Resources Institute (TERI), in Bangalore on January 19, 2018.</b>
<p> </p>
<h4>Event Report: <a class="external-link" href="http://cis-india.org/internet-governance/files/ai-and-manufacturing-services">Download</a> (PDF)</h4>
<hr />
<p style="text-align: justify;">The Roundtable comprised participants from across the AI and manufacturing and services spectrum, including practitioners, representatives from multinational companies, think tanks, academics, and researchers. The Roundtable discussed various questions regarding AI in the manufacturing and services industry in India.</p>
<p style="text-align: justify;">The round of discussions began with initial observations from the in-progress research that the Centre for Internet and Society (CIS) is undertaking on the use of AI in manufacturing and services. Some of the uses of AI that the research had thus far identified across various sectors included AI platforms in IT services for accurate business forecasting, AI-driven automation of routine tasks in manufacturing and production, and AI-driven analytics for forecasting in the agriculture sector. The discussion then proceeded to the benefits of using AI, including efficient and effective results, precision, and automation of repetitive maintenance tasks. The draft research also acknowledges that although the use of AI is beneficial in many ways, there are some key concerns around job displacement, privacy, lack of awareness, and the capacity needed to fully understand and use new AI technologies. The draft research also identified a few key AI initiatives in India, such as Wipro Holmes, TCS Ignio, and G.E., that were providing solutions to help automate software maintenance tasks and support the smooth working of SAP (Systems, Applications &amp; Products) operations. Innovative uses of AI in areas such as crop production (M.I.T.R.A.) and dairy optimization (StellApps) were also identified.</p>
<p style="text-align: justify;">To understand the present state of AI and impact of the same, the session was opened to discussion on the following questions: See the <a class="external-link" href="http://cis-india.org/internet-governance/files/ai-and-manufacturing-services"><strong>full report here.</strong></a></p>
<p> </p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/ai-and-manufacturing-and-services-in-india-looking-forward'>https://cis-india.org/internet-governance/blog/ai-and-manufacturing-and-services-in-india-looking-forward</a>
</p>
Shweta Mohandas and Pranav M. Bidare | Internet Governance | Artificial Intelligence | 2018-02-14T11:13:56Z | Blog Entry

Artificial Intelligence in India: A Compendium
https://cis-india.org/internet-governance/blog/artificial-intelligence-in-india-a-compendium
<b>Artificial Intelligence (AI) is fast emerging as a key technological paradigm in different sectors across the globe including India.</b>
<p style="text-align: justify;">Towards understanding the state of AI in India, the challenges to its development and adoption, and the ethical concerns that arise out of its use, CIS is undertaking research to understand and document national developments, discourse, and impact (actual and potential), to propose ethical and regulatory solutions, and to compare these against global developments in the space. As part of this, CIS is creating a compendium of reports that dive into the use of AI across sectors including healthcare, manufacturing, governance, and finance.</p>
<p style="text-align: justify;">Each report seeks to map the present state of AI in the respective sector. In doing so, it explores: <strong>Use</strong>: What is the present use of AI in the sector? What is the narrative and discourse around AI in the sector? <strong>Actors</strong>: Who are the key stakeholders involved in the development, implementation and regulation of AI in the sector? <strong> Impact: </strong>What is the potential and existing impact of AI in the sector? <strong>Regulation</strong>: What are the challenges faced in policy making around AI in the sector?</p>
<p style="text-align: justify;">The reports are as follows:</p>
<ul>
<li>
<div><a href="https://cis-india.org/internet-governance/ai-and-healthcare-report" class="internal-link" title="AI and Healthcare Report">AI and the Healthcare Industry in India</a></div>
</li>
<li>
<div><a class="external-link" href="http://cis-india.org/internet-governance/files/AIManufacturingandServices_Report_02.pdf">AI and the Manufacturing and Services Sector in India</a></div>
</li>
<li><a href="https://cis-india.org/internet-governance/files/ai-in-banking-and-finance" class="internal-link" title="AI in Banking and Finance">AI and the Banking and Finance Industry in India</a>: (19th June 2018 Update: This case study has been modified to remove interview quotes, which are in the process of being confirmed. The link above is the latest draft of the report.)</li><li><a href="https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf" class="internal-link" title="AI and Governance Case Study pdf">AI in the Governance Sector in India<br /></a></li></ul>
<div> </div>
<div> </div>
<hr />
The research is funded by Google India. Comments and feedback are welcome. The reports are drafts.
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-india-a-compendium'>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-india-a-compendium</a>
</p>
Centre for Internet &amp; Society | Internet Governance | Artificial Intelligence | 2023-05-09T06:56:25Z | Blog Entry

Roundtable on AI and Finance in India
https://cis-india.org/internet-governance/events/roundtable-on-ai-and-finance-in-india
<b>Centre for Internet & Society (CIS) will hold a roundtable on artificial intelligence and finance in India on Wednesday, February 7, 2018 in association with HasGeek and the 50p Conference. The roundtable will take place from 2 p.m. to 5 p.m. at TERI (The Energy Resources Institute) in Domlur, Bengaluru.</b>
<p style="text-align: justify; ">We invite you all to participate in this roundtable to share and build knowledge about trajectories of AI deployment across sub-sectors of banking in India and the emergent regulatory and public policy concerns.</p>
<p style="text-align: justify; ">The objective of the roundtable is to bring together various actors active across the fields of artificial intelligence, machine learning, cognitive computing, financial technologies, and big data credit scoring and online lending, to discuss pressing public policy issues regarding the utilisation and implementation of AI in the banking and finance sectors of India.</p>
<p style="text-align: justify; ">These sectors currently find themselves at the early stages of AI adoption. Such technologies are being implemented to facilitate both front-end and back-end processes by a variety of players with the aim of improving the accessibility, customised user engagement, and quality of current financial services. Leading commercial banks in India have all been working to develop and deploy AI technologies either in house or in partnership with small and large-scale tech companies. Such initiatives have seen the deployment of numerous chatbots and humanoid robots for the purposes of customer service. More significant, however, is the use of such technology by banks and fintech actors to facilitate decision making behind the scenes, on a variety of financial issues including but not limited to credit-worthiness, fraud detection, and investments.</p>
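<p style="text-align: justify; ">As a loose illustration of the behind-the-scenes decision-making described above (a hypothetical sketch, not any bank's actual method; all figures are invented), even a crude fraud check can flag transactions that deviate sharply from a customer's history — the ML systems under discussion are far more sophisticated versions of this idea:</p>

```python
# Crude, illustrative fraud flag: mark transactions whose z-score against the
# customer's history exceeds a cutoff. All amounts below are invented.
import statistics

def flag_anomalies(amounts, z_cutoff=2.5):
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)  # sample standard deviation
    return [a for a in amounts if abs(a - mean) / stdev > z_cutoff]

history = [120, 90, 110, 105, 95, 100, 115, 98, 102, 5000]  # one outlier
print(flag_anomalies(history))  # → [5000]
```

<p style="text-align: justify; ">Questions of liability and consumer rights arise precisely because real systems replace the transparent cutoff here with learned, opaque decision boundaries.</p>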
<p style="text-align: justify; ">While these sectors are no strangers to the use of big data analytics and similar technologies in aiding with financial decision making and daily operations, the deployment of technologies such as machine learning and natural language processing is still very new. Due to the nascent nature of this phenomenon, little is known about the details of their implications for both producers and consumers. Furthermore, concerns regarding data ownership, liability, and consumer rights have all been raised in light of AI adoption. This roundtable will present us with an opportunity to discuss such issues and begin to fill this knowledge gap.</p>
<p style="text-align: justify; ">For agenda and event brochure <strong><a class="external-link" href="http://cis-india.org/internet-governance/files/ai-and-finance">click here</a>. </strong>For RSVP <a class="external-link" href="https://docs.google.com/forms/d/e/1FAIpQLSd1QFN8a5R3FPPLklDR0XQb1izzGFWzWtAilI5-UNO4EApAFQ/viewform">click here</a>. Read the <a class="external-link" href="http://cis-india.org/internet-governance/files/draft-roundtable-report-on-ai-and-banking">event report here</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/events/roundtable-on-ai-and-finance-in-india'>https://cis-india.org/internet-governance/events/roundtable-on-ai-and-finance-in-india</a>
</p>
saman | Internet Governance | Event | Artificial Intelligence | 2018-03-11T14:58:55Z | Event

Roundtable on A.I. and Manufacturing and Services
https://cis-india.org/internet-governance/events/roundtable-on-ai-and-manufacturing-and-services
<b>The Centre for Internet and Society (CIS), Bangalore is organizing a roundtable on ‘A.I. and Manufacturing and Services’ on the 19th of January, 2018 from 2 to 5 pm at ‘The Energy and Resources Institute’ (TERI) Bangalore. The Roundtable seeks to discuss the various issues and challenges surrounding the implementation of AI and related technologies on manufacturing processes and services.</b>
<p style="text-align: justify; ">Since the Industrial Revolution, machines have substituted for human labour and helped industries save time and money. This was followed by the advent of computers and technology, which helped complete tasks with greater speed and accuracy than the human brain. The emergence of machine-learning technology and artificial intelligence has now made machines capable of doing work that was earlier considered something only humans could do. From the use of AI in understanding customer shopping trends to its use in making automobiles, AI is becoming more of a norm than an exception. Analytics of how customers shop is now helping companies forecast their manufacturing needs. The synergy of technology and machines, i.e. smart manufacturing, not only changes manufacturing and shipping but also improves worker safety. Different forms of smart manufacturing are also starting to come up in India: Wipro and Infosys have launched AI platforms, and the Indian Institute of Science is developing a smart factory with support from Boeing Company and General Electric. Infosys has also released an AI platform, ‘Nia’, which is programmed to forecast revenue and understand customer behaviour.</p>
<p style="text-align: justify; ">In some cases, the use of machines to substitute for the human workforce has brought about a sense of worry. Recent trends in factory hiring show that jobs are being lost to automated forms of labour, further evidenced by a report from the research firm HorsesforSources, which predicts that India is set to lose 640,000 low-skilled job positions to automation by the year 2021. The IT sector in India is also at risk from the use of AI. Reports have also found that rising unemployment in the IT sector has led to increased pressure on labour regulators.</p>
<p style="text-align: justify; ">Although some studies state that the use of AI would create a market for people working alongside AI, FICCI and EY’s 2016 report on the future of jobs and its implications for Indian higher education suggests that one way to combat the loss of jobs is reskilling and upskilling the labour force. India has taken a first step towards this by launching the National Skill Development Mission.</p>
<p style="text-align: justify; ">From neural networks that monitor steel plants to systems that pack and ship groceries, the use of intelligent machines has begun disrupting traditional business models in the industry. However, these advancements raise questions around labour, ethics, liability, and machine-human cooperation. Dialogue and debate are needed to understand how AI is being used in manufacturing, its potential benefits and challenges, and a way forward that optimizes innovation and protects human rights.</p>
<h2 style="text-align: justify; ">Roundtable Agenda</h2>
<p>Friday 19th January | 2:00 p.m - 5:00 p.m.</p>
<div id="_mcePaste">2:00 - 2:30 Introduction and setting the scene</div>
<div id="_mcePaste">2:30 - 3:30 Discussion on the AI landscape in the manufacturing and services industry:</div>
<div></div>
<ul>
<li>Manner and extent of integration of AI into manufacturing and services</li>
<li>Relevant stakeholders and their roles in implementing AI in manufacturing and services</li>
<li>Future of AI and related technologies in manufacturing and services </li>
<li>Impact on work and labour</li>
</ul>
<p>3:30 - 4:30 Discussion on challenges and solutions towards regulating AI in India:</p>
<ul>
<li>Challenges faced in the conception and implementation of the AI product/ service, and reasons for such challenges.</li>
<li>Regulatory provisions for implementation of AI in the manufacturing and services under the existing laws, and need for reforms.</li>
<li>Challenges posed by AI to existing policy and regulatory frameworks in the Indian as well as the global context, and possible solutions.</li>
</ul>
<p>4.30 - 5.00 Conclusion and way forward</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/events/roundtable-on-ai-and-manufacturing-and-services'>https://cis-india.org/internet-governance/events/roundtable-on-ai-and-manufacturing-and-services</a>
</p>
Admin | Internet Governance | Event | Artificial Intelligence | 2018-01-18T13:44:15Z | Event

Artificial Intelligence - Literature Review
https://cis-india.org/internet-governance/blog/artificial-intelligence-literature-review
<b>With origins dating back to the 1950s, Artificial Intelligence (AI) is not necessarily new. However, interest in AI has been rekindled over the last few years, in no small measure due to the rapid advancement of the technology and its applications to real-world scenarios. In order to create policy in the field, understanding the literature regarding existing legal and regulatory parameters is necessary. This Literature Review is the first in a series of reports that seeks to map the development of AI, both generally and in specific sectors, culminating in a stakeholder analysis and contributions to policy-making. This Review analyses literature on the historical development of the technology, its compositional makeup, sector-specific impacts and solutions and finally, overarching regulatory solutions.</b>
<p>Edited by Amber Sinha and Udbhav Tiwari; Research Assistance by Sidharth Ray</p>
<hr />
<p style="text-align: justify; ">With origins dating back to the 1950s, Artificial Intelligence (AI) is not necessarily new. However, with an increasing number of real-world implications, interest in AI has been reignited over the last few years.</p>
<p style="text-align: justify; ">The rapid and dynamic pace of AI’s development has made its future path difficult to predict, and is enabling it to alter our world in ways we have yet to comprehend. As a result, law and policy have stayed one step behind the development of the technology.</p>
<p style="text-align: justify; ">Understanding and analyzing existing literature on AI is a necessary precursor to subsequently recommending policy on the matter. By examining academic articles, policy papers, news articles, and position papers from across the globe, this literature review aims to provide an overview of AI from multiple perspectives.</p>
<p style="text-align: justify; ">The structure taken by the literature review is as follows:</p>
<ol>
<li>Overview of historical development</li>
<li>Definitional and compositional analysis</li>
<li>Ethical & Social, Legal, Economic and Political impact and sector-specific solutions</li>
<li>The regulatory way forward</li>
</ol>
<p style="text-align: justify; ">This literature review is a first step in understanding the existing paradigms and debates around AI before narrowing the focus to more specific applications and subsequently, policy-recommendations.</p>
<p style="text-align: justify; "><a class="external-link" href="http://cis-india.org/internet-governance/files/artificial-intelligence-literature-review"><b>Download the full literature review</b></a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-literature-review'>https://cis-india.org/internet-governance/blog/artificial-intelligence-literature-review</a>
</p>
Shruthi Anand | Internet Governance | Artificial Intelligence | Privacy | 2017-12-18T15:12:52Z | Blog Entry

Roundtable on Artificial Intelligence & Healthcare
https://cis-india.org/internet-governance/events/roundtable-on-artificial-intelligence-and-healthcare
<b>Centre for Internet & Society (CIS) is organizing a roundtable on artificial intelligence (AI) and healthcare at 'The Energy and Resources Institute' (TERI) in Bengaluru on November 30, 2017 from 2 p.m. to 5 p.m. The roundtable seeks to discuss the various issues and challenges surrounding the implementation of AI and related technologies in the Indian healthcare sector.</b>
<p style="text-align: justify; ">The Indian healthcare industry, powered by Artificial Intelligence, is moving into a new era of increased innovation and independence. With multiple new healthcare start-ups and large ICT companies such as Microsoft, IBM, and Google offering AI solutions to healthcare challenges in the country, it is evident that AI is being used to enhance the accessibility, affordability, quality, and awareness of healthcare in India. Major target areas sought to be enhanced by the use of AI in healthcare include addressing the uneven ratio of skilled doctors to patients and making doctors more efficient at their jobs, delivery of personalized and high-quality healthcare to rural areas, and training doctors and nurses in complex procedures.</p>
<p style="text-align: justify; ">Through the application of machine learning, data mining, natural language processing (NLP), and advanced analytics, AI can help doctors in the speedy diagnosis of diseases. AI is also mobilised, in various forms, as ‘smart advisors’ or virtual humans capable of making informed decisions by better comprehending data and information through sensing interfaces and analytics.</p>
<p style="text-align: justify; ">Some of these forms include ‘customer service agents’ that can expedite simple tasks like appointment scheduling or support more complex decisions like selecting health plan benefits, ‘clinicians’ that can help with primary screening in understaffed rural areas, possibly substituting for human labour, and ‘cognitive agents’ that can efficiently manage existing clinical knowledge alongside physicians, nurses, and researchers, thereby reducing the cognitive load on humans. AI-based Indian healthcare start-ups such as SigTuple, Aindra, Ten3T, Touchkin, and many others are offering a range of solutions, including automation of medical diagnosis, automated analysis of medical tests, detection and screening of diseases, wearable sensor-based medical devices and monitoring equipment, patient management systems, predictive healthcare diagnosis, and disease prevention.</p>
<p style="text-align: justify; ">However, AI in healthcare raises many potential concerns, a common one being the lack of comprehensive, representative, interoperable, and clean data - a challenge that is beginning to be addressed through the Electronic Health Records Standards issued by the Ministry of Health and Family Welfare in 2016. Other major challenges include patient adoption and the need for personal interaction with doctors, concerns over mass-scale job losses, distrust in technology, and ethical concerns.</p>
<p style="text-align: justify; ">It is important to note that implementing AI in healthcare, however disruptive, does not mean replacing doctors but augmenting their efforts to create a more efficient healthcare landscape in the country. A harmonious collaboration of humans and machines is expected to bring about a meaningful and long-lasting impact, and stakeholders should be prepared to adapt to this change and the challenges that come with it.</p>
<hr />
<h3 style="text-align: justify; ">Roundtable Agenda</h3>
<p dir="ltr"><span>Thursday, November 30, 2017, 2:00pm - 5:00pm </span></p>
<p dir="ltr"><span>2:00 - 2:30: Introduction and setting the scene </span></p>
<p dir="ltr"><span>2:30 - 3:30: Discussion on the AI landscape in health in India: </span></p>
<ul>
<li><span>Manner and extent of integration of AI into products/services of healthcare companies.</span></li>
<li><span>Relevant stakeholders and their roles in implementing AI into products/services of healthcare companies.</span></li>
<li><span>Future of AI and related technologies in the healthcare sector.</span></li>
</ul>
<p dir="ltr" style="text-align: justify; "><span>3:30 - 4:30: Discussion on challenges and solutions towards regulating AI in India: </span></p>
<ul>
<li dir="ltr" style="list-style-type:disc; "><span>Challenges faced in the conception and implementation of the AI product/service, and reasons for such challenges.</span></li>
<li dir="ltr" style="list-style-type:disc; "><span>Regulatory provisions for implementation of AI in healthcare products/services under the existing laws, and the need for reforms.</span></li>
<li dir="ltr" style="list-style-type:disc; "><span>Challenges posed by AI to existing policy and regulatory frameworks in the Indian as well as the global context, and possible solutions.</span></li>
</ul>
<hr />
<p><a class="external-link" href="http://cis-india.org/internet-governance/files/a-i-and-manufacturing-and-services">Click to download the invite</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/events/roundtable-on-artificial-intelligence-and-healthcare'>https://cis-india.org/internet-governance/events/roundtable-on-artificial-intelligence-and-healthcare</a>
</p>
Admin · Event · Artificial Intelligence · Healthcare · 2018-01-02