The Centre for Internet and Society
https://cis-india.org
These are the search results for the query, showing results 51 to 65.
Banking on artificial intelligence: In hiring drive, Bots are calling the shots now
https://cis-india.org/internet-governance/news/economic-times-anjali-venugopalan-june-4-2019-banking-on-artificial-intelligence
<b>Algorithms analyse expressions, tone to check for traits such as confidence, anger in video interviews. </b>
<p style="text-align: justify; ">The article by Anjali Venugopalan was <a class="external-link" href="https://economictimes.indiatimes.com/jobs/banking-on-artificial-intelligence-in-hiring-drive-bots-are-calling-the-shots-now/articleshow/69641832.cms">published in the Economic Times</a> on June 4, 2019; Sunil Abraham was quoted in it. The article was also mirrored on <a class="external-link" href="https://tech.economictimes.indiatimes.com/news/technology/in-hiring-drive-bots-are-calling-the-shots-now/69641830">ET Tech</a>.</p>
<hr />
<p style="text-align: justify; ">The future of hiring is already upon us. Algorithms are analysing people’s expressions and tone of voice to check for traits such as “confidence” and “happiness” during video interviews. The robotic video assessment software is then used to hire candidates — customer service operators and assistant vice presidents alike — though the process comes with its own set of problems.</p>
<p style="text-align: justify; ">Axis Bank used algorithm-based video interviews — along with aptitude tests — to hire around 2,000 customer service officers from a pool of more than 40,000 applicants this year, said Rajkamal Vempati, HR head of the private sector bank, adding it could standardise and scale up the process of hiring.</p>
<p style="text-align: justify; ">HR managers only gave offer letters, he said.</p>
<p style="text-align: justify; ">Nirmal Singh, CEO of Wheebox, a division of PeopleStrong which carried out the hiring, said it trained the face-indexing software — sourced from Microsoft — using around 50,000 candidates who had applied to Axis Bank in 2017. The software picked up emotional states such as “nervousness” and “happiness” based on eye movements, expressions and tone of voice and marked the candidates, Singh said. Scores from candidates who were shortlisted were used to come up with the “cutoff” for these traits.</p>
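<p style="text-align: justify; ">The cutoff-setting step Singh describes -- scoring every candidate on each trait, then deriving a threshold from the scores of those who were ultimately shortlisted -- can be sketched roughly as follows. This is a purely illustrative reconstruction: the trait names, the score scale, and the choice of the minimum shortlisted score as the cutoff are assumptions, not details of Wheebox's actual system.</p>

```python
# Illustrative sketch of deriving per-trait cutoffs from shortlisted candidates.
# Trait names, the 0-100 scale, and the min-based cutoff rule are all assumptions.

def derive_cutoffs(scores, shortlisted):
    """scores: {candidate_id: {trait: value}}; shortlisted: set of candidate ids.
    Returns, for each trait, the lowest score among shortlisted candidates."""
    traits = next(iter(scores.values())).keys()
    return {t: min(scores[c][t] for c in shortlisted) for t in traits}

def passes(candidate_scores, cutoffs):
    """A candidate clears the (assumed) cutoff by meeting it on every trait."""
    return all(candidate_scores[t] >= cutoffs[t] for t in cutoffs)

scores = {
    "a": {"confidence": 72, "happiness": 64},
    "b": {"confidence": 55, "happiness": 70},
    "c": {"confidence": 80, "happiness": 58},
}
cutoffs = derive_cutoffs(scores, shortlisted={"a", "c"})
# cutoffs == {"confidence": 72, "happiness": 58}; candidate "b" would not pass
```

<p style="text-align: justify; ">Under this assumed rule, any later applicant scoring at or above the lowest shortlisted score on every trait would clear the cutoff; a production system would more plausibly use percentiles or a validated threshold.</p>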
<p style="text-align: justify; ">Insurance provider Bajaj Allianz has hired more than 1,600 people, including underwriters and assistant vice presidents, with the help of robotic video assessments that analysed <span>behaviour, said Vikramjeet Singh, chief HR officer, adding it could help reduce human bias.</span></p>
<h3><span>Concerns over Software's Biases</span></h3>
<p><span>Talview, a Palo Alto-headquartered company with operations in Singapore and the United States, provided the assessment for the insurer. </span></p>
<p style="text-align: justify; ">The software, sourced from Microsoft and IBM, can analyse states such as “anger” and “happiness” from expressions, “confidence” from voice tone and traits like “ability to work in a team” and “decisiveness” from text analysis, according to Rajeev Menon, chief product officer, Talview.</p>
<p style="text-align: justify; ">Candidates may be able to beat questionnaires by giving expected answers to questions like “Can you work in a team?”, but video assessments pick up on subtleties in expression and vocabulary, and cannot be gamed, Menon said. Be that as it may, Amazon.com scrapped its artificial intelligence-based recruiting system after it found the AI system biased against women, according to an October 2018 report by Reuters.</p>
<p style="text-align: justify; ">The AI system was drawing on data from the past, where more men had made it into the company than women. “If you can fool a human, you can fool a computer,” said Sunil Abraham, executive director of the Centre for Internet and Society. Recruitment algorithms could “homogenise the emotional economy” by forcing people to act a certain way, he said.</p>
<p style="text-align: justify; ">Since the software is based on expressions and tone of voice, it could disadvantage less expressive people, like those who are autistic, said Wheebox’s Singh.</p>
<p style="text-align: justify; ">Facial recognition by companies such as IBM, Microsoft and Amazon got the gender of a dark-skinned woman wrong as often as one time in three (a 20-35% error rate), a 2018 study by MIT researcher Joy Buolamwini found. For white males, the error rate was 0.8%.</p>
<h3 style="text-align: justify; ">Video Assessments</h3>
<p style="text-align: justify; ">Facial recognition has nothing to do with video analytics, Wheebox’s Singh said. The two are, however, closely linked, said Animashree Anandkumar, professor of computing and mathematical sciences at California Institute of Technology.</p>
<p style="text-align: justify; ">She said such software was “deeply problematic”, as it could correlate wrong factors (like gender or skin colour) and show that as the cause for success. It is possible dark-skinned people would be disadvantaged, said Menon of Talview. The company uses facial expression as just one input among many and gives it a low weightage, he said. The software they use is only 39% accurate, and will improve with more data, said Ridhima Gauba, co-founder of Interview Air, a Navi Mumbai-based company that provides a similar service to companies and colleges.</p>
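<p style="text-align: justify; ">Menon's remark that facial expression is "just one input among many" given "a low weightage" suggests a weighted combination of per-modality scores. The sketch below assumes a simple weighted average; the modality names, weights, and 0-100 scale are illustrative assumptions, not Talview's actual values.</p>

```python
# Hypothetical weighted combination of per-modality assessment scores.
# Modality names and weights are illustrative assumptions, not Talview's values.

WEIGHTS = {
    "facial_expression": 0.10,  # deliberately small, reflecting the "low weightage" claim
    "voice_tone": 0.30,
    "text_analysis": 0.60,
}

def combined_score(modality_scores):
    """Weighted average of per-modality scores, each on a common 0-100 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(WEIGHTS[m] * modality_scores[m] for m in WEIGHTS)

# A weak facial-expression score barely moves the total when its weight is low:
score = combined_score({"facial_expression": 40, "voice_tone": 70, "text_analysis": 80})
# 0.1*40 + 0.3*70 + 0.6*80, i.e. approximately 73
```

<p style="text-align: justify; ">With these assumed weights, even a very low facial-expression reading shifts the combined score by at most a few points, which is one way a vendor could down-weight a modality known to be unreliable for some groups.</p>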
<p style="text-align: justify; ">Companies also say video assessments are a risky business.</p>
<p style="text-align: justify; ">Bajaj Allianz does not use video assessments for recruitment beyond middle management. It is “important to see a person physically” when hiring for senior positions, said Asha Sharma, manager (corporate HR) of Everest Industries.</p>
<p style="text-align: justify; ">The company, however, uses pre-recorded video interviews — where the computer asks questions — to hire juniors from campuses, she said.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/economic-times-anjali-venugopalan-june-4-2019-banking-on-artificial-intelligence'>https://cis-india.org/internet-governance/news/economic-times-anjali-venugopalan-june-4-2019-banking-on-artificial-intelligence</a>
</p>
No publisher | Anjali Venugopalan | Internet Governance | Artificial Intelligence | 2019-07-02T05:38:26Z | News Item

Artificial Intelligence: a Full-Spectrum Regulatory Challenge [Working Draft]
https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft
<p>Today, there are certain misconceptions regarding the regulation of AI. Some corporations would like us to believe that AI is being developed and used in a regulatory vacuum. Others, in civil society organisations, believe that AI is a regulatory circumvention strategy deployed by corporations; as a result, these organisations call for onerous regulations targeting corporations. However, some uses of AI by corporations can be completely benign, and some uses of AI by the state can result in the most egregious human rights violations. Therefore, policy makers need to deploy every regulatory tool in their arsenal to unlock the benefits of AI and mitigate its harms.</p>
<p>This policy brief proposes a granular, full-spectrum approach to the regulation of AI depending on who is using AI, who is impacted by that use, and what human rights are impacted. Everything from deregulation, to forbearance, to updated regulations, to absolute and blanket prohibitions needs to be considered depending on the specifics. This approach stands in contrast to approaches based on ethics, omnibus law, homogeneous principles, and human rights, which result in inappropriate under-regulation or over-regulation of the sector.</p>
<p>Find a copy of the working draft <a href="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft-pdf" class="internal-link" title="Artificial Intelligence: A Full-Spectrum Regulatory Challenge (Working Draft) PDF">here</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft'>https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft</a>
</p>
No publisher | sunil | Regulatory Practices Lab | Internet Governance | Artificial Intelligence | 2020-08-04T06:10:13Z | Blog Entry

Artificial Intelligence in India: A Compendium
https://cis-india.org/internet-governance/blog/artificial-intelligence-in-india-a-compendium
<b>Artificial Intelligence (AI) is fast emerging as a key technological paradigm in different sectors across the globe including India.</b>
<p style="text-align: justify;">To understand the state of AI in India, the challenges to its development and adoption, and the ethical concerns that arise from its use, CIS is undertaking research to document national developments, discourse, and impact (actual and potential), to explore ethical and regulatory solutions, and to compare these against global developments in the space. As part of this, CIS is creating a compendium of reports that dive into the use of AI across sectors including healthcare, manufacturing, governance, and finance.</p>
<p style="text-align: justify;">Each report seeks to map the present state of AI in the respective sector. In doing so, it explores:</p>
<ul>
<li><strong>Use</strong>: What is the present use of AI in the sector? What is the narrative and discourse around AI in the sector?</li>
<li><strong>Actors</strong>: Who are the key stakeholders involved in the development, implementation and regulation of AI in the sector?</li>
<li><strong>Impact</strong>: What is the potential and existing impact of AI in the sector?</li>
<li><strong>Regulation</strong>: What are the challenges faced in policy making around AI in the sector?</li>
</ul>
<p style="text-align: justify;">The reports are as follows:</p>
<ul>
<li>
<div><a href="https://cis-india.org/internet-governance/ai-and-healthcare-report" class="internal-link" title="AI and Healthcare Report">AI and the Healthcare Industry in India</a></div>
</li>
<li>
<div><a class="external-link" href="http://cis-india.org/internet-governance/files/AIManufacturingandServices_Report_02.pdf">AI and the Manufacturing and Services Sector in India</a></div>
</li>
<li><a href="https://cis-india.org/internet-governance/files/ai-in-banking-and-finance" class="internal-link" title="AI in Banking and Finance">AI and the Banking and Finance Industry in India</a>: (19th June 2018 Update: This case study has been modified to remove interview quotes, which are in the process of being confirmed. The link above is the latest draft of the report.)</li><li><a href="https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf" class="internal-link" title="AI and Governance Case Study pdf">AI in the Governance Sector in India<br /></a></li></ul>
<hr />
The research is funded by Google India. Comments and feedback are welcome. The reports are drafts.
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-india-a-compendium'>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-india-a-compendium</a>
</p>
No publisher | Centre for Internet & Society | Internet Governance | Artificial Intelligence | 2023-05-09T06:56:25Z | Blog Entry

Artificial Intelligence in Governance: A Report of the Roundtable held in New Delhi
https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi
<b>This report provides an overview of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance, conducted at the India Islamic Cultural Centre in New Delhi on March 16, 2018. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context. The event was attended by participants from academia, civil society, the legal sector, the finance sector, and the government.</b>
<p><span>Event Report: </span><a class="external-link" href="https://cis-india.org/internet-governance/files/ai-in-governance">Download</a><span> (PDF)</span></p>
<hr />
<p style="text-align: justify; ">This report provides a summary of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance (hereinafter referred to as ‘the Roundtable’). The Roundtable took place at the India Islamic Cultural Centre in New Delhi on March 16, 2018 and included participation from academia, civil society, law, finance, and government. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context.</p>
<p style="text-align: justify; ">The Roundtable began with a presentation by Amber Sinha (Centre for Internet and Society - CIS) providing an overview of the CIS’s research objectives and findings thus far. During this presentation, he defined both AI and the scope of CIS’s research, outlining the areas of law enforcement, defense, education, judicial decision making, and the discharging of administrative functions as the main areas of concerns for the study. The presentation then outlined the key AI deployments and implementations that have been identified by the research in each of these areas. Lastly, the presentation raised some of the ethical and legal concerns related to this phenomenon.</p>
<p style="text-align: justify; "><span>The presentation was followed by the Roundtable discussion, which covered various topics regarding the uses, challenges, ethical considerations and implications of AI in the sector. This report identifies a number of key themes evident throughout these discussions. These themes include: (1) the meaning and scope of AI, (2) AI’s sectoral applications, (3) human involvement with automated decision making, (4) social and power relations surrounding AI, (5) regulatory approaches to AI, and (6) challenges to adopting AI. These themes are explored further below.</span></p>
<h3><span>Meaning and Scope of AI</span></h3>
<p dir="ltr" style="text-align: justify; "><span>One of the first tasks recommended by the group of participants was to define the meaning and scope of AI and the way those terms are used and adopted today. These concerns included the need to establish a distinction between the use of algorithms, machine learning, automation and artificial intelligence. Several participants believed that establishing consensus around these terms was essential before proceeding towards a stage of developing regulatory frameworks around them.</span></p>
<p dir="ltr" style="text-align: justify; "><span>The general point of agreement was that AI as we understand it does not necessarily extend to complete independence in automated decision making; it refers instead to the varying levels of machine learning (ML) and the automation of certain processes that have already been achieved. Several concerns that emerged during the course of the discussion centred on the question of autonomy and transparency in the process of ML and algorithmic processing. Stakeholders recommended that, over and above the debates of humans in the loop,[1] on the loop,[2] and out of the loop,[3] there were several other gaps with respect to AI and its usage in the industry today which also need to be considered before building a roadmap for future usage. Key issues like information asymmetries, communication lags, a lack of transparency, the increased mystification of the coding process and the centralization of power all needed to be examined and analysed under the rubric of developing regulatory frameworks.</span></p>
<p dir="ltr" style="text-align: justify; "><span>Takeaway Point: The group brought out the need for standardization of terminology as well as the establishment of globally replicable standards surrounding the usage, control and proliferation of AI. The discussion also brought up the problems with universal applicability of norms. One participant raised the lack of normative frameworks around the usage and proliferation of AI; another responded by alluding to the Asilomar AI principles,[4] a set of 23 principles aimed at directing and shaping future AI research. The discussion brought out further issues regarding the enforceability as well as the universal applicability and global relevance of the principles. Participants recommended the development of a shorter, more universally applicable regulatory framework that could address various contextual limitations as well.</span></p>
<h3><span>AI Sectoral Applications</span></h3>
<p><span>Participants mentioned a number of both current and potential applications of AI technologies, referencing the defence sector, the financial sector, and the agriculture sector. There are several developments taking place on the Indian military front with the Committee on AI and National Security being established by the Ministry of Defence. Through the course of the discussion it was also stated that the Indian Armed Forces were very interested in the possibilities of using AI for their own strategic and tactical purposes. From a technological standpoint, however, there has been limited progress in India in researching and developing AI. </span></p>
<p><span>While India does deploy some Unmanned Aerial Vehicles (UAVs), they are mostly bought from Israel, and often are not autonomous. It was also pointed out that contrary to reportage in the media, the defence establishment in India is extremely cautious about the adoption of autonomous weapons systems, and that the autonomous technology being rolled out by the Centre for Artificial Intelligence and Robotics (CAIR) is not yet considered trustworthy enough for deployment.</span></p>
<p style="text-align: justify; "><span>Discussions further revealed that the few technologies that have a relative degree of autonomy are primarily loitering munitions and are used to target radar installations for reconnaissance purposes. One participant mentioned that while most militaries are interested in deploying AI, it is primarily from an Intelligence, Surveillance and Reconnaissance (ISR) perspective. The only exception to this generalization is China, where the military ethos and command structure would work better with increased reliance on independent AI systems. One major AI system rolled out by the US is Project Maven, which is primarily an ISR system. The aim of using these systems is to improve decision making and enhance data analysis, particularly since battlefields generate a lot of data that isn’t used anywhere.</span></p>
<p style="text-align: justify; "><span>Another sector discussed was the securities market where algorithms were used from an analytical and data collection perspective. A participant referred to the fact that machine learning was being used for processes like credit and trade scoring -- all with humans on the loop. The participant further suggested that while trade scoring was increasingly automated, the overall predictive nature of such technologies remained within a self limiting capacity wherein statistical models, collected data and pattern analysis were used to predict future trends. The participant questioned whether these algorithms could be considered as AI in the truest sense of the term since they primarily performed statistical functions and data analysis.</span></p>
<p style="text-align: justify; "><span>One participant also recommended the application of AI to sectors like agriculture with the intention of gradually acclimatizing users to the technology itself. Respondents also stated that while AI technologies were being used in the agricultural space, it was primarily from the standpoint of data collection and analysis as opposed to predictive methods. It was mentioned that a challenge to the broad adoption of AI in this sector is that the core problems of adopting AI as a methodology – namely information asymmetries, excessive data collection, limited control/centralization and the obfuscatory nature of code – would not be addressed. Lastly, participants also suggested that within the Indian framework not much was being done aside from addressing farmers’ queries and analysing the data from those concerns.</span></p>
<p style="text-align: justify; "><span>Takeaway Point: The discussion drew attention to the various sectors where AI was currently being used -- such as the military space, agricultural development and the securities market -- as well as potential spaces of application -- such as healthcare and manual scavenging. The key challenges that emerged were information asymmetries with respect to the usage of these technologies as well as limited capacity in terms of technological advancement.</span></p>
<h3><span>Human Involvement with Automated Decision Making</span></h3>
<p style="text-align: justify; "><span>Large parts of discussions throughout the Roundtable event were preoccupied with automated decision making and specifically, the involvement of humans (human on and in the loop) or lack thereof (human out of the loop) in this process. These discussions often took place with considerations of AI for prescriptive and descriptive uses.</span></p>
<p style="text-align: justify; "><span>Participants expressed that human involvement was not needed when AI was being used for descriptive purposes, such as determining relationships between variables in large data sets. Many agreed on the superior ability of ML and similar AI technologies in describing large and unorganized datasets. It was the prescriptive uses of AI where participants saw the need for human involvement, with many questioning whether the technology should make more important decisions by itself.</span></p>
<p style="text-align: justify; "><span>The need for human involvement in automated decision making was further justified by references to various instances of algorithmic bias in the American context. One participant, for example, brought up the use of algorithmic decision making by a school board in the United States for human resource practices (hirings, firing, etc.) based on the standardized test scores of students. In this instance, such practices resulted in the termination of teachers primarily from low income neighbourhoods.[5] The main challenge participants identified in regards to human on the loop automated decision making is the issue of capacity, as significant training would have to be achieved for sectors to have employees actively involved in the automated decision making workflow.</span></p>
<p style="text-align: justify; "><span>An example in the context of the healthcare field was brought up by one participant arguing for human in the loop in regards to prescriptive scenarios. The participant suggested that AI technology, when given x-ray or MRI data for example, should only be limited to pointing out the correlations of diseases with patients’ scans/x-rays. Analysis of such correlations should be reserved for the medical expertise of doctors who would then determine if any instances of causality can be identified from this data and if it’s appropriate for diagnosing patients.</span></p>
<p style="text-align: justify; "><span>It was emphasized that, despite a preference for human on/in the loop in regards to automated decision making, there is a need to be cognisant of techno-solutionism due to the human tendency of over reliance on technology when making decisions. A need for command and control structures and protocols was emphasized for various governance sectors in order to avoid potentially disastrous results through a checks and balances system. It was noted that the defense sector has already developed such protocols, having established a chain of command due to its long history of algorithmic decision making (e.g. the Aegis Combat System being used by the US Navy in the 1980s).</span></p>
<p style="text-align: justify; "><span>One key reason why militaries prefer human in and on the loop systems as opposed to out of the loop systems is because of the protocol associated with human action on the battlefield. International Humanitarian Law has clear indicators of what constitutes a war crime and who is to be held responsible in the scenario but developing such a framework with AI systems would be challenging as it would be difficult to determine which party ought to be held accountable in the case of a transgression or a mistake.</span></p>
<p style="text-align: justify; "><span>Takeaway Point: It was reiterated by many participants that neither AI technology nor India’s regulatory framework is at a point where AI can be trusted to make significant decisions alone -- especially when such decisions are evaluating humans directly. It was recommended that human out of the loop decision making should be reserved for descriptive practices, whereas human on and in the loop decision making should be used for prescriptive practices. Lastly, it was also suggested that appropriate protocols be put in place to direct those involved in the automated decision making workflow, particularly when the process involves judgements and complex decision making in sectors such as jurisprudence and the military.</span></p>
<h3><span>The Social and Power Relations Surrounding AI</span></h3>
<p style="text-align: justify; ">Some participants emphasized the need to contextualize discussions of AI and governance within larger themes of poverty, global capital and power/social relations. Their concerns were that the use of AI technologies would only create and reinforce existing power structures and should instead be utilized towards ameliorating such issues. Manual scavenging, for example, was identified as an area where AI could be used to good effect if coupled with larger socio-political policy changes. There are several hierarchies that could potentially be reinforced through this process and all these failings needed to be examined thoroughly before such a system was adopted and incorporated within the real world.</p>
<p style="text-align: justify; ">Furthermore the discussion also revealed that the objectivity attributed to AI and ML tends to gloss over the fact that there are nonetheless implicit biases that exist in the minds of the creators that might work themselves into the code. Fears regarding technology recreating a more exclusionary system were not entirely unfounded as participants pointed out the fact that the knowledge base of the user would determine whether technology was used as a tool of centralization or democratization.</p>
<p style="text-align: justify; ">One participant also questioned the concept of governance itself, contrasting the Indian government’s usage of the term in the 1950s (as it appears in the Directive Principle) with that of the World Bank in the 1990s.</p>
<p style="text-align: justify; "><span>Takeaway Point: Discussions of the implementation and deployment of AI within the governance landscape should attempt to take into consideration larger power relations and concepts of equity.</span></p>
<h3><span>Regulatory Approaches to AI</span></h3>
<p style="text-align: justify; "><span>Many recognized the need for AI-specific regulations across Indian sectors, including governance. These regulations, participants stated, should draw from notions of accountability, algorithmic transparency and efficiency. Furthermore, it was also stated that such regulations should consider the variations across the different legs of the governance sector, especially in regards to defence. One participant, pointing to the larger trends towards automation, recommended the establishment of certain fundamental guidelines aimed at directing the applicability of AI in general. The participant drew attention to the need for a robust evaluation system for various sectors (the criminal justice system, the securities market, etc.) as a way of providing checks on algorithmic biases. Another emphasized the need for regulations for better quality data, so as to ensure machine readability and processability for various AI systems.</span></p>
<p style="text-align: justify; "><span>Another key point that emerged was the importance of examining how specific algorithms performed processes like identification or detection. A participant recommended the need to examine the ways in which machines identify humans and what categories/biases could infiltrate machine-judgement. They reiterated that if a new element was introduced in the system, the pre-existing variables would be impacted as well. The participant further recommended that it would be useful to look at these systems in terms of the couplings that get created in order to determine what kinds of relations are fostered within that system.</span></p>
<p style="text-align: justify; "><span>The roundtable saw some debate regarding the most appropriate approach to developing such regulations. Some participants argued for a harms-based approach, particularly in regards to determining if regulations are needed all together for specific sectors (as opposed to guidelines, best practices, etc.). The need to be cognisant of both individual and structural harms was emphasized, mindful of the possibility of algorithmic biases affecting traditionally marginalized groups.</span></p>
<p style="text-align: justify; "><span>Others saw value in a harms-based approach only insofar as it could help outline appropriate penalties in the event that regulations were violated, arguing instead for a rights-based approach because it left greater room for technological change. A number of participants reiterated that an approach mindful of emerging AI technologies was crucial to any regulatory framework. The need for a regulatory space that allowed for technological experimentation without the fear of constitutional violation was also communicated.</span></p>
<p style="text-align: justify; "><span>Takeaway Point: The need for an AI-specific regulatory framework cognisant of differentiations across sectors in India was emphasized. There is some debate about the most appropriate approach for such a framework, with a harms-based approach identified by many as providing the best perspective on regulatory need and penalties. Some identified the rights-based approach as providing the most flexibility for a rapidly evolving technological landscape.</span></p>
<h3><span>Challenges to Adopting AI</span></h3>
<p style="text-align: justify; "><span>Of all the concerns regarding the adoption of algorithms, ML and AI, the two key points of resistance that emerged centred on accountability and transparency. Participants suggested that within an AI system, predictability would be a key concern, and that in the absence of predictable outcomes, establishing redressal mechanisms would pose key challenges as well.</span></p>
<p style="text-align: justify; ">A discussion was also initiated regarding the problems involved in attributing responsibility within the AI chain, as well as the need to demystify the use of AI in daily life. Describing the current landscape, participants spoke about how the use of AI is currently limited to the automation of certain tasks and processes in certain sectors, where algorithmic processing is primarily used as a tool of data collection and analysis as opposed to an independent decision-making tool.</p>
<p style="text-align: justify; ">One suggestion that emerged during the discussion was that a gradual, sector-by-sector adoption of AI might be more beneficial, as it would provide breathing room to test the system and establish trust between developers, providers, and consumers. This prompted a debate about the controllers and the consumers of AI and how the gap between the two would need to be negotiated. The debate also brought up larger concerns regarding the mystification of AI as a process itself and the complications of translating code into communicable points of intervention.</p>
<p style="text-align: justify; ">Another major issue that emerged was the question of attribution of responsibility in the case of mistakes. In the legal process as it currently exists, human imperfections notwithstanding, it would be possible to attribute the blame for decisions taken to certain actants undertaking the action. Similarly in the defence sector, it would be possible to trace the chain of command and identify key points of failure, but in the case of AI based judgements, it would be difficult to place responsibility or blame. This observation led to a debate regarding accountability in the AI chain. It was inconclusive whether the error should be attributed to the developer, the distributor or the consumer.</p>
<p style="text-align: justify; ">One suggestion offered to counter the information asymmetry, as well as to reduce the mystification of computational methods, was to make the algorithm and its processes transparent. This sparked a debate, however: participants stated that while such transparency ought to be aspired to, it would be accompanied by certain threats to the system. A key challenge pointed out was that if the algorithm were made transparent and its details shared, there would be several ways to manipulate, translate and misuse it.</p>
<p style="text-align: justify; ">Another question that emerged concerned the distribution of AI technologies and the centralization of the proliferation process, particularly in terms of service provision. One participant suggested that, given the limited nature of research being undertaken and the paucity of resources, a limited number of companies would end up holding the best technology, resources and people. They further suggested that these technologies might end up being rolled out as a service on a contractual basis, in which case it would be important to track how the service was being controlled and delivered. Models of transference would become central points of negotiation, with choices among procurement-based, lease-based, and ownership-based models of service delivery. Participants suggested that this was going to be a key factor in determining how to approach these issues from a legal and policy standpoint.</p>
<p style="text-align: justify; ">Takeaway Point: The two key points of resistance that emerged during the course of discussion were accountability and transparency. Participants pointed out the various challenges involved in attributing blame within the AI chain, and they also spoke about the complexities of opening up AI code, thereby leaving it vulnerable to manipulation. Certain other challenges that were briefly touched upon were information asymmetry, excessive data collection, centralization of power in the hands of the controllers, and complicated service distribution models.</p>
<h3 style="text-align: justify; ">Conclusion</h3>
<p style="text-align: justify; ">The Roundtable provided some insight into larger debates regarding the deployment and applications of AI in the governance sector of India. The need for a regulatory framework, as well as globally replicable standards surrounding AI, was emphasized, particularly one mindful of the needs of differing fields of the governance sector (especially defence). Furthermore, the need for human-on-the-loop and human-in-the-loop practices in automated decision making was highlighted, particularly when such decisions are responsible for directly evaluating humans. Contextualising AI within its sociopolitical parameters was another key recommendation, as it would help filter out the biases that might work themselves into the code and affect the performance of the algorithm. Further, it is necessary to examine the involvement and influence of the private sector in the deployment of AI for governance, which often translates into the delivery of technological services by private actors to public bodies in the discharge of public functions. This has clear implications for requirements of transparency and procedural fairness even in private sector delivery of these services. Defining the meaning and scope of AI, while working to demystify algorithms themselves, would serve to strengthen regulatory frameworks as well as make AI more accessible to the user or consumer.</p>
<hr />
<p style="text-align: justify; ">[1]. Automated decision making model where final decisions are made by a human operator.</p>
<p style="text-align: justify; ">[2]. Automated decision making model where decisions can be made without human involvement but a human can override the system.</p>
<p style="text-align: justify; ">[3]. A completely autonomous decision making model requiring no human involvement.</p>
<p style="text-align: justify; ">[4]. https://futureoflife.org/ai-principles/</p>
<p style="text-align: justify; ">[5]. The participant was drawing this example from Cathy O’Neil’s Weapons of Math Destruction, (Penguin,2016), at 4-13.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi'>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi</a>
</p>
No publisherSaman Goudarzi and Natallia KhaniejoInternet GovernanceArtificial IntelligencePrivacy2018-05-03T15:49:40ZBlog EntryArtificial Intelligence for India's Transformation
https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation
<b>ASSOCHAM's 3rd International Conference was organized at Hotel Imperial in New Delhi. Amber Sinha participated in a session on the use, impact and ethics of AI. </b>
<p>Click to <a class="external-link" href="http://cis-india.org/internet-governance/files/ai-in-ethics-agenda/view">view the agenda</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation'>https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation</a>
</p>
No publisherAdminInternet GovernanceArtificial Intelligence2019-03-20T01:38:48ZNews ItemArtificial Intelligence for Growth: Leveraging AI and Robotics for India's Economic Transformation
https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation
<b>Amber Sinha took part in the second international conference organized by ASSOCHAM at Hotel Shangri-La in New Delhi on April 27, 2018.</b>
<h3>Keynote Address</h3>
<p>12.15 p.m. - 12.30 p.m.: Shri Gopalakrishnan S., Joint Secretary, Ministry of Electronics and IT, Government of India</p>
<h3>Special Address</h3>
<p style="text-align: justify; ">12.30 p.m. - 12.45 p.m.: Dr. Pushpak Bhattacharyya, Director and Professor, Computer Science and Engg, IIT Patna and Chairman, BIS Committee for Standardisation in Artificial Intelligence</p>
<h2 style="text-align: justify; ">Panel Discussion</h2>
<h3>Session Moderator</h3>
<p>12.45 p.m. - 1.40 p.m.: Shri Sudipta Ghosh, India Leader, Data and Analytics, PwC</p>
<h3>Panelists</h3>
<ul>
<li>Shri Amber Sinha, Senior Programme Manager, Centre for Internet and Society</li>
<li>Shri Utpal Chakraborty, Lead Architect - AI, L&T Infotech </li>
<li>Shri Atul Rai, CEO & Co-Founder, Staqu Technologies</li>
<li>Shri Prabhat Manocha, IBM</li>
</ul>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation'>https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation</a>
</p>
No publisherAdminInternet GovernanceArtificial IntelligencePrivacy2018-05-05T09:08:07ZNews ItemArtificial Intelligence and Data Initiative
https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative
<b>On 3 May 2019 Arindrajit Basu attended a meeting of the Artificial Intelligence and Data Initiative held at IIC in Delhi. He is a member of the Working Group and is co-authoring a report with Anindya Chaudhuri of the Global Development Network on the prospect of collaborations in public uses of AI.</b>
<p>The agenda can be <a class="external-link" href="http://cis-india.org/internet-governance/files/artificial-intelligence-and-data-initiative">viewed here</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative'>https://cis-india.org/internet-governance/news/artificial-intelligence-and-data-initiative</a>
</p>
No publisherAdminInternet GovernanceArtificial Intelligence2019-05-14T15:06:02ZNews ItemArtificial Intelligence - Literature Review
https://cis-india.org/internet-governance/blog/artificial-intelligence-literature-review
<b>With origins dating back to the 1950s Artificial Intelligence (AI) is not necessarily new. However, interest in AI has been rekindled over the last few years, in no small measure due to the rapid advancement of the technology and its applications to real-world scenarios. In order to create policy in the field, understanding the literature regarding existing legal and regulatory parameters is necessary. This Literature Review is the first in a series of reports that seeks to map the development of AI, both generally and in specific sectors, culminating in a stakeholder analysis and contributions to policy-making. This Review analyses literature on the historical development of the technology, its compositional makeup, sector-specific impacts and solutions and finally, overarching regulatory solutions.</b>
<p>Edited by Amber Sinha and Udbhav Tiwari; Research Assistance by Sidharth Ray</p>
<hr />
<p style="text-align: justify; ">With origins dating back to the 1950s, Artificial Intelligence (AI) is not necessarily new. However, with an increasing number of real-world applications, interest in AI has been reignited over the last few years.</p>
<p style="text-align: justify; ">The rapid and dynamic pace of AI's development has made it difficult to predict its future path, and is enabling it to alter our world in ways we have yet to comprehend. As a result, law and policy have stayed one step behind the development of the technology.</p>
<p style="text-align: justify; ">Understanding and analyzing existing literature on AI is a necessary precursor to subsequently recommending policy on the matter. By examining academic articles, policy papers, news articles, and position papers from across the globe, this literature review aims to provide an overview of AI from multiple perspectives.</p>
<p style="text-align: justify; ">The structure taken by the literature review is as follows:</p>
<ol>
<li>Overview of historical development</li>
<li>Definitional and compositional analysis</li>
<li>Ethical & Social, Legal, Economic and Political impact and sector-specific solutions</li>
<li>The regulatory way forward</li>
</ol>
<p style="text-align: justify; ">This literature review is a first step in understanding the existing paradigms and debates around AI before narrowing the focus to more specific applications and subsequently, policy-recommendations.</p>
<p style="text-align: justify; "><a class="external-link" href="http://cis-india.org/internet-governance/files/artificial-intelligence-literature-review"><b>Download the full literature review</b></a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-literature-review'>https://cis-india.org/internet-governance/blog/artificial-intelligence-literature-review</a>
</p>
No publisherShruthi AnandInternet GovernanceArtificial IntelligencePrivacy2017-12-18T15:12:52ZBlog EntryAmazon launches Machine Learning-based platform for healthcare space
https://cis-india.org/internet-governance/news/hindustan-times-november-28-2018-kul-bhushan-amazon-launches-machine-learning-based-platform-for-healthcare-space
<b>Amazon’s Comprehend Medical platform uses a new HIPAA-eligible machine learning service to process unstructured medical text and information such as dosages, symptoms and signs, and patient diagnosis.</b>
<p style="text-align: justify; ">The article by Kul Bhushan was published in the <a class="external-link" href="https://www.hindustantimes.com/tech/nov-28-amazon-launches-machine-learning-driven-platform-for-healthcare-space/story-3EuXjDiVO8NLBxjOMKkopO.html">Hindustan Times</a> on November 28, 2018.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; ">With an objective to push deeper into the health space, Amazon has introduced a new <a href="https://www.hindustantimes.com/topic/machine-learning">Machine Learning</a> (ML) software to analyse medical records for better treatments of patients and reduce overall expenditure.</p>
<p style="text-align: justify; ">Unveiled at the company’s re:Invent cloud conference in Las Vegas, Amazon’s Comprehend Medical platform uses a new “HIPAA-eligible machine learning service that allows developers to process unstructured medical text and identify information such as patient diagnosis, treatments, dosages, symptoms and signs, and more.”</p>
<p style="text-align: justify; ">“Comprehend Medical helps health care providers, insurers, researchers, and clinical trial investigators as well as health care IT, biotech, and pharmaceutical companies to improve clinical decision support, streamline revenue cycle and clinical trials management, and better address data privacy and protected health information (PHI) requirements,” explains the company on its <a href="https://aws.amazon.com/blogs/machine-learning/introducing-medical-language-processing-with-amazon-comprehend-medical/" rel="nofollow">website</a>.</p>
<p style="text-align: justify; ">Amazon aims to mitigate the time spent on manually analysing medical data of a patient. The company hopes the software will ultimately empower users to make a more informed decision about their health and even things like scheduling care visits.</p>
<p style="text-align: justify; ">“Unlocking this information from medical language makes a variety of common medical use cases easier and cost-effective, including: clinical decision support (e.g., getting a historical snapshot of a patient’s medical history), revenue cycle management (e.g., simplifying the time-intensive manual process of data entry), clinical trial management (e.g., by identifying and recruiting patients with certain attributes into clinical trials), building population health platforms, and helping address (PHI) requirements (e.g., for privacy and security assurance.),” the company added.</p>
<p style="text-align: justify; ">Amazon also pointed out that some of the medical institutes such as Seattle’s Fred Hutchinson Cancer Research Center and Roche Diagnostics have already implemented the software.</p>
<p style="text-align: justify; ">Amazon’s expansion into the healthcare space comes after it acquired health-focused startup PillPack for $1 billion earlier this year. Apart from Amazon, other technology companies like Apple and Microsoft are investing into the healthcare space.</p>
<p style="text-align: justify; ">Apple is already offering HealthKit and CareKit platforms to develop apps focused on health. The company earlier this year launched <a href="https://www.hindustantimes.com/tech/apple-watch-series-4-launched-with-ecg-compatibility-new-design/story-2LqdNq7YjAXGU3HEH5om8N.html">Apple Watch Series 4 with ECG support</a>. Microsoft, however, has deeper footprints in the health segment. The company is building a bunch of Artificial Intelligence-based tools for healthcare.</p>
<p style="text-align: justify; ">For instance, Microsoft’s Project InnerEye uses machine learning technology to build tools for automatic, quantitative analysis of three-dimensional radiological images.</p>
<p style="text-align: justify; ">According to various reports, Artificial Intelligence is going to make a big impact in the healthcare industry. An Accenture report in 2017 <a href="https://www.accenture.com/t20171215T032059Z__w__/us-en/_acnmedia/PDF-49/Accenture-Health-Artificial-Intelligence.pdf" rel="nofollow" target="_blank">predicted</a> that the AI apps can create $150 billion in annual savings for the United States alone.</p>
<p style="text-align: justify; ">Back in India, the adoption of AI in healthcare is growing. According to a report by the Centre for Internet and Society India, “the use of AI in healthcare in India is increasing with new startups and large ICT companies offering AI solutions for healthcare challenges in the country.”</p>
<p style="text-align: justify; ">Bengaluru-based startup mfine has developed an AI-based healthcare platform which learns medical standards, protocols, and diagnosis and treatment methods to further help doctors with necessary data and analysis. The company earlier this year raised $4.2 million in funding.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/hindustan-times-november-28-2018-kul-bhushan-amazon-launches-machine-learning-based-platform-for-healthcare-space'>https://cis-india.org/internet-governance/news/hindustan-times-november-28-2018-kul-bhushan-amazon-launches-machine-learning-based-platform-for-healthcare-space</a>
</p>
No publisherAdminInternet GovernanceArtificial Intelligence2018-12-03T00:23:06ZNews ItemAI Opera- AI as a total work of art
https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art
<b>On October 11, 2019, Shweta Mohandas and Mira were invited as panelists for the 'AI Opera- AI as a total work of art' event organized by Goethe as part of the India Week Hamburg 2019 held in Bangalore. CIS was an event partner. </b>
<p style="text-align: justify; ">The panel presented different perspectives and possibilities of Artificial Intelligence (AI). The discussion was facilitated by German artist, performer and filmmaker Christoph Faulhaber. For more info, <a class="external-link" href="https://www.goethe.de/ins/in/en/sta/ban/ver.cfm?fuseaction=events.detail&event_id=21670394">click here</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art'>https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art</a>
</p>
No publisherAdminInternet GovernanceArtificial Intelligence2019-10-14T14:30:56ZNews ItemAI in the Future of Work
https://cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work
<b>Artificial Intelligence and allied technologies form part of what is being called the fourth Industrial Revolution.</b>
<p style="text-align: justify; ">Some analysts <a href="https://workofthefuturecongress.mit.edu/wp-content/uploads/2019/06/w25682.pdf">project the loss of jobs</a> as AI replaces humans, especially in job roles that consist of repetitive tasks that are easier to automate. Another prediction is that AI, as preceding technologies, will <a href="https://www.ilo.org/wcmsp5/groups/public/---dgreports/---cabinet/documents/publication/wcms_647306.pdf">enhance and complement</a> human capability, rather than replacing it at large scales. AI at the workplace includes a wide range of technologies, from <a href="https://www.infosys.com/human-amplification/Documents/manufacturing-ai-perspective.pdf">machine-to-machine interactions on the factory floor</a>, to automated decision-making systems.</p>
<h3 style="text-align: justify; ">Studying the Platform Economy</h3>
<p style="text-align: justify; ">The platform economy, in particular, is dependent on AI in the design of aggregator platforms that form a two-way market between customers and workers. Platforms deploy AI at a number of different stages, from recruitment to assignment of tasks to workers. AI systems often reflect existing social biases, as they are built using biased datasets, and by non-diverse teams that are not attuned to such biases. This has been the case in the platform economy as well, where biased systems impact the ability of marginalised workers to access opportunities. To take an example, Amazon’s algorithm to filter workers’ resumes was <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G">biased against women</a> because it was trained on 10 years of hiring data, and ended up reflecting the underrepresentation of women in the tech industry. That is not to say that algorithms introduce biases where they didn’t exist earlier, but that they take existing biases and hard code them into systems in a systematic and predictable manner.</p>
<p style="text-align: justify; ">Biases are made even more explicit in marketplace platforms, which allow employers to review workers’ profiles and skills for a fee. In a study of platforms offering home-based services in India, we found that marketplace platforms offer filtering mechanisms which allow employers to filter workers by demographic characteristics such as gender, age, religion, and in one case, caste (the research publication is forthcoming). The design of the platform itself, in this case, encourages and enables discrimination against workers. One of the leading platforms in India had ‘Hindu maid’ and ‘Hindu cook’ as its top search terms, reflecting the ways in which employers from the dominant religion are encouraged to discriminate against workers from minority religions in the Indian platform economy.</p>
<p style="text-align: justify; ">Another source of bias in the platform economy are rating and pricing systems, which can reduce the quality and quantum of work offered to marginalised workers. Rating systems exist across platform types - those that offer on-demand or location-based work, microwork platforms, and marketplace platforms. They allow customers and employers to rate workers on a scale, and are most often one-way feedback systems to review a worker’s performance (as our forthcoming research discusses, we found very few examples of feedback loops that also allow workers to rate employers). Rating systems <a href="https://datasociety.net/pubs/ia/Discriminating_Tastes_Customer_Ratings_as_Vehicles_for_Bias.pdf">have been found</a> to be a source of anxiety for workers, as they can be rated poorly for unfair reasons, including their demographic characteristics. Most platforms penalise workers for poor ratings, and may even stop them from accessing any tasks at all if their ratings fall below a certain threshold. Without adequate grievance redressal mechanisms that allow workers to contest poor ratings, rating systems are prone to reflect customer biases while appearing neutral. It is difficult to assess the level of such bias without companies releasing data comparing ratings of workers by their demographic characteristics, but it <a href="https://datasociety.net/pubs/ia/Discriminating_Tastes_Customer_Ratings_as_Vehicles_for_Bias.pdf">has been argued</a> that there is ample evidence to believe that demographic characteristics will inevitably impact workers ratings due to widespread biases.</p>
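<p style="text-align: justify; ">The mechanics described above — a one-way rating system with a deactivation cutoff and no appeal path — can be sketched in a few lines. This is a purely hypothetical illustration: the threshold value, function names, and sample ratings below are assumptions, not any specific platform's implementation.</p>

```python
# Hypothetical sketch of a one-way rating system with a deactivation
# threshold, as described in the text. All names and numbers are invented.

DEACTIVATION_THRESHOLD = 4.6  # assumed cutoff; platforms are said to use similar ones

def average_rating(ratings):
    """Mean of customer ratings received by a worker."""
    return sum(ratings) / len(ratings)

def can_receive_tasks(ratings, threshold=DEACTIVATION_THRESHOLD):
    # One-way system: only customers rate workers. There is deliberately
    # no appeal step here -- the grievance-redressal gap the text points out.
    return average_rating(ratings) >= threshold

print(can_receive_tasks([5, 5, 4]))  # True: average ~4.67 clears the cutoff
print(can_receive_tasks([5, 4, 4]))  # False: average ~4.33 locks the worker out
```

<p style="text-align: justify; ">Even in this toy version, a single unfair rating near the cutoff can flip a worker from active to deactivated, which is why the text stresses contestability rather than the apparent neutrality of the average.</p>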
<h3>Searching for a Solution</h3>
<p style="text-align: justify; ">It is clear that platform companies need to be pushed into solving for biases and making their systems more fair and non-discriminatory. Some companies, such as Amazon in the example above, have responded by suspending algorithms that are proven to be biased. However, this is a temporary fix, as companies rarely seek to drop such projects indefinitely. In the platform economy, where algorithms are central to the business model of companies, complete suspension is near impossible. Amazon also tried another quick fix - it <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G">altered the algorithm</a> to respond neutrally to terms such as ‘woman’. This is a process known as debiasing the model, through which any biased connections (such as between the word ‘woman’ and downgrading) being made by the algorithm are explicitly removed. Another solution is diversifying or debiasing datasets. In this example, the algorithm could be fed a larger sample of resumes and decision-making logics from industries that have a higher representation of women.</p>
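<p style="text-align: justify; ">One way to picture the "debiasing" step described above — explicitly removing a biased connection the model has learned — is as zeroing out the weights a scoring model attaches to protected terms. The weights, term names, and scoring function below are invented for illustration; real debiasing of embeddings or classifiers is considerably more involved.</p>

```python
# Hypothetical sketch: a linear resume-scoring model that has learned a
# negative weight for a term correlated with gender, and a "debiasing"
# step that neutralizes such weights. All numbers are illustrative.

weights = {
    "python": 0.8,
    "led_team": 0.6,
    "womens_chess_club": -0.5,  # biased association picked up from skewed data
}

PROTECTED_TERMS = {"womens_chess_club"}

def debias(weights, protected):
    """Zero out weights attached to protected terms so they no longer
    influence the score (the 'respond neutrally' fix described above)."""
    return {term: (0.0 if term in protected else w) for term, w in weights.items()}

def score(resume_terms, weights):
    """Sum the weights of the terms present in a resume."""
    return sum(weights.get(term, 0.0) for term in resume_terms)

resume = ["python", "womens_chess_club"]
neutral_model = debias(weights, PROTECTED_TERMS)

print(score(resume, weights))        # penalised by the biased weight
print(score(resume, neutral_model))  # judged on skills alone
```

<p style="text-align: justify; ">Diversifying the training data, as the text notes, attacks the same problem from the other direction: instead of zeroing weights after the fact, the model never learns the skewed association in the first place.</p>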
<p style="text-align: justify; ">Another set of solutions could be drawn from anti-discrimination law, which prohibits discrimination at the workplace. In India, anti-discrimination laws protect against wage inequality, as well as discrimination at the stage of recruitment for protected groups such as transgender persons. While it can be argued that biased rating systems lead to wage inequality, there are several barriers to applying anti-discrimination law to workers in the platform economy. One, most jurisdictions, including India, protect only employees from discrimination, not self-employed contractors. Another challenge is the lack of data to prove that rating or recruitment algorithms are discriminatory, without which legal recourse is impossible. <a href="https://datasociety.net/pubs/ia/Discriminating_Tastes_Customer_Ratings_as_Vehicles_for_Bias.pdf">Rosenblat et al.</a> (2016) discuss these challenges in the context of the US, suggesting solutions such as addressing employment misclassification or modifying pleading requirements to bring platform workers under the protection of the law.</p>
<p style="text-align: justify; ">Feminist principles point to structural shifts that are required to ensure robust protections for workers. Analysing algorithmic systems from a feminist lens indicates several points in the design at which interventions must be focused to ensure impact. The teams designing algorithms need to be made more diverse, along with integrating an explicit focus on assessing the impact of systems at the stage of design. Companies need to be more transparent with their data, and encourage independent audits of their systems. Corporate and government actors must be held to account to fix broken AI systems.</p>
<hr />
<p style="text-align: justify; "><span>Ambika Tandon is a Senior Researcher at the <a href="https://cis-india.org/">Centre for Internet & Society (CIS)</a> in India, where she studies the intersections of gender and technology. She focuses on women’s work in the digital economy, and the impact of emerging technologies on social inequality. She is also interested in developing feminist methods for technology research. Ambika tweets at <a href="https://twitter.com/AmbikaTandon">@AmbikaTandon</a>.</span></p>
<p style="text-align: justify; ">The blog was originally <a class="external-link" href="https://ethicalsource.dev/blog/ai-in-the-future-of-work/">published by the Organization for Ethical Source</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work'>https://cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work</a>
</p>
Blog Entry by Ambika Tandon (CIS RAW, Researchers at Work; Artificial Intelligence, Future of Work), 2021-12-07
AI in Healthcare
https://cis-india.org/internet-governance/news/ai-in-healthcare
<b>The Center for Information Technology and Public Policy (CITAPP) and the International Institute of Information Technology Bangalore (IIITB) invited Radhika Radhakrishnan for a talk at IIIT-Bangalore on September 13, 2019. </b>
<p style="text-align: justify; ">In her talk, she critically questioned, from a feminist standpoint, the dominant narrative of “AI for social good” that has been widely adopted by various stakeholders in India (including the private sector, non-profits, and the Indian State). Specific to healthcare in India, such a narrative has been employed towards solving development challenges (such as a shortage of medical practitioners in remote regions of the country) through the introduction of AI applications targeted towards the sick-poor. Through her research and fieldwork, she analysed the layers of expropriation and experimentation that come into play when AI technologies become a method of using 'diverse' bodies and medical records of the sick-poor as ‘data’ to train proprietary AI algorithms at a low cost in the absence of effective State regulatory mechanisms. She argued that structural challenges (such as the lack of incentives for medical practitioners to join public healthcare) get reframed into opportunities to substitute labour (people) with capital (technology) through the innovation of “spectacular technologies” such as AI. Throughout the talk, she also highlighted the methodologies she used to conduct this research.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/ai-in-healthcare'>https://cis-india.org/internet-governance/news/ai-in-healthcare</a>
</p>
News Item by Admin (Industry 4.0, Internet Governance, Artificial Intelligence), 2019-09-19
AI for Social Good Summit
https://cis-india.org/internet-governance/news/unescap-and-google-ai-december-13-bangkok-ai-for-social-good-summit
<b>Arindrajit Basu was a speaker at the event co-organized by Google AI and United Nations ESCAP on December 13, 2018 in Bangkok, Thailand.</b>
<p class="moz-quote-pre" style="text-align: justify; ">Arindrajit spoke on the panel "How can governments use AI in Public Service Delivery" along with Malavika Jayaram, Jake Lucci, Punit Shukla, Simon Schmooly and Gal Oren. He presented CIS research on AI in agriculture in Karnataka, which will soon be published as part of a compendium documenting case studies worldwide.</p>
<p class="moz-quote-pre" style="text-align: justify; "><a class="external-link" href="http://cis-india.org/internet-governance/files/ai-for-social-good-summit">Click to read more</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/unescap-and-google-ai-december-13-bangkok-ai-for-social-good-summit'>https://cis-india.org/internet-governance/news/unescap-and-google-ai-december-13-bangkok-ai-for-social-good-summit</a>
</p>
News Item by Admin (Internet Governance, Artificial Intelligence), 2018-12-25
AI for Good Workshop
https://cis-india.org/internet-governance/news/ai-for-good-workshop
<b>Pranav Manjesh Bidare attended a workshop on AI for Good, organised by Swissnex India and Wadhwani AI, in Bangalore on May 22, 2019.</b>
<p>The workshop was a forerunner to the <a class="external-link" href="https://aiforgood.itu.int/">AI for Good Global Summit</a>. More recommendations can be made at <a class="moz-txt-link-freetext" href="https://www.policykitchen.com/group/19/stream">https://www.policykitchen.com/group/19/stream</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/ai-for-good-workshop'>https://cis-india.org/internet-governance/news/ai-for-good-workshop</a>
</p>
News Item by Admin (Internet Governance, Artificial Intelligence), 2019-06-05
AI for Good
https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival
<b>CIS organised a workshop titled ‘AI for Good’ at the Unbox Festival in Bangalore from 15th to 17th February, 2019. The workshop was led by Shweta Mohandas and Saumyaa Naidu. In the hour-long workshop, the participants were asked to imagine an AI-based product to bring forward the idea of ‘AI for social good’.</b>
<p>The report was edited by Elonnai Hickok.</p>
<hr />
<p style="text-align: justify; ">The workshop was aimed at examining the current narratives around AI and imagining how these may transform with time. It raised questions about how we can build an AI for the future, and traced the implications relating to social impact, policy, gender, design, and privacy.</p>
<h3>Methodology</h3>
<p class="Normal1" style="text-align: justify; ">The rationale for conducting this workshop at a design festival was to ensure a diverse mix of participants. The participants came from varied educational and professional backgrounds and had different levels of understanding of technology. The workshop began with a discussion on the existing applications of artificial intelligence, and how people interact and engage with it on a daily basis. This was followed by an activity in which the participants were provided with a form and asked to conceptualise their own AI application that could be used for social good. They were asked to think about a problem they wanted the AI application to address, the ways in which it would solve that problem, and who would use the application. The form prompted participants to provide details of the AI application in terms of its form, colour, gender, visual design, and medium of interaction (voice/text). This was intended to nudge the participants into thinking about the characteristics of the application, and how these would contribute to its overall purpose. The form was structured and designed to enable participants to both describe and draw their ideas.</p>
<p class="Normal1" style="text-align: justify; ">The next section of the form gave them multiple pairs of principles, and they were asked to choose one principle from each pair. These were conflicting options such as ‘Openness’ or ‘Proprietary’, and ‘Free Speech’ or ‘Moderated Speech’. The objective of this section was to illustrate how a perceived ideal AI that satisfies all stakeholders can be difficult to achieve, and that AI developers may at times be faced with a decision between profitability and user rights.</p>
<p class="Normal1" style="text-align: justify; ">Participants were asked to keep their responses anonymous. These responses were then collected and discussed with the group. The activity led to the participants engaging in a discussion on the principles mentioned in the form. Questions were discussed around where the input data to train the AI would come from and what type of data the application would collect. The responses were used to derive implications for gender, privacy, design, and accessibility.</p>
<p class="Normal1" style="text-align: justify; "><img src="https://cis-india.org/home-images/ConceptualiseAI.jpg" alt="Conceptualise AI" class="image-inline" title="Conceptualise AI" /></p>
<h3 class="Normal1" style="text-align: justify; ">Responses</h3>
<p class="Normal1" style="text-align: justify; "><img src="https://cis-india.org/home-images/Responses.jpg" alt="" class="image-inline" title="" /></p>
<h3 class="Normal1" style="text-align: justify; ">Analysis</h3>
<p>While the responses were varied, they shared a few key similarities and yielded some common observations.</p>
<h3>Participants’ Familiarity with AI</h3>
<p style="text-align: justify; ">The participants’ understanding of AI was based on what they had read and heard from various sources. While discussing examples of AI, the participants were familiar not just with physical manifestations of AI such as robots, but also with AI software. However, when asked to define AI, the most common explanations were bots, software, and the use of algorithms to make decisions using large amounts of data. The participants were optimistic about the ways AI could be used for social good, although some of them expressed concern about the implications for privacy.</p>
<h3 style="text-align: justify; ">Perception of AI Among Participants</h3>
<p class="Normal1">With the workshop, our aim was to have the participants reflect on their perception of AI, shaped by their exposure to the narratives around AI put forward by companies and the government.</p>
<p class="Normal1" style="text-align: justify; ">The participants were given the brief to imagine an AI that could solve a problem or be used for social good. Most participants considered AI to be a positive tool for social impact; it was seen as a problem solver. The ideas conceptualised by the participants ranged from countering fake news and wildlife conservation to resource distribution and mental health. This brought into focus the range of areas that were seen as pertinent for an AI intervention. Most of the responses dealt with concerns that affect humans directly, the one aimed at wildlife conservation being the only exception.</p>
<p class="Normal1" style="text-align: justify; "><span>When asked who would use the AI application, it was notable that the responses considered a range of stakeholders, such as individuals, non-profits, governments and private companies, as the end user. However, the harms that might be caused by these stakeholders' use of AI did not come up in the discussion. For example, the use of AI for resource distribution did not take into consideration the fact that the government could distribute resources unequally based on existing biased datasets.</span> <a name="fr1"></a> <span>Several of the AI applications were conceptualised to work without any human intervention. For example, one proposal was an AI mental health counsellor: a chatbot that would learn more about human psychology with each interaction. It was assumed that such a service would be better than a human psychologist, who can be emotionally biased. Similarly, while discussing the idea of using AI to prevent the spread of fake news, the participant believed that an indication coming from an AI would have greater impact than one coming from a human, and that the AI could provide the correct information and prevent the spread of fake news. </span><span>By discussing these cases we were able to highlight that complete reliance on technology could have severe consequences.</span><a name="fr2"></a></p>
<h3 class="Normal1" style="text-align: justify; ">Form and Visual Design of the AI Concepts</h3>
<p style="text-align: justify; ">In most cases, the participants decided the form and visual design of their AI concepts keeping in mind their purpose. For instance, the therapy-providing AI mentioned earlier was envisioned as a textual platform, while a ‘Clippy-type’ add-on tool was proposed for detecting fake news. Most participants imagined the AI application in software form, while the legal aid AI application was conceptualised in human form. This revealed that the participants perceived AI as both software and a physical device such as a robot.</p>
<h3 style="text-align: justify; ">Accessibility of the Interfaces</h3>
<p style="text-align: justify; ">The purpose of including the type of interface (voice or text) while conceptualising the AI application was to push the participants towards thinking about accessibility features. We aimed to have the participants think about the default use of the interface, both in terms of language and accessibility. The participants, though cognizant of the need to reach a large number of users, preferred to have only textual input into the interface, not anticipating the accessibility concerns.</p>
<p style="text-align: justify; ">The choices between access vs cost, and accessibility vs scalability, were also questioned by the participants during the workshop. They enquired about the meaning of the terms and discussed the difficulty of designing an all-inclusive interface. Some of the responses consisted only of text inputs, especially for sensitive issues involving interactions, such as therapy or helplines. This exercise made the participants think about the end user as well as the ‘AI for all’ narrative. These questions were included to make the participants think about how the default ability, language, and technological capability of the user are taken for granted, and how simple features could help more people interact with the application. This discussion led to the inference that accessibility needs to be considered by design during the creation of the application, and not as an afterthought.<a name="fr3"></a></p>
<h3 style="text-align: justify; ">Biases Based on Gender</h3>
<p style="text-align: justify; ">We intended for the participants to think about the inherent biases that creep into creating an AI concept. These biases were evident in choices ranging from identifiably male names, to a male voice when the application needed to be assertive, to a female voice and name when it was dealing with school children. The remaining participants either did not mention gender or said that the AI could be gender neutral or changeable.</p>
<p style="text-align: justify; ">These observations are also revealing of the existing narrative around AI. Popular AI interfaces have been noted to exemplify existing gender stereotypes. For example, virtual assistants such as Siri, Alexa, and Cortana were given female-identifiable names and default female voices, while more advanced AI systems such as Watson and Holmes were given male-identifiable names and default male voices.<a name="fr4"></a> <span>Although these concerns have been pointed out by several researchers, there needs to be a visible shift away from existing gender biases.</span></p>
<h3 style="text-align: justify; ">Concerns around Privacy</h3>
<p style="text-align: justify; ">Though the participants were aware of the privacy implications of data driven technologies, they were unsure of how their own AI concept could deal with questions of privacy. The participants voiced concerns about how they would procure the data to train the AI but were uncertain about their data processing practices. This included how they would store the data, anonymise the data, or prevent third parties from accessing it. For example, during the activity, it was pointed out to the participants that there would be sensitive data collected in applications such as therapy provision, legal aid for victims of abuse, and assistance for people with social anxiety. In these cases, the participants stated that they would ensure that the data was shared responsibly, but did not consider the potential uses or misuses of this shared data.</p>
<h3 style="text-align: justify; ">Choices between Principles</h3>
<p class="Normal1" style="text-align: justify; ">This part of the exercise was intended to familiarise the participants with certain ethical and policy questions about AI, as well as to look at the possible choices that AI developers have to make. Along with discussing the broader questions around the form and interface of AI, we wanted the participants to also look at making decisions about the way the AI would function. The intent behind this component was to encourage the participants to question the practices of AI companies, as well as to understand the implications of choices made while creating an AI. As the language in this section was based on law and policy, we spent some time describing the terms to the participants. Although the options we presented were neither exhaustive nor absolute extremes, we included this section to demonstrate the complexity of creating an AI that is beneficial for all. We intended for the participants to understand that an AI that is free for people, accessible, privacy-respecting, and open source, though desirable, may be in competition with other interests such as profitability and scalability.</p>
<p class="Normal1" style="text-align: justify; ">The participants were urged to think about how decisions regarding who can use the service, and how much transparency and privacy the company will provide, are also part of building an AI. Taking an example from the responses, we talked about how closed, proprietary software in the case of AI applications such as legal aid for victims of abuse would deter the creation of similar applications. However, after the terms were explained, the participants mostly chose openness over proprietary software, and access over paid services.</p>
<h3 class="Normal1" style="text-align: justify; ">Conclusion</h3>
<p class="Normal1" style="text-align: justify; ">The aim of this exercise was to understand the popular perception of AI. The participants had varied understandings of AI, but were familiar with the term. They also knew of the popular products that claim to use AI. Since the exercise was designed as an introduction to AI policy, we intentionally kept questions around data practices out of the concept form. Ultimately, with this exercise, we, along with the participants, were able to look at how popular media sells AI as an effective and cheaper solution to social issues. The exercise also allowed the participants to understand certain biases around gender, language, and ability. It also shed light on how questions of access and user rights should be raised before the creation of a technological solution. New technologies such as AI are being promoted as problem solvers by companies, the media and governments. However, there is a need to also think about how these technologies can be exclusionary, misused, or amplify existing socio-economic inequities.</p>
<hr />
<p class="Normal1" style="text-align: justify; "><span>[1]. </span><a class="external-link" href="https://www.bizjournals.com/sanfrancisco/news/2019/08/26/maximizing-the-potential-of-ai-starts-with-trust.html">https://www.bizjournals.com/sanfrancisco/news/2019/08/26/maximizing-the-potential-of-ai-starts-with-trust.html</a></p>
<p>[2]. <a class="external-link" href="https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/">https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/</a></p>
<p>[3]. <a class="external-link" href="https://www.vox.com/the-goods/2018/11/29/18118469/instagram-accessibility-automatic-alt-text-object-recognition">https://www.vox.com/the-goods/2018/11/29/18118469/instagram-accessibility-automatic-alt-text-object-recognition</a></p>
<p>[4]. <a class="external-link" href="https://www.theguardian.com/pwc-partner-zone/2019/mar/26/why-are-virtual-assistants-always-female-gender-bias-in-ai-must-be-remedied">https://www.theguardian.com/pwc-partner-zone/2019/mar/26/why-are-virtual-assistants-always-female-gender-bias-in-ai-must-be-remedied</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival'>https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival</a>
</p>
Blog Entry by Shweta Mohandas and Saumyaa Naidu (Internet Governance, Artificial Intelligence), 2019-10-13