<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">




    



<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  
  <description>
    
            These are the search results for the query, showing results 51 to 65.
        
  </description>
  
  
  
  
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/raw/unpacking-algorithmic-infrastructures"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/unbox-2019-festival"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/roundtable-on-consumer-experiences-with-new-technologies-in-apac-singapore"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/cis-seminar-series"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/telecom/news/policy-lab-on-artificial-intelligence-democracy"/>
        
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival">
    <title>AI for Good</title>
    <link>https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival</link>
    <description>
        &lt;b&gt;CIS organised a workshop titled ‘AI for Good’ at the Unbox Festival in Bangalore from 15th to 17th February, 2019. The workshop was led by Shweta Mohandas and Saumyaa Naidu. In the hour-long workshop, the participants were asked to imagine an AI-based product to bring forward the idea of ‘AI for social good’.&lt;/b&gt;
        &lt;p&gt;The report was edited by Elonnai Hickok.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The workshop was aimed at examining the current narratives around AI and imagining how these may transform with time. It raised questions about how we can build an AI for the future, and traced the implications relating to social impact, policy, gender, design, and privacy.&lt;/p&gt;
&lt;h3&gt;Methodology&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The rationale for conducting this workshop in a design festival was to ensure a diverse mix of participants. The participants in the workshop came from varied educational and professional backgrounds who had different levels of understanding of technology. The workshop began with a discussion on the existing applications of artificial intelligence, and how people interact and engage with it on a daily basis. This was followed by an activity where the participants were provided with a form and were asked to conceptualise their own AI application which could be used for social good. The participants were asked to think about a problem that they wanted the AI application to address and think of ways in which it would solve the problem. They were also asked to mention who will use the application. It prompted participants to provide details of the AI application in terms of the form, colour, gender, visual design, and medium of interaction (voice/ text). This was intended to nudge the participants into thinking about the characteristics of the application, and how it will lend to the overall purpose. The form was structured and designed to enable participants to both describe and draw their ideas. The next section of the form gave them multiple pairs of principles. They were asked to choose one principle from each pair. These were conflicting options such as ‘Openness’ or ‘Proprietary’, and ‘Free Speech’ or ‘Moderated Speech’. The objective of this section was to illustrate how a perceived ideal AI that satisfies all stakeholders can be difficult to achieve, and that the AI developers at times may be faced with a decision between profitability and user rights.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;Participants were asked to keep their responses anonymous. These responses were then collected and discussed with the group. The activity led to the participants engaging in a discussion on the principles mentioned in the form. Questions around where the input data to train the AI would come from, or what type of data the application will collect were discussed. The responses were used to derive implications on gender, privacy, design, and accessibility.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/ConceptualiseAI.jpg" alt="Conceptualise AI" class="image-inline" title="Conceptualise AI" /&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Responses&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/Responses.jpg" alt="" class="image-inline" title="" /&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Analysis&lt;/h3&gt;
&lt;p&gt;Although the responses were varied, a few key similarities and common observations emerged.&lt;/p&gt;
&lt;h3&gt;Participants’ Familiarity with AI&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The participants’ understanding of AI was based on what they read and heard from various sources. While discussing the examples of AI, the participants were familiar with not just the physical manifestation of AI such as robots, but also AI software. However when asked to define an AI the most common explanations were, bots, software, and the use of algorithms to make decisions using large amounts of data. The participants were optimistic of the way AI could be used for social good. However, some of them showed concern about the implications on privacy.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Perception of AI Among Participants&lt;/h3&gt;
&lt;p class="Normal1"&gt;With the workshop, our aim was to have the participants reflect on their perception of AI based on their exposure to the narratives around AI by companies and the government.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The participants were given the brief to imagine an AI that could solve a problem or be used for social good. Most participants considered AI to be a positive tool for social impact. It was seen as a problem solver. The ideas conceptualised by the participants varied from countering fake news, wildlife conservation, resource distribution, and mental health. This brought to focus the range of areas that were seen as pertinent for an AI intervention. Most of the responses dealt with concerns that affect humans directly, the one aimed at wildlife conservation being the only exception.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;span&gt;On being asked, who will use the AI application, it was interesting to note that all the responses considered different stakeholders such as individuals, non profits, governments and private companies to be the end user. However, it was interesting that through the discussion the harms that might be caused by the use of AI by these stakeholders were not brought up. For example, the use of AI for resource distribution did not take into consideration the fact that the government could provide unequal distribution based on the existing biased datasets.&lt;/span&gt; &lt;a name="fr1"&gt;&lt;/a&gt; &lt;span&gt;Several of the AI applications were conceptualised to work without any human intervention. For example, one of the ideas proposed was to use AI as a mental health counsellor which was conceptualised as a chatbot that would learn more about human psychology with each interaction. It was assumed that such a service would be better than a human psychologist who can be emotionally biased. Similarly, while discussing the idea behind the use of AI for preventing the spread of fake news, the participant believed that the indication coming from an AI would have greater impact than one coming from a human. They believed that the AI could provide the correct information and prevent the spread of fake news. &lt;/span&gt;&lt;span&gt;By discussing these cases we were able to highlight that the complete reliance on technology could have severe consequences.&lt;/span&gt;&lt;a name="fr2"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Form and Visual Design of the AI Concepts&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;In most cases, the participants decided the form and visual design of their AI concepts keeping in mind its purpose. For instance, the therapy providing AI mentioned earlier, was envisioned as a textual platform, while a ‘clippy type’ add on AI tool was thought of for detecting fake news. Most participants imagined the AI application to have a software form, while the legal aid AI application was conceptualised to have a human form. This revealed that the participants perceived AI to be both a software and a physical device such as a robot.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Accessibility of the Interfaces&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The purpose of including the type of interface (voice or text) while conceptualising the AI application was to push the participants towards thinking about accessibility features. We aimed to have the participants think about the default use of the interface, both in terms of language and accessibility. The participants though cognizant of the need to have a large number of users, preferred to have only textual input into the interface, not anticipating the accessibility concerns.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The choices between access vs cost, and accessibility vs scalability were also questioned by the participants during the workshop. They enquired about the meaning of the terms as well as discussed the difficulty in having an all inclusive interface. Some of the responses consisted only of text inputs, especially for sensitive issues involving interactions, such as for therapy or helplines. This exercise made the participants think about the end user as well as the ‘AI for all’ narrative. We decided to add these questions that made the participants think about how the default ability, language, and technological capability of the user is taken for granted, and how simple features could help more people interact with the application. This discussion led to the inference that there is a need to think about accessibility by design during the creation of the application and not as an afterthought.&lt;a name="fr3"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Biases Based on Gender&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;We intended for the participants to think about the inherent biases that creep into creating an AI concept. These biases were evident from deciding identifiably male names, to deciding a male voice when the application needed to be assertive, or a female voice and name for when it was dealing with school children. Most of the other participants either did not mention the gender or they said that the AI could be gender neutral or changeable.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;These observations are also revealing of the existing narrative around AI. The popular AI interfaces have been noted to exemplify existing gender stereotypes. For example, the virtual assistants were given female identifiable names and default female voices such as Siri, Alexa, and Cortana. The more advanced AI were given male identifiable names and default male voices such as Watson, Holmes etc.&lt;a name="fr4"&gt;&lt;/a&gt; &lt;span&gt;Although these concerns have been pointed out by several researchers, there needs to be a visible shift towards moving away from existing gender biases.&lt;/span&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Concerns around Privacy&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Though the participants were aware of the privacy implications of data driven technologies, they were unsure of how their own AI concept could deal with questions of privacy. The participants voiced concerns about how they would procure the data to train the AI but were uncertain about their data processing practices. This included how they would store the data, anonymise the data, or prevent third parties from accessing it. For example, during the activity, it was pointed out to the participants that there would be sensitive data collected in applications such as therapy provision, legal aid for victims of abuse, and assistance for people with social anxiety. In these cases, the participants stated that they would ensure that the data was shared responsibly, but did not consider the potential uses or misuses of this shared data.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Choices between Principles&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;This part of the exercise was intended to familiarise the participants with certain ethical and policy questions about AI, as well as to look at the possible choices that AI developers have to make. Along with discussing the broader questions around the form and interface of AI, we wanted the participants to also look at making decisions about the way the AI would function. The intent behind this component of the exercise was to encourage the participants to question the practices of AI companies, as well as understand the implications of choices while creating an AI. As the language in this section was based on law and policy, we spent some time describing the terms to the participants. Even as some of the options presented by us were not exhaustive or absolute extremes, we placed this section to demonstrate the complexity in creating an AI that is beneficial for all. We intended for the participants to understand that an AI that is profitable to the company, free for people, accessible, privacy respecting, and open source, though desirable may be in competition with other interests such as profitability and scalability.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The participants were urged to think about how decisions regarding who can use the service, how much transparency and privacy the company will provide, are also part of building an AI. Taking an example from the responses, we talked about how having a closed proprietary software in case of AI applications such as providing legal aid to victims of abuse would deter the creation of similar applications. However, after the terms were explained, the participants mostly chose openness over proprietary software, and access over paid services.&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Conclusion&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The aim of this exercise was to understand the popular perception of AI. The participants had varied understanding of AI, but were familiar with the term. They also knew of the popular products that claim to use AI. Since the exercise was designed for people as an introduction to AI policy, we intended to keep questions around data practices out of the concept form. Eventually, with this exercise, we, along with the participants, were able to look at how popular media sells AI as an effective and cheaper solution to social issues. The exercise also allowed the participants to understand certain biases with gender, language, and ability. It also shed light on how questions of access and user rights should be placed before the creation of a technological solution. New technologies such as AI are being featured as problem solvers by companies, the media and governments. However, there is a need to also think about how these technologies can be exclusionary, misused, or how they amplify existing socio economic inequities.&lt;/p&gt;
&lt;hr /&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;span&gt;[1]. &lt;/span&gt;&lt;a class="external-link" href="https://www.bizjournals.com/sanfrancisco/news/2019/08/26/maximizing-the-potential-of-ai-starts-with-trust.html"&gt;https://www.bizjournals.com/sanfrancisco/news/2019/08/26/maximizing-the-potential-of-ai-starts-with-trust.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[2]. &lt;a class="external-link" href="https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/"&gt;https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[3]. &lt;a class="external-link" href="https://www.vox.com/the-goods/2018/11/29/18118469/instagram-accessibility-automatic-alt-text-object-recognition"&gt;https://www.vox.com/the-goods/2018/11/29/18118469/instagram-accessibility-automatic-alt-text-object-recognition&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[4]. &lt;a class="external-link" href="https://www.theguardian.com/pwc-partner-zone/2019/mar/26/why-are-virtual-assistants-always-female-gender-bias-in-ai-must-be-remedied"&gt;https://www.theguardian.com/pwc-partner-zone/2019/mar/26/why-are-virtual-assistants-always-female-gender-bias-in-ai-must-be-remedied&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival'&gt;https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas and Saumyaa Naidu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-13T05:32:28Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future">
    <title>Farming the Future: Deployment of Artificial Intelligence in the agricultural sector in India</title>
    <link>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future</link>
    <description>
        &lt;b&gt;This case study was published as a chapter in the joint UNESCAP-Google publication titled Artificial Intelligence in Public Service Delivery. The chapter in its final form would not have been possible without the efforts and very useful interventions by our colleagues at Digital Asia Hub, Google, and UNESCAP.&lt;/b&gt;
        &lt;p&gt;&lt;img src="https://cis-india.org/home-images/Findings.jpg" alt="Findings" class="image-inline" title="Findings" /&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Although agriculture is a critical sector for India’s economic development, it continues to face many challenges including a lack of &lt;span&gt;modernization of agricultural methods, fragmented landholdings, erratic rainfalls, overuse of groundwater and a lack of access to &lt;/span&gt;&lt;span&gt;information on weather, markets and pricing. As state governments create policies and frameworks to mitigate these challenges, the &lt;/span&gt;&lt;span&gt;role of technology has often come up as a potential driver of positive change.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Farmers in the southern Indian states of Karnataka and Andhra Pradesh are facing significant challenges. For hundreds of years,these farmers have relied on traditional agricultural methods to make sowing and harvesting decisions, but now volatile weather patterns and shifting monsoon seasons are making such ancient wisdom obsolete. Farmers are unable to predict weather patterns or crop yields accurately, making it difficult for them to make informed financial and operational decisions associated with planting and harvesting. Erratic weather patterns particularly affect those farmers who reside in remote areas, cut off from meaningful accessto infrastructure and information. In addition to a lack of vital weather information, farmers may lack information about market conditions and may then sell their crops to intermediaries at below-market prices.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Against this backdrop, the state governments and local partners in southern India teamed up with Microsoft to develop predictive AI services to help smallholder farmers to improve their crop yields and give them greater price control. Since 2016 three applications have been developed and applied for use in these communities, two of which are discussed in this case study: the AI-sowing app and the price forecasting model.&lt;/span&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="external-link" href="https://www.unescap.org/sites/default/files/publications/AI%20Report.pdf"&gt;Click to read&lt;/a&gt; the report here.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future'&gt;https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Elonnai Hickok, Arindrajit Basu, Siddharth Sonkar and Pranav M B</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-16T13:41:02Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report">
    <title>Panelist at launch of Google-UNESCAP AI Report</title>
    <link>https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report</link>
    <description>
        &lt;b&gt;Arindrajit Basu was a speaker at the panel launching the Google-UNESCAP AI Report at the GovInsider Forum held at the United Nations Convention Centre in Bangkok on October 16, 2019. &lt;/b&gt;
        &lt;p&gt;Click to &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/launch-the-ai-report"&gt;view the agenda&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report'&gt;https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-11-02T06:48:25Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/raw/unpacking-algorithmic-infrastructures">
    <title>Unpacking Algorithmic Infrastructures: Mapping the Data Supply Chain in the Healthcare Industry in India </title>
    <link>https://cis-india.org/raw/unpacking-algorithmic-infrastructures</link>
    <description>
        &lt;b&gt;The Unpacking Algorithmic Infrastructures project, supported by a grant from the Notre Dame-IBM Tech Ethics Lab, aims to study the AI data supply chain infrastructure in healthcare in India, and to critically analyse the auditing frameworks that are used to develop and deploy AI systems in healthcare. It will map the prevalence of AI auditing practices within the sector to arrive at an understanding of frameworks that may be developed to check for ethical considerations, such as algorithmic bias and harm, within healthcare systems, especially against marginalised and vulnerable populations.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;There has been an increased interest in health data  in India over the recent years, where health data policies encourage  sharing of data with different entities, at the same time, there has  been a growing interest in deployment of Al in healthcare from startups,  hospitals, as well as multinational technology companies.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Given the invisibility of  algorithmic infrastructures that underlie the digital economy and the  important decisions these technologies can make about patients' health,  it's important to look at how these systems are developed, how data  flows within them, how these systems are tested and verified and what  ethical considerations inform their deployment.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/ResearchersWork.png/@@images/00a848c7-b7f7-41b4-8bd9-45f2928fd44e.png" alt="Researchers at Work" class="image-inline" title="Researchers at Work" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;The &lt;/strong&gt;&lt;strong&gt;Unpacking Algorithmic Infrastructures&lt;/strong&gt; project,  supported by a grant from the Notre Dame-IBM Tech Ethics Lab, aims to  study the Al data supply chain infrastructure in healthcare in India,  and aims to critically analyse auditing frameworks that are utilised to  develop and deploy AI systems in healthcare. It will map the prevalence  of Al auditing practices within the sector to arrive at an understanding  of frameworks that may be developed to check for ethical considerations  - such as algorithmic bias and harm within healthcare systems,  especially against marginalised and vulnerable populations.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Research Questions&lt;/h3&gt;
&lt;ol&gt;
&lt;li style="text-align: justify; "&gt;To what extent organisations take      ethical principles into  account when developing AI , managing the training      and testing  dataset, and while deploying the AI in the healthcare sector.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;What best practices for auditing can be      put in place based on  our critical understanding of AI data supply chains      and auditing  frameworks being employed in the healthcare sector.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;What is a possible auditing framework      that is best suited to organisations in the majority world.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Research Design and Methods&lt;/h3&gt;
&lt;p&gt;For this study, we will use a comprehensive mixed-methods approach. We will survey professionals working towards designing, developing, and deploying AI systems for healthcare in India, across technology and healthcare organisations. We will also undertake in-depth interviews with experts who are part of key stakeholder groups.&lt;/p&gt;
&lt;p&gt;We hereby invite researchers, technologists, healthcare professionals, and others working at the intersection of artificial intelligence and healthcare to speak to us and help inform the study. You may contact Shweta Mohandas at &lt;a href="mailto:shweta@cis-india.org"&gt;shweta@cis-india.org&lt;/a&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Research Team: Amrita Sengupta, Chetna V. M., Pallavi Bedi, Puthiya Purayil Sneha, Shweta Mohandas and Yatharth.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/raw/unpacking-algorithmic-infrastructures'&gt;https://cis-india.org/raw/unpacking-algorithmic-infrastructures&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amrita Sengupta, Chetna V. M., Pallavi Bedi, Puthiya Purayil Sneha, Shweta Mohandas and Yatharth</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Health Tech</dc:subject>
    
    
        <dc:subject>RAW Blog</dc:subject>
    
    
        <dc:subject>Research</dc:subject>
    
    
        <dc:subject>Data Protection</dc:subject>
    
    
        <dc:subject>Healthcare</dc:subject>
    
    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2024-01-05T02:38:22Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/unbox-2019-festival">
    <title>Unbox Festival 2019: CIS organizes two Workshops</title>
    <link>https://cis-india.org/internet-governance/blog/unbox-2019-festival</link>
    <description>
        &lt;b&gt;Centre for Internet &amp; Society organized two workshops at the Unbox Festival 2019, in Bangalore, on 15 and 17 February 2019. &lt;/b&gt;
        &lt;h3 style="text-align: justify; "&gt;'What is your Feminist Infrastructure Wishlist?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The first workshop 'What is your Feminist Infrastructure Wishlist?' was on Feminist Infrastructure Wishlists that was conducted by P.P. Sneha and Saumyaa Naidu on  15 February 2019. The objective of the workshop was to explore what it means to have infrastructure that is feminist. How do we build spaces, networks, and systems that are equal, inclusive, diverse, and accessible? We will also reflect on questions of network configurations, expertise, labour and visibility. For reading material &lt;a class="external-link" href="https://feministinternet.org/"&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;AI for Good&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;With a backdrop of AI for social good, we explore existing applications of artificial intelligence, how we interact and engage with this technology on a daily basis. A discussion led by Saumyaa Naidu and Shweta Mohandas invited participants to examine current narratives around AI and imagine how these may transform with time. Questions around how we can build an AI for the future will become the starting point to trace its implications relating to social impact, policy, gender, design, and privacy. For reading materials see &lt;a class="external-link" href="https://ainowinstitute.org/AI_Now_2018_Report.pdf"&gt;AI Now Report 2018&lt;/a&gt;, &lt;a class="external-link" href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing"&gt;Machine Bias&lt;/a&gt;, and &lt;a class="external-link" href="https://www.theatlantic.com/technology/archive/2016/03/why-do-so-many-digital-assistants-have-feminine-names/475884/"&gt;Why Do So Many Digital Assistants Have Feminine Names?&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;For info on Unbox Festival, &lt;a class="external-link" href="http://unboxfestival.com/"&gt;click here&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/unbox-2019-festival'&gt;https://cis-india.org/internet-governance/blog/unbox-2019-festival&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>saumyaa</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Gender</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-02-26T01:53:39Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation">
    <title>Artificial Intelligence for India's Transformation</title>
    <link>https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation</link>
    <description>
        &lt;b&gt;ASSOCHAM's 3rd International Conference was organized at Hotel Imperial in New Delhi. Amber Sinha participated in a session on the use, impact, and ethics of AI.&lt;/b&gt;
        &lt;p&gt;Click to &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/ai-in-ethics-agenda/view"&gt;view the agenda&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation'&gt;https://cis-india.org/internet-governance/news/artificial-intelligence-for-indias-transformation&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-03-20T01:38:48Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence">
    <title>International Conference on Justice Education: Legal Implications of Artificial Intelligence</title>
    <link>https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence</link>
    <description>
        &lt;b&gt;Arindrajit Basu attended the International Conference on Justice Education, with the theme "Artificial Intelligence and its Legal Implications", at the Institute of Law, Nirma University. The event was organized by Nirma University in Ahmedabad on March 15-16, 2019. Arindrajit was a theme speaker for the panel on Legal Implications of Artificial Intelligence and judged the presentations in the same session.&lt;/b&gt;
        &lt;p&gt;Click to &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/icje-conference-schedule"&gt;read the agenda&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence'&gt;https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-03-20T15:52:29Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/roundtable-on-consumer-experiences-with-new-technologies-in-apac-singapore">
    <title>Roundtable on Consumer Experiences with New Technologies in APAC (Singapore)</title>
    <link>https://cis-india.org/internet-governance/news/roundtable-on-consumer-experiences-with-new-technologies-in-apac-singapore</link>
    <description>
        &lt;b&gt;Arindrajit Basu was invited to a Roundtable on Artificial Intelligence: Consumer Experiences with New Technologies (APAC region).&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The event &lt;span&gt;was hosted by Consumer International and delivered at Google, Singapore on March 26, 2019. CIS research and Arindrajit's inputs have been quoted in a report by the same name which will be released by Consumer International within the course of the next month.&lt;/span&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/roundtable-on-consumer-experiences-with-new-technologies-in-apac-singapore'&gt;https://cis-india.org/internet-governance/news/roundtable-on-consumer-experiences-with-new-technologies-in-apac-singapore&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-04-15T10:25:57Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi">
    <title>Artificial Intelligence in Governance: A Report of the Roundtable held in New Delhi</title>
    <link>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi</link>
    <description>
        &lt;b&gt;This report provides an overview of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance, conducted at the India Islamic Cultural Centre in New Delhi on March 16, 2018. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context. The event was attended by participants from academia, civil society, the legal sector, the finance sector, and the government.&lt;/b&gt;
        &lt;p&gt;&lt;span&gt;Event Report: &lt;/span&gt;&lt;a class="external-link" href="https://cis-india.org/internet-governance/files/ai-in-governance"&gt;Download&lt;/a&gt;&lt;span&gt; (PDF)&lt;/span&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;This report provides a summary of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance (hereinafter referred to as ‘the Roundtable’). The Roundtable took place at the India Islamic Cultural Centre in New Delhi on March 16, 2018 and included participation  from academia, civil society, law, finance, and government. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Roundtable began with a presentation by Amber Sinha (Centre for Internet and Society - CIS) providing an overview of the CIS’s research objectives and findings thus far. During this presentation, he defined both AI and the scope of CIS’s research, outlining the areas of law enforcement, defense, education, judicial decision making, and the discharging of administrative functions as the main areas of concerns for the study. The presentation then outlined the key AI deployments and implementations that have been identified by the research in each of these areas. Lastly, the presentation raised some of the ethical and legal concerns related to this phenomenon.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The presentation was followed by the Roundtable discussion that saw various topics in regards to the usages, challenges, ethical considerations and implications of AI in the sector being discussed. This report has identified a number of key themes of importance evident throughout these discussions.These themes include: (1) the meaning and scope of AI, (2) AI’s sectoral applications, (3) human involvement with automated decision making, (4) social and power relations surrounding AI, (5) regulatory approaches to AI and, (6) challenges to adopting AI. These themes in relation to the Roundtable are explored further below.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Meaning and Scope of AI&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span id="docs-internal-guid-7edcf822-2698-f1fd-35d3-0bcc913c986a"&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;One of the first tasks recommended by the group of participants was to define the meaning and scope of AI and the way those terms are used and adopted today. These concerns included the need to establish a distinction between the use of algorithms, machine learning, automation and artificial intelligence. Several participants believed that establishing consensus around these terms was essential before proceeding towards a stage of developing regulatory frameworks around them.&lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;The general fact agreed to was that AI as we understand it does not necessarily extend to complete independence in terms of automated decision making but it refers instead to the varying levels of machine learning (ML), and the automation of certain processes that has already been achieved. Several concerns that emerged during the course of the discussion centred around the question of autonomy and transparency in the process of ML and algorithmic processing. Stakeholders recommended that over and above the debates of humans in the loop [1] on the loop [2] and out of the loop, [3] there were several other gaps with respect to AI and its usage in the industry today which also need to be considered before building a roadmap for future usage. Key issues like information asymmetries, communication lags, a lack of transparency, the increased mystification of the coding process and the centralization of power all needed to be examined and analysed under the rubric of developing regulatory frameworks.&lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: The group brought out the need for standardization of terminology as well as the establishment of globally replicable standards surrounding the usage, control and proliferation of AI. The discussion also brought up the problems with universal applicability of norms. One of the participants brought up an issue regarding the lack of normative frameworks around the usage and proliferation of AI. Another participant responded to the concern by alluding to the Asilomar AI principles.[4] The Asilomar AI principles are a set of 23 principles aimed at directing and shaping AI research in the future. The discussion brought out further issues regarding the enforceability as well universal applicability of the principles and their global relevance as well. Participants recommended the development of a shorter, more universally applicable regulatory framework that could address various contextual limitations as well.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;AI Sectoral Applications&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span&gt;Participants mentioned a number of both current and potential applications of AI technologies, referencing the defence sector, the financial sector, and the agriculture sector. There are several developments taking place on the Indian military front with the Committee on AI and National Security being established by the Ministry of Defence. Through the course of the discussion it was also stated that the Indian Armed Forces were very interested in the possibilities of using AI for their own strategic and tactical purposes. From a technological standpoint, however, there has been limited progress in India in researching and developing AI. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;While India does deploy some Unmanned Aerial Vehicles (UAVs), they are mostly bought from Israel, and often are not autonomous. It was also pointed out that contrary to reportage in the media, the defence establishment in India is extremely cautious about the adoption of autonomous weapons systems, and that the autonomous technology being rolled out by the CAIR is not yet considered trustworthy enough for deployment.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Discussions further revealed that the few technologies that have a relative degree of autonomy are primarily loitering ammunitions and are used to target radar insulations for reconnaissance purposes. One participant mentioned that while most militaries are interested in deploying AI, it is primarily from an Intelligence, Surveillance and Reconnaissance (ISR) perspective. The only exception to this generalization is China where the military ethos and command structure would work better with increased reliance on independent AI systems. One major AI system rolled out by the US is Project Maven which is primarily an ISR system. The aim of using these systems is to improve decision making and enhance data analysis particularly since battlefields generate a lot of data that isn’t used anywhere.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Another sector discussed was the securities market where algorithms were used from an analytical and data collection perspective. A participant referred to the fact that machine learning was being used for processes like credit and trade scoring -- all with humans on the loop. The participant further suggested that while trade scoring was increasingly automated, the overall predictive nature of such technologies remained within a self limiting capacity wherein statistical models, collected data and pattern analysis were used to predict future trends. The participant questioned whether these algorithms could be considered as AI in the truest sense of the term since they primarily performed statistical functions and data analysis.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One participant also recommended the application of AI to sectors like agriculture with the intention of gradually acclimatizing users to the technology itself. Respondents also stated that while AI technologies were being used in the agricultural space it was primarily from the standpoint of data collection and analysis as opposed to predictive methods. It was mentioned that a challenge to the broad adoption of AI in this sector is the core problem of adopting AI as a methodology – namely information asymmetries, excessive data collection, limited control/centralization and the obfuscatory nature of code – would not be addressed/modified. Lastly, participants also suggested that within the Indian framework not much was being done aside from addressing farmers’ queries and analysing the data from those concerns.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: The discussion drew attention to the various sectors where AI was currently being used -- such as the military space, agricultural development and the securities market -- as well as potential spaces of application -- such as healthcare and manual scavenging. The key challenges that emerged were information asymmetries with respect to the usage of these technologies as well as limited capacity in terms of technological advancement.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Human Involvement with Automated Decision Making&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Large parts of discussions throughout the Roundtable event were preoccupied with automated decision making and specifically, the involvement of humans (human on and in the loop) or lack thereof (human out of the loop) in this process. These discussions often took place with considerations of AI for prescriptive and descriptive uses.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Participants expressed that human involvement was not needed when AI was being used for descriptive uses, such as determining relationships between various variables in large data sets. Many agreed to the superior ability of ML and similar AI technologies in describing large and unorganized datasets. It was the prescriptive uses of AI where participants saw the need for human involvement, with many questioning the technology making more important decisions by itself.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The need for human involvement in automated decision making was further justified by references to various instances of algorithmic bias in the American context. One participant, for example, brought up the use of algorithmic decision making by a school board in the United States for human resource practices (hirings, firing, etc.) based on the standardized test scores of students. In this instance, such practices resulted in the termination of teachers primarily from low income neighbourhoods.[5] The main challenge participants identified in regards to human on the loop automated decision making is the issue of capacity, as significant training would have to be achieved for sectors to have employees actively involved in the automated decision making workflow.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;An example in the context of the healthcare field was brought up by one participant arguing for human in the loop in regards to prescriptive scenarios. The participant suggested that AI technology, when given x-ray or MRI data for example, should only be limited to pointing out the correlations of diseases with patients’ scans/x-rays. Analysis of such correlations should be reserved for the medical expertise of doctors who would then determine if any instances of causality can be identified from this data and if it’s appropriate for diagnosing patients.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;It was emphasized that, despite a preference for human on/in the loop in regards to automated decision making, there is a need to be cognisant of techno-solutionism due to the human tendency of over reliance on technology when making decisions. A need for command and control structures and protocols was emphasized for various governance sectors in order to avoid potentially disastrous results through a checks and balances system. It was noted that the defense sector has already developed such protocols, having established a chain of command due to its long history of algorithmic decision making (e.g. the Aegis Combat System being used by the US Navy in the 1980s).&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One key reason why militaries prefer human in and on the loop systems as opposed to out of the loop systems is because of the protocol associated with human action on the battlefield. International Humanitarian Law has clear indicators of what constitutes a war crime and who is to be held responsible in the scenario but developing such a framework with AI systems would be challenging as it would be difficult to determine which party ought to be held accountable in the case of a transgression or a mistake.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: It was reiterated by many participants that neither AI technology or India’s regulatory framework is at a point where AI can be trusted to make significant decisions alone -- especially when such decisions are evaluating humans directly. It was recommended that human out of the loop decision making should be reserved for descriptive practices whereas human on and in the loop decision making should be used for prescriptive practices. Lastly, it was also suggested that appropriate protocols be put in place to direct those involved in the automated decision making workflow. Particularly when the process involves judgements and complex decision making in sectors such as jurisprudence and the military.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;The Social and Power Relations Surrounding AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some participants emphasized the need to contextualize discussions of AI and governance within larger themes of poverty, global capital and power/social relations. Their concerns were that the use of AI technologies would only create and reinforce existing power structures and should instead be utilized towards ameliorating such issues. Manual scavenging, for example, was identified as an area where AI could be used to good effect if coupled with larger socio-political policy changes. There are several hierarchies that could potentially be reinforced through this process and all these failings needed to be examined thoroughly before such a system was adopted and incorporated within the real world.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Furthermore the discussion also revealed that the objectivity attributed to AI and ML tends to gloss over the fact that there are nonetheless implicit biases that exist in the minds of the creators that might work themselves into the code. Fears regarding technology recreating a more exclusionary system were not entirely unfounded as participants pointed out the fact that the knowledge base of the user would determine whether technology was used as a tool of centralization or democratization.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One participant also questioned the concept of governance itself, contrasting the Indian government’s usage of the term in the 1950s (as it appears in the Directive Principle) with that of the World Bank in the 1990s.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some participants emphasized the need to contextualize discussions of AI and governance within larger themes of poverty, global capital and power/social relations. Their concerns were that the use of AI technologies would only create and reinforce existing power structures and should instead be utilized towards ameliorating such issues. Manual scavenging, for example, was identified as an area where AI could be used to good effect if coupled with larger socio-political policy changes. There are several hierarchies that could potentially be reinforced through this process and all these failings needed to be examined thoroughly before such a system was adopted and incorporated within the real world.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Furthermore the discussion also revealed that the objectivity attributed to AI and ML tends to gloss over the fact that there are nonetheless implicit biases that exist in the minds of the creators that might work themselves into the code. Fears regarding technology recreating a more exclusionary system were not entirely unfounded as participants pointed out the fact that the knowledge base of the user would determine whether technology was used as a tool of centralization or democratization. &lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One participant also questioned the concept of governance itself, contrasting the Indian government’s usage of the term in the 1950s (as it appears in the Directive Principle) with that of the World Bank in the 1990s. &lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: Discussions of the implementation and deployment of AI within the governance landscape should attempt to take into consideration larger power relations and concepts of equity.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Regulatory Approaches to AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Many recognized the need for AI-specific regulations across Indian sectors, including governance. These regulations, participants stated, should draw from notions of accountability, algorithmic transparency and efficiency. Furthermore, it was also stated that such regulations should consider the variations across the different legs of the governance sector, especially in regards to defence. One participant, pointing to the larger trends towards automation, recommended the establishment of certain fundamental guidelines aimed at directing the applicability of AI in general. The participant drew attention to the need for a robust evaluation system for various sectors (the criminal justice system, the securities market, etc.) as a way of providing checks on algorithmic biases. Another emphasized for the need of regulations for better quality data as to ensure machine readability and processiblity for various AI systems.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Another key point that emerged was the importance of examining how specific algorithms performed processes like identification or detection. A participant recommended the need to examine the ways in which machines identify humans and what categories/biases could infiltrate machine-judgement. They reiterated that if a new element was introduced in the system, the pre-existing variables would be impacted as well. The participant further recommended that it would be useful to look at these systems in terms of the couplings that get created in order to determine what kinds of relations are fostered within that system.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The roundtable saw some debate regarding the most appropriate approach to developing such regulations. Some participants argued for a harms-based approach, particularly in regards to determining if regulations are needed all together for specific sectors (as opposed to guidelines, best practices, etc.). The need to be cognisant of both individual and structural harms was emphasized, mindful of the possibility of algorithmic biases affecting traditionally marginalized groups.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Others only saw value in a harms based approach insomuch that it could help outline the appropriate penalties in an event of regulations being violated, arguing instead for a rights-based approach as it enabled greater room for technological changes. An approach that kept in mind emerging AI technologies was reiterated by a number of participants as being crucial to any regulatory framework. The need for a regulatory space that allowed for technological experimentation without the fear of constitutional violation was also communicated.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Takeaway Point: The need for a AI-specific regulatory framework cognisant of differentiations across sectors in India was emphasized. There is some debate about the most appropriate approach for such a framework, a harms-based approach being identified by many as providing the best perspective on regulatory need and penalties. Some identified the rights-based approach as providing the most flexibility for an rapidly evolving technological landscape.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Challenges to Adopting AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Out of all the concerns regarding the adoption of algorithms, ML and AI, the two key points of resistance that emerged, centred around issues of accountability and transparency. Participants suggested that within an AI system, predictability would be a key concern, and in the absence of predictable outcomes, establishing redressal mechanisms would pose key challenges as well.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p id="_mcePaste"&gt;A discussion was also initiated regarding the problems involved in attributing responsibility within the AI chain as well as the need to demystify the process of using AI in daily life. While reiterating the current landscape, participants spoke about how the usage of AI is currently limited to the automation of certain tasks and processes in certain sectors where algorithmic processing is primarily used as a tool of data collection and analysis as opposed to an independent decision making tool.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste"&gt;One of the suggestions and thought points that emerged during the discussion was whether a gradual adoption of AI on a sectoral basis might be more beneficial as it would provide breathing room in the middle to test the system and establish trust between the developers, providers, and consumers. This prompted a debate about the controllers and the consumers of AI and how the gap between the two would need to be negotiated. The debate also brought up larger concerns regarding the mystification of AI as a process itself and the complications of translating the code into communicable points of intervention.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste"&gt;Another major issue that emerged was the question of attribution of responsibility in the case of mistakes. In the legal process as it currently exists, human imperfections notwithstanding, it would be possible to attribute the blame for decisions taken to certain actants undertaking the action. Similarly in the defence sector, it would be possible to trace the chain of command and identify key points of failure, but in the case of AI based judgements, it would be difficult to place responsibility or blame. This observation led to a debate regarding accountability in the AI chain. It was inconclusive whether the error should be attributed to the developer, the distributor or the consumer.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste" style="text-align: justify; "&gt;A suggestion that was offered in order to counter the information asymmetry as well as reduce the mystification of computational method was to make the algorithm and its processes transparent. This sparked a debate, however, as participants stated that while such a state of transparency ought to be sought after and aspired towards, it would be accompanied by certain threats to the system. A key challenge that was pointed out was the fact that if the algorithm was made transparent, and its details were shared, there would be several ways to manipulate it, translate it and misuse it.&lt;/p&gt;
&lt;div id="_mcePaste"&gt;&lt;/div&gt;
&lt;p id="_mcePaste" style="text-align: justify; "&gt;Another question that emerged was the distribution of AI technologies and the centralization of the proliferation process particularly in terms of service provision. One participant suggested that given the limited nature of research being undertaken and the paucity of resources, a limited number of companies would end up holding the best tech, the best resources and the best people. They further suggested that these technologies might end up being rolled out as a service on a contractual basis. In which case it would be important to track how the service was being controlled and delivered. Models of transference would become central points of negotiation with alternations between procurement based, lease based, and ownership based models of service delivery. Participants suggested that this was going to be a key factor in determining how to approach these issues from a legal and policy standpoint.&lt;/p&gt;
&lt;div&gt;&lt;/div&gt;
&lt;p style="text-align: justify; "&gt;A discussion was also initiated regarding the problems involved in attributing responsibility within the AI chain as well as the need to demystify the process of using AI in daily life. While reiterating the current landscape, participants spoke about how the usage of AI is currently limited to the automation of certain tasks and processes in certain sectors where algorithmic processing is primarily used as a tool of data collection and analysis as opposed to an independent decision making tool.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One of the suggestions and thought points that emerged during the discussion was whether a gradual adoption of AI on a sectoral basis might be more beneficial as it would provide breathing room in the middle to test the system and establish trust between the developers, providers, and consumers. This prompted a debate about the controllers and the consumers of AI and how the gap between the two would need to be negotiated. The debate also brought up larger concerns regarding the mystification of AI as a process itself and the complications of translating the code into communicable points of intervention.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another major issue that emerged was the question of attribution of responsibility in the case of mistakes. In the legal process as it currently exists, human imperfections notwithstanding, it would be possible to attribute the blame for decisions taken to certain actants undertaking the action. Similarly in the defence sector, it would be possible to trace the chain of command and identify key points of failure, but in the case of AI based judgements, it would be difficult to place responsibility or blame. This observation led to a debate regarding accountability in the AI chain. It was inconclusive whether the error should be attributed to the developer, the distributor or the consumer.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A suggestion that was offered in order to counter the information asymmetry as well as reduce the mystification of computational method was to make the algorithm and its processes transparent. This sparked a debate, however, as participants stated that while such a state of transparency ought to be sought after and aspired towards, it would be accompanied by certain threats to the system. A key challenge that was pointed out was the fact that if the algorithm was made transparent, and its details were shared, there would be several ways to manipulate it, translate it and misuse it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another question that emerged was the distribution of AI technologies and the centralization of the proliferation process particularly in terms of service provision. One participant suggested that given the limited nature of research being undertaken and the paucity of resources, a limited number of companies would end up holding the best tech, the best resources and the best people. They further suggested that these technologies might end up being rolled out as a service on a contractual basis. In which case it would be important to track how the service was being controlled and delivered. Models of transference would become central points of negotiation with alternations between procurement based, lease based, and ownership based models of service delivery. Participants suggested that this was going to be a key factor in determining how to approach these issues from a legal and policy standpoint.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Takeaway Point: The two key points of resistance that emerged during the course of discussion were accountability and transparency. Participants pointed out the various challenges involved in attributing blame within the AI chain and they also spoke about the complexities of opening up AI code, thereby leaving it vulnerable to manipulation. Certain other challenges that were briefly touched upon were the information asymmetry, excessive data collection, centralization of power in the hands of the controllers and complicated service distribution models.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Conclusion&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The Roundtable provided some insight into larger debates regarding the deployment and applications of AI in the governance sector of India. The need for a regulatory framework as well as globally replicable standards surrounding AI was emphasized, particularly one mindful of the particular needs of differing fields of the governance sector (especially defence). Furthermore, a need for human on/in the loop practices with regards to automated decision making was highlighted for prescriptive instances, particularly when such decisions are responsible for directly evaluating humans. Contextualising AI within its sociopolitical parameters was another key recommendation as it would help filter out the biases that might work themselves into the code and affect the performance of the algorithm. Further, it is necessary to see the involvement and influence of the private sector in the deployment of AI for governance, it often translating into the delivery of technological services from private actors to public bodies towards discharge of public functions. This has clear implications for requirements of transparency  and procedural fairness even in private sector delivery of these services. Defining the meaning and scope of AI while working to demystify algorithms themselves would serve to strengthen regulatory frameworks as well as make AI more accessible for the user / consumer.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;[1]. Automated decision making model where final decisions are made by a human operator&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[2]. Automated decision making model where decisions can be made without human involvement but a human can override the system.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[3]. A completely autonomous decision making model requiring no human involvement&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[4]. https://futureoflife.org/ai-principles/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;[5]. The participant was drawing this example from Cathy O’Neil’s Weapons of Math Destruction, (Penguin,2016), at 4-13.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi'&gt;https://cis-india.org/internet-governance/blog/artificial-intelligence-in-governance-a-report-of-the-roundtable-held-in-new-delhi&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Saman Goudarzi and Natallia Khaniejo</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-05-03T15:49:40Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/cis-seminar-series">
    <title>CIS Seminar Series</title>
    <link>https://cis-india.org/internet-governance/blog/cis-seminar-series</link>
    <description>
        &lt;b&gt;The CIS seminar series will be a venue for researchers to share works-in-progress, exchange ideas, identify avenues for collaboration, and curate research. We also seek to mitigate the impact of Covid-19 on research exchange, and foster collaborations among researchers and academics from diverse geographies. Every quarter we will be hosting a remote seminar with presentations, discussions and debate on a thematic area. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The first seminar series was held on 7th and 8th October on the theme of &lt;a href="https://cis-india.org/internet-governance/blog/cis-seminar-series-information-disorder"&gt;‘Information Disorder: Mis-,  Dis- and Malinformation’&lt;/a&gt;,&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Theme for the Second Seminar (to be held online)&lt;/h3&gt;
&lt;h3&gt;Moderating Data, Moderating Lives:  Debating visions of (automated) content moderation in the contemporary&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Artificial Intelligence (AI) and Machine Learning (ML) based approaches have become increasingly popular as “solutions” to curb the extent of mis-, dis- mal-information, hate speech, online violence and harassment on social media. The pandemic and the ensuing work from home policy forced many platforms to shift to automated moderation which further highlighted the inefficacy of existing models (&lt;a href="https://www.zotero.org/google-docs/?u73Lwx"&gt;Gillespie, 2020)&lt;/a&gt; to deal with the surge in misinformation and harassment. These efforts, however, raise a range of interrelated concerns such as freedom and regulation of speech on the privately public sphere of social media platforms; algorithmic governance, censorship and surveillance; the relation between virality, hate, algorithmic design and profits; and social, political and cultural implications of ordering social relations through computational logics of AI/ML.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On one hand, large-scale content moderation approaches (that include automated AI/ML-based approaches) have been deemed “necessary” given the enormity of data generated &lt;a href="https://www.zotero.org/google-docs/?JHQ0rF"&gt;(Gillespie, 2020)&lt;/a&gt;, on the other hand, they have been regarded as “technological fixtures” offered by the Silicon Valley &lt;a href="https://www.zotero.org/google-docs/?YLFnLm"&gt;(Morozov, 2013)&lt;/a&gt;, or “tyrannical” as they erode existing democratic measures &lt;a href="https://www.zotero.org/google-docs/?Ia8JYp"&gt;(Harari, 2018)&lt;/a&gt;. Alternatively, decolonial, feminist and postcolonial approaches insist on designing AI/ML models that centre voices of those excluded to sustain and further civic spaces on social media (&lt;a href="https://www.zotero.org/google-docs/?1Sa8vf"&gt;Siapera, 2022)&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;From the global south perspective, issues around content moderation foreground the hierarchies inbuilt in the existing knowledge infrastructures. First, platforms remain unwilling to moderate content in under-resourced languages of the global south citing technological difficulties. Second, given the scale and reach of social media platforms and inefficient moderation models, the work is outsourced to workers in the global south who are meant to do the dirty work of scavenging content off these platforms for the global north. Such concerns allow us to interrogate the techno-solutionist approaches as well as their critiques situated in the global north. These realities demand that we articulate a different relationship with AI/ML while also being critical of AI/ML as an instrument of social empowerment for those at the “bottom of the pyramid” &lt;a href="https://www.zotero.org/google-docs/?bvx6mV"&gt;(Arora, 2016)&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The seminar invites scholars interested in articulating nuanced responses to content moderation that take into account the harms perpetrated by algorithmic governance of social relations and irresponsible intermediaries while being cognizant of the harmful effects of mis-, dis- mal-information, hate speech, online violence and harassment on social media.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We invite abstract submissions that respond to these complexities vis-a-vis content moderation models or propose provocations regarding automated moderation models and their in/efficacy in furthering egalitarian relationships on social media, especially in the global south.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Submissions can reflect on the following themes using legal, policy, social, cultural and political approaches. Also, the list is not exhaustive and abstracts addressing other ancillary concerns are most welcome:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Metaphors of (content) moderation: mediating utopia, dystopia, scepticism surrounding AI/ML approaches to moderation.&lt;/li&gt;
&lt;li&gt;From toxic to healthy, from purity to impurity: Interrogating gendered, racist, colonial tropes used to legitimize content moderation &lt;/li&gt;
&lt;li&gt;Negotiating the link between content moderation, censorship and surveillance in the global south&lt;/li&gt;
&lt;li&gt;Whose values decide what is and is not harmful? &lt;/li&gt;
&lt;li&gt;Challenges of building moderation models for under-resourced languages.&lt;/li&gt;
&lt;li&gt;Content moderation, algorithmic governance and social relations. &lt;/li&gt;
&lt;li&gt;Communicating algorithmic governance on social media to the not so “tech-savvy” among us.&lt;/li&gt;
&lt;li&gt;Speculative horizons of content moderation and the future of social relations on the internet. &lt;/li&gt;
&lt;li&gt;Scavenging abuse on social media: Immaterial/invisible labour for making for-profit platforms safer to use.&lt;/li&gt;
&lt;li&gt;Do different platforms moderate differently? Interrogating content moderation on diverse social media platforms, and multimedia content.&lt;/li&gt;
&lt;li&gt;What should and should not be automated? Understanding prevalence of irony, sarcasm, humour, explicit language as counterspeech.&lt;/li&gt;
&lt;li&gt;Maybe we should not automate: Alternative, bottom-up approaches to content moderation&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Seminar Format&lt;/h3&gt;
&lt;p&gt;We are happy to welcome abstracts for one of two tracks:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Working paper presentation&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A working paper presentation would ideally involve a working draft that is presented for about 15 minutes followed by feedback from workshop participants. Abstracts for this track should be 600-800 words in length with clear research questions, methodology, and questions for discussion at the seminar. Ideally, for this track, authors should be able to submit a draft paper two weeks before the conference for circulation to participants.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Coffee-shop conversations&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In contrast to the formal paper presentation format, the point of the coffee-shop conversations is to enable an informal space for presentation and discussion of ideas. Simply put, it is an opportunity for researchers to “think out loud” and get feedback on future research agendas. Provocations for this should be 100-150 words containing a short description of the idea you want to discuss.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We will try to accommodate as many abstracts as possible given time constraints. We welcome submissions from students and early career researchers, especially those from under-represented communities.&lt;/p&gt;
&lt;p&gt;All discussions will be private and conducted under the Chatham House Rule. Drafts will only be circulated among registered participants.&lt;/p&gt;
&lt;p&gt;Please send your abstracts to &lt;a href="mailto:workshops@cis-india.org"&gt;workshops@cis-india.org&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Timeline&lt;/h3&gt;
&lt;div id="_mcePaste"&gt;&lt;ol&gt;
&lt;li&gt;Abstract Submission Deadline: 18th April&lt;/li&gt;
&lt;li&gt;Results of Abstract review: 25th April&lt;/li&gt;
&lt;li&gt;Full submissions (of draft papers): 25th May&lt;/li&gt;
&lt;li&gt;Seminar date: Tentative 31st May&lt;/li&gt;
&lt;/ol&gt;&lt;/div&gt;
&lt;h3&gt;References&lt;/h3&gt;
&lt;p class="MsoNormal" style="text-align:justify; "&gt;&lt;span&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;span&gt;Arora, P. (2016). Bottom of the Data Pyramid: Big Data and the Global South. &lt;/span&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;i&gt;&lt;span&gt;International Journal of Communication&lt;/span&gt;&lt;/i&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;span&gt;, &lt;/span&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;i&gt;&lt;span&gt;10&lt;/span&gt;&lt;/i&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;span&gt;(0), 19.&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal" style="text-align:justify; "&gt;&lt;span&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;span&gt;Gillespie, T. (2020). Content moderation, AI, and the question of scale. &lt;/span&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;i&gt;&lt;span&gt;Big Data &amp;amp; Society&lt;/span&gt;&lt;/i&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;span&gt;, &lt;/span&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;i&gt;&lt;span&gt;7&lt;/span&gt;&lt;/i&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;span&gt;(2), 2053951720943234. https://doi.org/10.1177/2053951720943234&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal" style="text-align:justify; "&gt;&lt;span&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;span&gt;Harari, Y. N. (2018, August 30). &lt;/span&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;i&gt;&lt;span&gt;Why Technology Favors Tyranny&lt;/span&gt;&lt;/i&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;span&gt;. The Atlantic. https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p class="MsoNormal" style="text-align:justify; "&gt;&lt;span&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;span&gt;Morozov, E. (2013). &lt;/span&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;i&gt;&lt;span&gt;To save everything, click here: The folly of technological solutionism&lt;/span&gt;&lt;/i&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g"&gt;&lt;span&gt; (First edition). PublicAffairs.&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g" style="text-align: justify; "&gt;Siapera, E. (2022). AI Content Moderation, Racism and (de)Coloniality. &lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g" style="text-align: justify; "&gt;&lt;i&gt;International Journal of Bullying Prevention&lt;/i&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g" style="text-align: justify; "&gt;, &lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g" style="text-align: justify; "&gt;&lt;i&gt;4&lt;/i&gt;&lt;/a&gt;&lt;a href="https://www.zotero.org/google-docs/?ZHb88g" style="text-align: justify; "&gt;(1), 55–65. https://doi.org/10.1007/s42380-021-00105-7&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/cis-seminar-series'&gt;https://cis-india.org/internet-governance/blog/cis-seminar-series&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Cheshta Arora</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Machine Learning</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Event</dc:subject>
    
    
        <dc:subject>Seminar</dc:subject>
    

   <dc:date>2022-04-11T15:19:11Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation">
    <title>Artificial Intelligence for Growth: Leveraging AI and Robotics for India's Economic Transformation</title>
    <link>https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation</link>
    <description>
        &lt;b&gt;Amber Sinha took part in the second international conference organized by ASSOCHAM at Hotel Shangri-La in New Delhi on April 27, 2018.&lt;/b&gt;
        &lt;h3&gt;Keynote Address&lt;/h3&gt;
&lt;p&gt;12.15 p.m. - 12.30 p.m.: Shri Gopalakrishnan S., Joint Secretary, Ministry of Electronics and IT, Government of India&lt;/p&gt;
&lt;h3&gt;Special Address&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;12.30 p.m. - 12.45 p.m.: Dr. Pushpak Bhattacharyya, Director and Professor, Computer Science and Engg, IIT Patna and Chairman, BIS Committee for Standardisation in Artificial Intelligence&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Panel Discussion&lt;/h2&gt;
&lt;h3&gt;Session Moderator&lt;/h3&gt;
&lt;p&gt;12.45 p.m. - 1.40 p.m.: Shri Sudipta Ghosh, India Leader, Data and Analytics, PwC&lt;/p&gt;
&lt;h3&gt;Panelists&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Shri Amber Sinha, Senior Programme Manager, Centre for Internet and Society&lt;/li&gt;
&lt;li&gt;Shri Utpal Chakraborty, Lead Architect - AI, L&amp;amp;T Infotech&lt;/li&gt;
&lt;li&gt;Shri Atul Rai, CEO &amp;amp; Co-Founder, Staqu Technologies&lt;/li&gt;
&lt;li&gt;Shri Prabhat Manocha, IBM&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation'&gt;https://cis-india.org/internet-governance/news/artificial-intelligence-for-growth-leveraging-ai-and-robotics-for-indias-economic-transformation&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-05-05T09:08:07Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier">
    <title>Roundtable Discussion on “The Future of AI Policy in India” @ ICRIER</title>
    <link>https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier</link>
    <description>
        &lt;b&gt;Radhika Radhakrishnan attended a Roundtable Discussion on “The Future of AI Policy in India” organized by the Indian Council for Research on International Economic Relations (ICRIER) in New Delhi on July 1, 2019, to arrive at actionable recommendations for the promotion of AI in India.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Radhika's inputs primarily focused on - capacity and skilling for AI adoption in India, sectoral opportunities for the adoption of AI, regulation of explanations for AI, fairness and bias in AI models, and actionable recommendations for government priorites for AI policies in India.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Concept Note&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;India’s Artificial Intelligence moment is truly here and now. At a time when a diverse range of applications based on AI are being developed, pushing its frontier further into uncharted realms of business and society, Indian policy makers are contemplating not just AI’s potential for growth and social transformation, but also its proclivity to create divides and inequality. Our study attempts to understand the impacts of AI and trace the pathways to realizing it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;AI’s transformational potential stems from its ability to lend itself to a diverse range of applications across a range of sectors. One can witness AI based applications in traditional spheres of manufacturing, which are transforming quality control, production lines, and supply chain management, and in services, which are creating personalized product offerings and high-quality customer engagement. AI applications are also common in sectors such as agriculture that have taken a back seat in technological innovations in the post-industrial world. AI also demonstrates potential for impacting developmental challenges by responding to societies’ immediate demand for healthcare, education and expanding access to finance and banking.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The consequences of AI diffusion stem from AI’s pervasiveness across society, its ability to trigger innovation, and its tendencies to undergo transformation and evolution. These are typical characteristics of a class of technologies that can be found across history, the emergence and diffusion of which have enabled the wealth of nations. These are called General Purpose Technologies (GPT). Technologies such as steam engine, electricity, computers, semi-conductors, and more recently the Internet, can all be conceived as belonging to the GPT class of technologies. Our study is based on the understanding that the implications of AI can be best understood by viewing AI as a GPT.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Historically, the economic impacts of GPTs have not been immediate but follow after its diffusion across the economy, i.e. over a period of time. There are two reasons that explain this phenomenon: firstly, in early phases of technology diffusion, an economy diverts part of its resources from productive activities to costly activities aimed at enabling the GPT. For instance, organizations adopting computers must also invest in training employees or hire computer scientists, re-arrange production activities or organizational structures to accommodate computer driven work-flows, all of which are costly economic activities. Secondly, it is only after the GPT is diffused and widely used in the economy that the statistics measuring GDP start counting and fully measuring the GPT.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Empirical research on GPTs such as AI, including ours, means confronting the challenge of measurement. Estimates on the economic impact of AI are bound to be imprecise because data on AI’s adoption is not available or adequately reflected in the data used to compute economic growth, at least not yet. Measuring the economic impact of AI is also difficult because of the magnitude of indirect effects on productivity that GPTs trigger. It is not therefore uncommon that studies on GPTs, while attempting to estimate their economic impacts, also engage in in-depth case studies and historical analysis of its impacts.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Our findings show unambiguous and positive impacts of AI on firm level productivity across sectors, although there is variation in the magnitude of positive impacts across sectors. We complement our findings with case studies that cover different firms that are developing AI based applications across a range of sectors to understand the underlying firm-level capabilities that drive innovations in AI based applications. Our study leads us towards high-level policy challenges facing organizations, civil society and government, and which when addressed enable the full realization of economic growth triggered by AI.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, our conclusions are a step-away from actionable policy recommendations. Given your experience with and within India’s AI based ecosystem, we invite you to deliberate and recommend insights and strategies that can help us arrive at concrete and practicable policy recommendations towards achieving a growth and welfare enhancing AI-based ecosystem in India.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Proposed Questions for Deliberation&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span&gt;In which sectors do we observe an immediate opportunity for the adoption of AI? What could be the nature of these applications?&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;In which areas of AI development and application is there an immediate opportunity for governments, industry and academia to collaborate?&lt;/li&gt;
&lt;li&gt;What should be the Government’s top five priorities in the next one year to catalyse the growth of AI in India?&lt;/li&gt;
&lt;li&gt;Which agencies of the Government should be involved in implementing India’s National AI mission, and how?&lt;/li&gt;
&lt;li&gt;What aspects of the Government’s capacity require enhancement to adapt to the challenges of a growing Indian AI ecosystem?&lt;/li&gt;
&lt;li&gt;What measures can the Government take to regulate for AI safety and ethical use of AI?&lt;/li&gt;
&lt;li&gt;What are the policy measures that the Government can undertake to safeguard against the consequences of AI based inequality?&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier'&gt;https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-10T01:46:36Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake">
    <title>Deepfakes: Algorithms at war, trust at stake</title>
    <link>https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake</link>
    <description>
        &lt;b&gt;A case in point is the video that surfaced of an Indian journalist not so long ago.&lt;/b&gt;
        &lt;p&gt;The article by Rajmohan Sudhakar was published in &lt;a class="external-link" href="https://www.deccanherald.com/metrolife/metrolife-on-the-move/deepfakes-algorithms-at-war-trust-at-stake-747042.html"&gt;Deccan Herald&lt;/a&gt; on July 14, 2019. Elonnai Hickok was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Now machines are learning to manipulate imagery. That is a real worry. Deepfakes for instance. They are AI-manipulated videos achieved by machine learning. Products of the humongous volume of images and videos now available online.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The danger is, this imagery could be yours or mine. Imagine artificial intelligence of neural networks creating convincing identities of our real counterparts, and starts posting videos. Absurd.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Society has grappled with spurious and specious content in media over time. Media has been modified for various reasons, usually by those with access to significant resources and influence in the past,” says Elonnai Hickok, COO of the Bengaluru-based Centre for Internet and Society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;From an AI and machine learning perspective, deepfakes could be understood by what is known as GAN -- generative adversarial networks, essentially two algorithms at war. One is a generator, the other a discriminator. They compete with each other based on set inputs, in time bettering the version they together help create. These are behind what are now known as deepfakes of popular figures floating around online. Barack Obama is seen saying in a purported deepfake, “stay woke bitches”, which of course he did not say.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another deepfake has Mark Zuckerberg boasting: “I have total control of billions of people’s stolen data, all their secrets, their lives, their futures.” “Deepfakes are media modified by current technology and techniques. Easy availability of technology and media allows anyone to create, tailor or manipulate media for their own ends. Deepfakes present an opportunity for introspection and research into the contours of freedom of expression as well as societal frameworks for dealing with fake content,” explains Hickok.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One of the horrid instances of a deepfake-like attack was the video that surfaced of an Indian woman journalist not so long ago. Or the child-kidnapping rumours that spread through WhatsApp and the subsequent mob lynchings. However, there’s the view that in post-truth times, deepfakes would be seen with caution in the inherent dilemma over believing what one views online.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“In India, people do not take these so seriously, especially on social media. It is mostly entertainment for many. Now, we are seeing people with diametrically opposing views. They often view content which they like to see. It would rather work as a reinforcer of views than a transformer,” feels political analyst Sandeep Shastri.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Open source software can create basic deepfakes if someone wanted to hurt somebody. The potential scale of danger and damage looms larger for influential figures and nations at war.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“While deep fakes can be used to damage societies, it is important that collectively society takes steps to become sensitised to ways that media can be used to manipulate opinions and choices, and allow people to develop skills that build awareness and context to what they see and believe,” adds Hickok.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A video emerged recently of an ‘Iranian’ boat near an attacked oil tanker in the Persian Gulf. Deepfake or not, the authenticity of the video was questionable. If used wily, it could have triggered a war.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to Hickok, society has to get more resilient to manipulation. “This includes spoken, written, seen as well as heard information. We have to learn to question the basis on which we confirm trust. Multiple forms of verification may help to address spurious media and information,” she says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Deepfakes are no surprise as social media feed into the small and large divisions and differences of multitudes. Emergence of such potentially dangerous AIs isn’t taken quite seriously by the tech czars. In fact, it is a matter of economy for them.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Oscar Schwartz writes in The Guardian that ‘technological solutionism’ in the ‘attention economy’ may not be the real approach. “And herein lies the problem: by formulating deepfakes as a technological problem, we allow social media platforms to promote technological solutions to those problems – cleverly distracting the public from the idea that there may be more fundamental problems with powerful Silicon Valley tech platforms,” Schwartz warns.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“The measures do not fall on the regulators alone. I think, individuals (by introspection and building awareness), society (through education), the legal system (stringent evidentiary requirements and capacity building) industry (differentiating recreational and prejudicial content, tagging content that is manipulated, etc.) and regulators (enabling accountability, oversight, transparency and redress) can all contribute to a more resilient society,” observes Hickok.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In India, viewing a video is still considered close to truth, almost sacred by the vast majority. Necessarily, it would not require a technologically advanced deepfake, especially in the backward rural pockets, to rile up and aggravate biases and prejudices.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Deepfakes can further existing biases and manipulate opinions and choices. They can disrupt trust inherent in societal groups to co-exist and politically, they can breed distrust in leadership and capability. That said, deepfakes can be used for humour and satire. Ultimately, the impact will be shaped by a number of factors including pre-existing biases, individual response, etc.,” Hickok elaborates.&lt;/p&gt;
&lt;p&gt;On a lighter note, deepfakes could be helpful too. We could very well do away with some of our television news presenters.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake'&gt;https://cis-india.org/internet-governance/news/deccan-herald-july-14-2019-rajmohan-sudhakar-deepfakes-algorithms-at-war-trust-at-stake&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Rajmohan Sudhakar</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-21T15:42:12Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance">
    <title>Emergence of Chinese Technology: Rising stakes for innovation, competition and governance</title>
    <link>https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance</link>
    <description>
        &lt;b&gt;Omidyar Network in partnership with the Esya Centre organized a private discussion on the theme “Emergence of Chinese technology - rising stakes for innovation, competition and governance” on Monday, 12 August 2019 in New Delhi. Arindrajit Basu attended the event. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;China Ascendant: Soft Power report by ON focuses on three prongs of power-digital power, fore power and sharp power. Standards have been a major avenue for proliferation of Chinese competition.This is combined with knowledge transfer as 2.8 million Chinese students in the US have largely returned to tech companies in China. Core strength is still not in basic research so by 2020, aiming for 15 per cent of PhD.s to be in basic research. China uses nudges in shaping global governance outcomes by targeting the right stakeholders as opposed to altering the ground rules entirely,  Universities in China have focused on how cultural connections can be linked upto negotiating prowess at multilateral fora.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;China takes a whole-of-government approach to technology innovation, and continues to be consumer focused.&lt;/li&gt;
&lt;li&gt;China does not look at India as an R&amp;amp;D partner so much as a market. Instability and unpredictability have been an issue. None of India's tech policies were drafted with China in mind.&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance'&gt;https://cis-india.org/internet-governance/news/emergence-of-chinese-technology-rising-stakes-for-innovation-competition-and-governance&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-08-19T14:03:21Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/telecom/news/policy-lab-on-artificial-intelligence-democracy">
    <title>Policy Lab on Artificial Intelligence &amp; Democracy</title>
    <link>https://cis-india.org/telecom/news/policy-lab-on-artificial-intelligence-democracy</link>
    <description>
        &lt;b&gt;Shweta Mohandas participated in a policy lab on Artificial Intelligence &amp; Democracy in India organised by Tandem Research, in partnership with Microsoft Research and Friedrich-Ebert-Stiftung on 2 &amp; 3 April, 2019, in Bangalore.
&lt;/b&gt;
        
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/telecom/news/policy-lab-on-artificial-intelligence-democracy'&gt;https://cis-india.org/telecom/news/policy-lab-on-artificial-intelligence-democracy&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-04-12T01:32:32Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
