<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">




    



<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 11 to 25.</description>
  
  
  
  
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/responsible-ai-workshop"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/ai-in-healthcare"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/policies-for-the-platform-economy"/>
        
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light">
    <title>Insult to Kannada shows Google AI in a poor light</title>
    <link>https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light</link>
    <description>
        &lt;b&gt;A Google search for ‘the ugliest language in India’ yielded ‘Kannada’ as the answer late last week, causing widespread outrage.
&lt;/b&gt;
        &lt;p&gt;The article by Krupa Joseph was &lt;a class="external-link" href="https://www.deccanherald.com/metrolife/metrolife-your-bond-with-bengaluru/insult-to-kannada-shows-google-ai-in-a-poor-light-995307.html"&gt;published in Deccan Herald&lt;/a&gt; on June 8, 2021. Pranesh Prakash and Shweta Mohandas have been quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Google has since apologised, saying the answer does not reflect its views, but questions still remain about why this happened at all, and who drafted the answer.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“When artificial intelligence gets it wrong, things can go really wrong,” says tech entrepreneur Hari Prasad Nadig, who has worked on Kannada in free and open source software. “Usually, you would expect Google to give an answer based on citations from multiple sources, and at least one or two credible sources.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Google’s AI should be good enough not to draw answers from opinionated sources,” he says. Google shouldn’t even try to answer prejudiced questions like this in the first place, and the answer shows how flawed it is, he told Metrolife.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Fallible process&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Pranesh Prakash, Centre for Internet and Society, Bengaluru, says the incident exposes the fallibility of the process by which Google selects its “featured snippets”.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“It is not an opinion that Google or its employees or its algorithms have come up with, but rather an existing opinion that Google wrongly amplified,” he says. It demonstrates that the snippets that Google features as ‘facts’ aren’t necessarily based on facts, he says.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Periodic checks&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Shweta Mohandas, researcher with the Centre for Internet and Society, says Google does not create content, but only provides content that is available on the Internet.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Hence, the biases come from the tags that are then used to train the AI. There should be periodic checks on the data fed into the system,” she says. Such blunders can be prevented if the tags and results are audited periodically, and a mechanism is put in place to enable people to report them, she says.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Who was up to mischief?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The answer was created on a financial services website whose owners aren’t revealing their names. Pavanaja UB, CEO, Vishva Kannada Softech, says the answer was attributed to a website called debt consolidations questions.com — but he was unable to find this post anywhere on the site. “This is a website registered in Russia and it offers questions and answers on many topics. But this particular page could not be found. Maybe it was removed following the outrage,” he says. Pavanaja believes this was a deliberate attempt to upset people. “The website lists no information about the owner and gives no contact details. Even if such a question did exist on the page before, how did it get to the top of the Google search results?” he wonders.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He suggests that someone planted the answer and kept searching for it until it reached the top. “But who would take so much effort?” he says.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Furore and after&lt;/h3&gt;
&lt;p&gt;‘Kannada’ came up as an answer to a query in Google about ‘the ugliest language in India’.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Aravind Limbavali, minister for Kannada and Culture, demanded an apology from Google, and threatened legal action against the company “for maligning the image of our beautiful language.”&lt;/p&gt;
&lt;p&gt;Google removed the answer and issued a statement:&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“We know this is not ideal, but we take swift corrective action when we are made aware of an issue and are continually working to improve our algorithms. Naturally, these are not reflective of the opinions of Google, and we apologise for the misunderstanding and hurting any sentiments.”&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light'&gt;https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Krupa Joseph</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2021-06-26T05:25:38Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data">
    <title>The Wolf in Sheep's Clothing: Demanding your Data</title>
    <link>https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data</link>
    <description>
        &lt;b&gt;The increasing digitalization of the economy and ubiquity of the Internet, coupled with developments in Artificial Intelligence (AI) and Machine Learning (ML), have given rise to transformational business models across several sectors.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;This piece was originally published in &lt;a class="external-link" href="https://telecom.economictimes.indiatimes.com/tele-talk/the-wolf-in-sheep-s-clothing-demanding-your-data/4497"&gt;The Economic Times Telecom&lt;/a&gt; on 8 September 2020.&lt;/p&gt;
&lt;p&gt;The increasing digitalization of the economy and ubiquity of the &lt;a href="https://telecom.economictimes.indiatimes.com/tag/internet"&gt;Internet&lt;/a&gt;, coupled with developments in &lt;a href="https://telecom.economictimes.indiatimes.com/tag/artificial+intelligence"&gt;Artificial Intelligence&lt;/a&gt; (AI) and Machine Learning (ML), have given rise to transformational business models across several sectors. These developments have changed the very structure of existing sectors, with a few dominant firms straddling many sectors. The position of these firms is entrenched due to the large amounts of data they hold, their use of sophisticated algorithms that deliver highly targeted services and content, and their global nature.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Such data-based network businesses are generally multi-sided platforms subject to network effects and winner-takes-all phenomena, often making traditional competition regulation inappropriate. In addition, there has been concern that such companies hurt competition because they own the large amounts of data, collected globally, on which new services are predicated. Also, since users are reluctant to share their data on multiple platforms, new companies find it very challenging to emerge. Several of the large companies are of US origin. Regions and countries such as the EU, the UK, and India are concerned that while these companies benefit from the data of their citizens or their &lt;a href="https://telecom.economictimes.indiatimes.com/tag/devices"&gt;devices&lt;/a&gt;, SMEs and other companies in their own countries find it increasingly difficult to remain viable or achieve scale. With the objective of supporting enterprises, including SMEs, in their own countries, Europe, the UK, and India are at different stages of data regulation initiatives.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;In India, the &lt;a href="https://telecom.economictimes.indiatimes.com/tag/personal+data+protection"&gt;Personal Data Protection&lt;/a&gt; (PDP) Bill, 2019 deals with the framework for collecting, managing, and transferring the personal data of Indian citizens, including mandating the sharing of anonymized data of individuals and non-personal data for better targeting of services or policy making. In addition, the Report by the Committee of Experts (CoE) on Non-Personal Data (NPD) came up with a framework for regulating NPD. Since the NPD Report is the more recent development, this article analyzes some aspects of it.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;According to the CoE, non-personal data can be of two types. First, data or information that was never about an individual (e.g. weather data). Second, data or information that once related to an individual (e.g. a mobile number) but has ceased to be identifiable due to the removal of certain identifiers through the process of ‘anonymisation’. However, it may be possible to recover personal data from such anonymized data, so the distinction between personal and non-personal is not clean. In any case, the PDP Bill, 2019 deals with personal data. If the CoE felt that some aspects of personal data (including anonymized data) were not adequately dealt with, it should have worked to strengthen that framework. The current approach of the CoE is bound to create confusion and overlapping jurisdiction. Since anonymized data is required to be shared, there are disincentives to anonymization, causing greater risk to individual privacy.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;A new class of business based on a “&lt;em&gt;horizontal classification cutting across different industry sectors&lt;/em&gt;” is defined. This refers to any business that derives “&lt;em&gt;new or additional economic value from data, by collecting, storing, processing, and managing data&lt;/em&gt;”
 based on a certain threshold of data collected/processed that will be 
defined by the regulatory authority that is outlined in the report. The 
CoE also recommends that “&lt;em&gt;Data Businesses will provide, within India, open access to meta-data and regulated access to the underlying data&lt;/em&gt;” without any remuneration. Further, “&lt;em&gt;By
 looking at the meta-data, potential users may identify opportunities 
for combining data from multiple Data Businesses and/or governments to 
develop innovative solutions, products and services. Subsequently, data 
requests may be made for the detailed underlying data&lt;/em&gt;”.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;With increasing digitalization, today almost every business is a data business. The problem with such categorization lies in defining thresholds. It is likely that even a small video-sharing app or an AR/VR app would store, collect, process, and transmit more data, by volume, than, say, a mid-sized bank. Further, with the increasing embedding of &lt;a href="https://telecom.economictimes.indiatimes.com/tag/iot"&gt;IoT&lt;/a&gt; in various aspects of our lives and businesses (smart manufacturing, logistics, banking, etc.), the amount of data captured by even small entities can be huge.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;The private sector, driven by profitability, identifies innovative business models, risks capital, and finds unique ways of capturing and melding different data sets. In order to sustain economic growth, such innovation is necessary. The private sector would also like legal protection over these aspects of its businesses, including the unique IPR that may be embedded in the processing of data or in its business processes. But mandating such onerous sharing requirements, as the CoE does, is going to kill any private initiative. Any regulatory regime must balance the need to provide a secure environment for protecting the data of incumbents against making it available to SMEs and other businesses.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Metadata provides insights into a company’s databases and processes, which are a source of competitive advantage for any company. Metadata is not without context. Demands for such disclosure rest with the proposed NPD Regulator, who would evaluate the purpose behind them. In practice, purposes are open to interpretation, and the structure of the appeal mechanism is going to stall any such sharing. Would such sharing mandates not interfere with existing Intellectual Property Rights? Or the freedom to contract? Any innovation could easily be made available to a competitor that front-ends itself with a start-up. To mandate making such data available would not be fair. Further, how would the NPD Regulator even ensure that such data is used for the purpose (which the proposed regulator is supposed to evaluate) for which it is sought? In Europe, where such &lt;a href="https://telecom.economictimes.indiatimes.com/tag/data+sharing"&gt;data sharing&lt;/a&gt; mandates are being considered, the focus is on public data. For private entities, sharing is largely based on voluntary contributions. Compulsory sharing is mandated only in restricted situations where market failures are not addressed through the Competition Act, and provided the legitimate interests of the data holder and existing legal provisions are taken into account.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Further, the compliance requirements for such Data Businesses are very onerous and make a mockery of the government’s “minimum government” framework. The CoE recommends that all Data Businesses, whether government, NGO, or private, are “&lt;em&gt;to disclose data elements collected, stored and processed, and data-based services offered&lt;/em&gt;”. As if this were not enough, the CoE further recommends that “&lt;em&gt;Every Data Business must declare what they do and what data they collect, process and use, in which manner, and for what purposes (like disclosure of data elements collected, where data is stored, standards adopted to store and secure data, nature of data processing and data services provided). This is similar to disclosures required by pharma industry and in food products&lt;/em&gt;”. Such disclosures are necessary in those industries because the companies in them deal with critical aspects of human life. But are such requirements necessary for all activities and businesses? As long as organizations collect and process data legally, within the sectoral regulation, why should such information have to be “reported”? Such bureaucratic processes and reporting requirements are only going to burden existing legitimate businesses and give rise to a thriving regulatory license raj.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Further questions arise: How is any compliance agency going to make sure that all the underlying metadata is made available in a timely manner? As companies respond to a dynamic environment, their analysis and analytical tools change, and so does their metadata. This inherent aspect of business raises the question: At what point in time should companies make their metadata available? How will compliance be monitored?&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Conclusion: The CoE needs to create an enabling and facilitating environment for data sharing. The incentives for different types of entities to participate and contribute must be recognized. Adequate provisions for the risks and liabilities arising out of data sharing need to be thought through. National initiatives on data sharing should not create an onerous reporting regime, as envisaged by the CoE, even if that regime is digital.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p class="article-disclaimer"&gt;&lt;em&gt;DISCLAIMER:
 The views expressed are solely of the author and ETTelecom.com does not
 necessarily subscribe to it. ETTelecom.com shall not be responsible for
 any damage caused to any person/organisation directly or indirectly.&lt;/em&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data'&gt;https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Rekha Jain</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Data Protection</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-11-10T17:44:13Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall">
    <title>Comments on NITI Aayog Working Document: Towards Responsible #AIforAll</title>
    <link>https://cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall</link>
    <description>
        &lt;b&gt;The NITI Aayog Working Document on Responsible AI for All released on 21st July 2020 serves as a significant statement of intent from NITI Aayog, acknowledging the need to ensure that any conception of “Responsible AI” must fulfill constitutional responsibilities, incorporated through workable principles. However, as it is a draft document for discussion, it is important to highlight next steps for research and policy levers to build upon this report.&lt;/b&gt;
        
&lt;div&gt;&amp;nbsp;&lt;/div&gt;
&lt;div&gt;Read our comments in their entirety &lt;a href="https://cis-india.org/internet-governance/comments-to-aiforall-pdf" class="internal-link" title="Comments to AIForAll pdf"&gt;here&lt;/a&gt;.&lt;/div&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall'&gt;https://cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas, Arindrajit Basu and Ambika Tandon</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>internet governance</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-08-18T06:25:18Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency">
    <title>Towards Algorithmic Transparency</title>
    <link>https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency</link>
    <description>
        &lt;b&gt;This policy brief examines the issue of transparency as a key ethical component in the development, deployment, and use of Artificial Intelligence.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;This brief proposes a framework that seeks to overcome the challenges in preserving transparency when dealing with machine learning algorithms, and suggests solutions such as the incorporation of audits, and ex ante approaches to building interpretable models right from the design stage. Read the full report &lt;a href="https://cis-india.org/internet-governance/algorithmic-transparency-pdf" class="internal-link" title="Algorithmic Transparency PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The Regulatory Practices Lab at CIS aims to produce regulatory policy 
suggestions focused on India, but with global application, in an agile 
and targeted manner and to promote transparency around practices 
affecting digital rights. &lt;br /&gt;The Regulatory Practices Lab is supported by Google and Facebook.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency'&gt;https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Radhika Radhakrishnan and Amber Sinha</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Regulatory Practices Lab</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Featured</dc:subject>
    
    
        <dc:subject>Algorithms</dc:subject>
    
    
        <dc:subject>internet governance</dc:subject>
    
    
        <dc:subject>Transparency</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-07-15T13:16:44Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines">
    <title>Ethics and Human Rights Guidelines for Big Data for Development Research</title>
    <link>https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines</link>
    <description>
        &lt;b&gt;This is a four-part review of guideline documents for ethics and human rights in big data for development research. This research was produced as part of the Big Data for Development network supported by the International Development Research Centre, Canada.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4&gt;Part #1 - Review of Principles of Ethics in Biomedical Science: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/biomedicalscience" class="internal-link" title="CIS_BD4D_Guideline01_MS+AS_BiomedicalScience PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;h4&gt;Part #2 - Review of Principles of Ethics in Computer Science: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/computerscience" class="internal-link" title="CIS_BD4D_Guideline02_RS+AS_ComputerScience PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;h4&gt;Part #3 - Summary of Review of Codes of Ethics for Big Data and AI: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/AIEthicsReview" class="internal-link" title="CIS_BD4D_Guideline03_AS+PT_BigDataAIEthicsReview_SummaryNotes PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;h4&gt;Part #4 - Extended Review of Codes of Ethics for Big Data and AI: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/ExtendedNotes" class="internal-link" title="CIS_BD4D_Guideline04_PT+PB_BigDataAIEthicsReview_ExtendedNotes PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;hr /&gt;
&lt;p&gt;The rapid expansion in the volume, velocity, and variety of data available, together with the development of innovative forms of statistical analytics, is generally referred to as “big data”, though there is no single agreed-upon definition of the term. Big data promises to provide new insights and solutions across a wide range of sectors. Despite enormous optimism about the scope and variety of big data’s potential applications, many remain concerned about its widespread adoption, with some scholars suggesting it could generate as many harms as benefits. The predecessor disciplines of data science, such as computer science, applied mathematics, and statistics, have traditionally stayed outside the scope of ethical frameworks, on the assumption that they do not involve humans as subjects of their research. While critical study of big data is still in its infancy, there is a growing belief that there are significant discontinuities between the rapid growth of big data and the ethical frameworks that exist to govern its use. In this set of documents, we review these frameworks in detail.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines'&gt;https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amber Sinha, Manjri Singh, Rajashri Seal, Pranav Bhaskar Tiwari, Pranav M Bidare</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>BD4D</dc:subject>
    
    
        <dc:subject>RAW Research</dc:subject>
    
    
        <dc:subject>Big Data for Development</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-05-20T07:56:48Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report">
    <title>Panelist at launch of Google-UNESCAP AI Report</title>
    <link>https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report</link>
    <description>
        &lt;b&gt;Arindrajit Basu was a speaker at the panel launching the Google-UNESCAP AI Report at the GovInsider Forum held at the United Nations Convention Centre in Bangkok on October 16, 2019. &lt;/b&gt;
        &lt;p&gt;Click to &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/launch-the-ai-report"&gt;view the agenda&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report'&gt;https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-11-02T06:48:25Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future">
    <title>Farming the Future: Deployment of Artificial Intelligence in the agricultural sector in India</title>
    <link>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future</link>
    <description>
        &lt;b&gt;This case study was published as a chapter in the joint UNESCAP-Google publication titled Artificial Intelligence in Public Service Delivery. The chapter in its final form would not have been possible without the efforts and very useful interventions by our colleagues at Digital Asia Hub, Google, and UNESCAP.&lt;/b&gt;
        &lt;p&gt;&lt;img src="https://cis-india.org/home-images/Findings.jpg" alt="Findings" class="image-inline" title="Findings" /&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Although agriculture is a critical sector for India’s economic development, it continues to face many challenges including a lack of &lt;span&gt;modernization of agricultural methods, fragmented landholdings, erratic rainfalls, overuse of groundwater and a lack of access to &lt;/span&gt;&lt;span&gt;information on weather, markets and pricing. As state governments create policies and frameworks to mitigate these challenges, the &lt;/span&gt;&lt;span&gt;role of technology has often come up as a potential driver of positive change.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Farmers in the southern Indian states of Karnataka and Andhra Pradesh are facing significant challenges. For hundreds of years, these farmers have relied on traditional agricultural methods to make sowing and harvesting decisions, but now volatile weather patterns and shifting monsoon seasons are making such ancient wisdom obsolete. Farmers are unable to predict weather patterns or crop yields accurately, making it difficult for them to make informed financial and operational decisions associated with planting and harvesting. Erratic weather patterns particularly affect those farmers who reside in remote areas, cut off from meaningful access to infrastructure and information. In addition to a lack of vital weather information, farmers may lack information about market conditions and may then sell their crops to intermediaries at below-market prices.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Against this backdrop, the state governments and local partners in southern India teamed up with Microsoft to develop predictive AI services to help smallholder farmers to improve their crop yields and give them greater price control. Since 2016 three applications have been developed and applied for use in these communities, two of which are discussed in this case study: the AI-sowing app and the price forecasting model.&lt;/span&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="external-link" href="https://www.unescap.org/sites/default/files/publications/AI%20Report.pdf"&gt;Click to read&lt;/a&gt; the report here.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future'&gt;https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Elonnai Hickok, Arindrajit Basu, Siddharth Sonkar and Pranav M B</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-16T13:41:02Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art">
    <title>AI Opera- AI as a total work of art</title>
    <link>https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art</link>
    <description>
        &lt;b&gt;On October 11, 2019, Shweta Mohandas and Mira were invited as panelists for the 'AI Opera- AI as a total work of art' event organized by the Goethe-Institut as part of the India Week Hamburg 2019 held in Bangalore. CIS was an event partner.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The panel had to present different perspectives and possibilities of Artificial Intelligence (AI). The discussion was facilitated by German artist, performer and filmmaker Christoph Faulhaber. For more info, &lt;a class="external-link" href="https://www.goethe.de/ins/in/en/sta/ban/ver.cfm?fuseaction=events.detail&amp;amp;event_id=21670394"&gt;click here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art'&gt;https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-14T14:30:56Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision">
    <title>We need a better AI vision</title>
    <link>https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision</link>
    <description>
        &lt;b&gt;Artificial intelligence conjures up a wondrous world of autonomous processes but dystopia is inevitable unless rights and privacy are protected.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The blog post by Arindrajit Basu was published by&lt;a class="external-link" href="https://fountainink.in/essay/we-need-a-better-ai-vision-"&gt; Fountainink&lt;/a&gt; on October 12, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;he dawn of Artificial Intelligence (AI) has policy-makers across the globe excited. In India, it is seen as a tool to overleap structural hurdles and better understand a range of organisational and management processes while improving the implementation of several government tasks. Notwithstanding the apparent enthusiasm in the government and private sectors, an adequate technological, infrastructural, and financial capacity to develop these models at scale is still in the works.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A number of policy documents with direct or indirect references to India’s AI future—to be powered by vast troves of data—have been released in the past year and a half. These include the National Strategy for Artificial Intelligence (which I will refer to as National Strategy) authored by NITI Aayog, the AI Taskforce Report, Chapter 4 of the Economic Survey, the Draft e-Commerce Bill and the Srikrishna Committee Report.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While they extol the virtues of data-driven analytics, references to the preservation or augmentation of India’s constitutional ethos through AI has been limited though it is crucial for safeguarding the rights and liberties of citizens while paving the way for the alleviation of societal oppression.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In this essay, I outline the variety of AI use cases that are in the works. I then highlight India’s AI vision by culling the relevant aspects of policy instruments that impact the AI ecosystem and identify lacunae that can be rectified. Finally, I attempt to “constitutionalise AI policy” by grounding it in a framework of constitutional rights that guarantee protection to the most vulnerable sections of society.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;In the manufacturing industry, AI adoption is not uniform across all sectors. But there has been a notable transformation in electronics, heavy electricals and automobiles.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;It is crucial to note that these cases, still emerging in India, have been implemented at scale in other countries such as the United Kingdom, United States and China. Projects were rolled out to the detriment of ethical and legal considerations. Hindsight should make the Indian policy ecosystem much wiser. By closely studying the research produced in these diverse contexts, Indian policy-makers should try to find ways around the ethical and legal challenges that cropped up elsewhere and devise policy solutions that mitigate the concerns raised.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;B&lt;span&gt;efore anything else we need to define AI—an endeavour fraught with multiple contestations. My colleagues and I at the Centre for Internet &amp;amp; Society ducked this hurdle when conducting our research by adopting a function-based approach. An AI system (as opposed to one that automates routine, cognitive or non-cognitive tasks) is a dynamic learning system that allows for the delegation of some level of human decision-making to the system. This definition allows us to capture some of the unique challenges and prospects that stem from the use of AI.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The research I contributed to at CIS identified key trends in the use of AI across India. In healthcare, it is used for descriptive and predictive purposes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;For example, the Manipal Group of Hospitals tied up with IBM’s Watson for Oncology to aid doctors in the diagnosis and treatment of seven types of cancer. It is also being used for analytical or diagnostic services. Niramai Health Analytix uses AI to detect early stage breast cancer and Adveniot Tecnosys detects tuberculosis through chest X-rays and acute infections using ultrasound images. In the manufacturing industry, AI adoption is not uniform across all sectors. But there has been a notable transformation in the electronics, heavy electricals and automobiles sector gradually adopting and integrating AI solutions into their products and processes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It is also used in the burgeoning online lending segment in order to source credit score data. As many Indians have no credit scores, AI is used to aggregate data and generate scores for more than 80 per cent of the population who have no credit scores. This includes Credit Vidya, a Hyderabad-based data underwriting start-up that provides a credit score to first time loan-seekers and feeds this information to big players such as ICICI Bank and HDFC Bank, among others. It is also used by players such as Mastercard for fraud detection and risk management. In the finance world, companies such as Trade Rays are being used to provide user-friendly algorithmic trading services.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;AI is also being increasingly used in the education sector for providing services to students such as decision-making assistance and also for student-progress monitoring.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;The next big development is in law enforcement. Predictive policing is making great strides in various states, including Delhi, Punjab, Uttar Pradesh and Maharashtra. A brainchild of the Los Angeles Police Department, predictive policing is the use of analytical techniques such as Machine Learning to identify probable targets for intervention to prevent crime or to solve past crime through statistical predictions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Conventional approaches to predictive policing start with the mapping of locations where crimes are concentrated (hot spots) by using algorithms to analyse aggregated data sets. Police in Uttar Pradesh and Delhi have partnered with the Indian Space Research Organisation (ISRO) in a Memorandum of Understanding to allow ISRO’s Advanced Data Processing Research Institute to map, visualise and compile reports about crime-related incidents.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There are aggressive developments also on the facial recognition front. Punjab Police, in association with Gurugram-based start-up Staqu has started implementing the Punjab Artificial Intelligence System (PAIS) which uses digitised criminal records and automated facial recognition to retrieve information on the suspected criminal. At the national level, on June 28, the National Crime Records Bureau (NCRB) called for tenders to implement a centralised Automated Facial Recognition System (AFRS), defining the scope of work in broad terms as the “supply, installation and commissioning of hardware and software at NCRB.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;AI is also being increasingly used in the education sector for providing services to students such as decision-making assistance and also for student-progress monitoring. The Andhra Pradesh government had started collecting information from a range of databases and processes the information through Microsoft’s Machine Learning Platform to monitor children and devote student focussed attention on identifying and curbing school drop-outs.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In Andhra Pradesh, Microsoft collaborated with the International Crop Institute for Semi-Arid Tropics (ICRISAT) to develop an AI Sowing App powered by Microsoft’s Cortana Intelligence Suite. It aggregated data using Machine Learning and sent advisories to farmers regarding optimal dates to sow. This was done via text messages on feature phones after ground research revealed that not many farmers owned or were able to use smart phones. The NITI Aayog AI Strategy specifically cited this use case and reported that this resulted in a 10-30 per cent increase in crop yield. The government of Karnataka has entered into a similar arrangement with Microsoft.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Finally, in the defence sector, our research found enthusiasm for AI in intelligence, surveillance and reconnaissance (ISR) functions, cyber defence, robot soldiers, risk terrain analysis and moving towards autonomous weapons systems. These projects are being developed by the Defence Research and Development Organisation but the level of trust and support in AI-driven processes reposed by the wings of the armed forces is yet to be publicly clarified. India also had the privilege of leading the global debate on Lethal Autonomous Weapons Systems (LAWS) with Amandeep Singh Gill chairing the United Nations Group of Governmental Experts (UN-GGE) on the issue. However, ‘lethal’ autonomous weapons systems at this stage appear to be a speck in the distant horizon.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A&lt;span&gt;long with the range of use cases described above, a patchwork of policy imperatives is emerging to support this ecosystem. The umbrella document is the National Strategy for Artificial Intelligence published by the NITI Aayog in June 2018. Despite certain lacunae in its scope, the existence of a cohesive and robust document that lends a semblance of certainty and predictability to a rapidly emerging sphere is in itself a boon. The document focuses on how India can leverage AI for both economic growth and social inclusion. The contents of the document can be divided into a few themes, many of which have also found their way into multiple other instruments.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;NITI Aayog provides over 30 policy recommendations on investment in scientific research, reskilling, training and enabling the speedy adoption of AI across value chains. The flagship research initiative is a two-tiered endeavour to boost AI research in India. First, new centres of research excellence (COREs) will develop fundamental research. The COREs will act as feeders for international centres for transformational AI which will focus on creating AI-based applications across sectors.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/AIinCountries.jpg/@@images/16b4af34-cb6d-423c-be35-e45a60d501cf.jpeg" alt="AI in Countries" class="image-inline" title="AI in Countries" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This is an impressive theoretical objective but questions surrounding implementation and structures of operation remain to be answered. China has not only conceptualised an ecosystem but through the Three Year Action Plan to Promote the Development of New Generation Artificial Intelligence Industry, it has also taken a whole-of-government approach to propelling the private sector to an e-leadership position. It has partnered with national tech companies and set clear goals for funding, such as the $2.1 billion technology park for AI research in Beijing.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The contents of the NITI document can be divided into a few themes, many of which have also found their way into multiple other instruments. First, it proposes an “AI+X” approach that captures the long-term vision for AI in India. Instead of replacing the processes in their entirety, AI is understood as an enabler of efficiency in processes that already exist. NITI Aayog therefore looks at the process of deploying AI-driven technologies as taking an existing process (X) and adding AI to them (AI+X). This is a crucial recommendation all AI projects should heed. Instead of waving AI as an all-encompassing magic wand across sectors, it is necessary to identify specific gaps AI can seek to remedy and then devise the process underpinning this implementation.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;A cacophony of policy instruments by multiple government departments seeks to reconceptualise data to construct a theoretical framework that allows for its exploitation for AI-driven analytics.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;The AI-driven intervention to develop sowing apps for farmers in Karnataka and Andhra Pradesh are examples of effective implementation of this approach. Instead of other knee-jerk reactions to agrarian woes such as a hasty raising of Minimum Support Price, effective research was done in this use-case to identify a lack of predictability in weather patterns as a key factor in productive crop yields. They realised that aggregation of data through AI could provide farmers with better information on weather patterns. As internet penetration was relatively low in rural Karnataka, text messages to feature phones that had a far wider presence was indispensable to the end game.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;T&lt;span&gt;his is in contrast to the ill-conceived path adopted by the Union ministry of electronics and information technology in guidelines for regulating social media platforms that host content (“intermediaries”). Rule 3(9) of the Draft of the Information Technology [Intermediary Guidelines (Amendment) Rules] 2018 mandates intermediaries to use “automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content”.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Proposed in light of the fake news menace and the unbridled spread of “extremist” content online, the use of the phrase “automated tools or appropriate mechanisms” is reflective of an attitude that fails to consider ground realities that confront companies and users alike. They ignore, for instance, the cost of automated tools: whether automated content moderation techniques developed in the West can be applied to Indic languages or grievance redress mechanisms users can avail of if their online speech is unduly restricted. This is thus a clear case of the “AI” mantra being drawn out of a hat without studying the “X” it is supposed to remedy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second focus of the National Strategy that has since morphed into a technology policy mainstay across instruments is on data governance, access and utilisation. The document says the major hurdle to the large scale adoption of AI in India is the difficulty in accessing structured data. It recommends developing big annotated data sets to “democratise data and multi-stakeholder marketplaces across the AI value chain”. It argues that at present only one per cent of data can be analysed as it exists in various unconnected silos. Through the creation of a formal market for data, aggregators such as diagnostic centres in the healthcare sector would curate datasets and place them in the market, with appropriate permissions and safeguards. AI firms could use available datasets rather than wasting effort sourcing and curating the sets themselves.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A cacophony of policy instruments by multiple government departments seeks to reconceptualise data to construct a theoretical framework that allows for its exploitation for AI-driven analytics.The first is “community data” and appears both in the Srikrishna Report that accompanied the draft Data Protection Bill in 2018 and the draft e-commerce policy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;But there appears to be some conflict between its usage in the two. Srikrishna endorses a collective protection of privacy by protecting an identifiable community that has contributed to community data. This requires the fulfilment of three key conditions: &lt;i&gt;first,&lt;/i&gt; the data belong to an identifiable community; &lt;i&gt;second, &lt;/i&gt;individuals in the community consent to being a part of it, and &lt;i&gt;third&lt;/i&gt;, the community as a whole consents to its data being treated as community data. On the other hand, the Department of Promotion of Industry and Internal Trade’s (DPIIT) draft e-commerce policy looks at community data as “societal commons” or a “national resource” that gives the community the right to access it but government has ultimate and overriding control of the data. This configuration of community data brings into question the consent framework in the Srikrishna Bill.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;The government’s attempt to harness data as a national resource for the development of AI-based solutions may be well-intentioned but is fraught with core problems in implementation.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;The matter is further confused by treating “data as a public good”. This is projected in Chapter 4 of the 2019 Economic Survey published by the Ministry of Finance. It explicitly states that any configuration needs to be deferential to privacy norms and the upcoming privacy law. The “personal data” of an individual in the custody of a government is also a “public good” once the datasets are anonymised. At the same time, it pushes for the creation of a government database that links several individual databases, which leads to the “triangulation” problem, where matching different datasets together allows for individuals to be identified despite their anonymisation in seemingly disparate databases.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Building an AI ecosystem” was also one of the ostensible reasons for data localisation—the government’s gambit to mandate that foreign companies store the data of Indian citizens within national borders. In addition to a few other policy instruments with similar mandates, Section 40 of the Draft Personal Data Protection Bill mandates that all “critical data” (this is to be notified by the government) be stored exclusively in India. All other data should have a live, serving copy stored in India even if transfer abroad is allowed. This was an attempt to ensure foreign data processors are not the sole beneficiaries of AI-driven insights.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The government’s attempt to harness data as a national resource for the development of AI-based solutions may be well intentioned but is fraught with core problems in implementation. First, the notion of data as a national resource or as a public good walks a tightrope with constitutionally guaranteed protections around privacy, which will be codified in the upcoming Personal Data Protection Bill. My concerns are not quite so grave in the case of genuine “public data” like traffic signal data or pollution data. However, the Economic Survey manages to crudely amalgamate personal data into the mix.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It also states that personal data in the custody of a government is a public good once the datasets are anonymised. This includes transactions data in the User Payments Interface (UPI), administrative data including birth and death records, and institutional data including data in public hospitals or schools on pupils or patients. At the same time, it pushes for a government database that will lead to the triangulation problem outlined above. The chapter also suggests that said data may be sold to private firms (unclear if this includes foreign or domestic firms). This not only contradicts the notion of public good but is also a serious threat to the confidentiality and security of personal data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;T&lt;span&gt;herefore, along with the concerted endeavour to create data marketplaces, it is crucial for policy-makers to differentiate between public data and personal data individuals may consent to be made public. The parameters for clearly defining free and informed consent, as codified in the Draft Personal Data Protection Bill need to be strictly followed as there is a risk of de-anonymisation of data once it finds its way into the marketplace. Second, it is crucial for policy-makers to define clearly a community and parameters for what constitutes individual consent to be part of a community. Finally, along with technical work on setting up a national data marketplace, there must be protracted efforts to guarantee greater security and standards of anonymisation.&lt;/span&gt;&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;The National Strategy  mentions that India should position itself as a “garage” for AI in emerging economies. This could mean Indian citizens are used as guinea pigs for AI-driven solutions at the cost of their rights.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;Assuming that a constitutionally valid paradigm may be created, the excessive focus on data access by tech players dodges the question of the capabilities of analytic firms to process this data and derive meaningful insights from the information. Scholars on China, arguably the poster-child of data-driven economic growth, have sent mixed messages. Ding argues that despite having half the technical capabilities of the US, easy access to data gives China a competitive edge in global AI competition. On the contrary, Andrew Ng has argued that operationalising a sufficient number of relevant datasets still remains a challenge. Ng’s views are backed up by insiders at Chinese tech giant Tencent who say the company still finds it difficult to integrate data streams due to technical hurdles. NITI Aayog’s idea of a multi-stream data marketplace may theoretically be a solution to these potential hurdles but requires sustained funding and research innovation to be converted into reality.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The National Strategy suggests that government should create a multi-disciplinary committee to set up this marketplace and explore levers for its implementation. This is certainly the need of the hour. It also rightly highlights the importance of research partnerships between academia and the private sector, and the need to support start-ups. There is therefore an urgent need for innovative allied policy instruments that support the burgeoning start-up sector. Proposals such as data localisation may hurt smaller players as they will have to bear the increased fixed costs of setting up or renting data centres.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The National Strategy also incongruously mentions that India should position itself as a “garage” for the use of AI in emerging economies. This could mean Indian citizens are used as guinea pigs for AI-driven solutions at the cost of their fundamental rights. It could also imply that India should occupy a leadership position and work with other emerging economies to frame the global rights based discourse to seek equitable solutions for the application of AI that works to improve the plight of the most vulnerable in society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;O&lt;span&gt;ur constitutional ethos places us in a unique position to develop a framework that enables the actualisation of this equitable vision—a goal the policy instruments put out thus far appear to have missed. While the National Strategy includes a section on privacy, security and ethical implications of AI, it stops short of rooting it in fundamental rights and constitutional principles. As a centralised policy instrument, the National Strategy deserves praise for identifying key levers in the future of India’s AI ecosystem and, with the exception of the concerns I outlined above, it is at par with the policy-making thought process in any other nation.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;When we start the process of using constitutional principles for AI governance, we must remember that as per Article 12, an individual can file a writ against the state for violation of a fundamental right if the action is taken under the aegis of a “public function”. To combat discrimination by private actors, the state can enact legislation compelling private actors to comply with constitutional mandates. In July, Rajeev Chandrashekhar, a Rajya Sabha MP, suggested a law to combat algorithmic discrimination along the lines of the Algorithmic Accountability Bill proposed in the US Senate. There are three core constitutional questions along the lines of the “golden triangle” of the Indian Constitution any such legislation will need to answer—those of accountability and transparency, algorithmic discrimination and the guarantee of freedom of expression and individual privacy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Algorithms are developed by human beings who have their own cognitive biases. This means ostensibly neutral algorithms can have an unintentional disparate impact on certain, often traditionally disenfranchised groups.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the &lt;i&gt;MIT Technology Review&lt;/i&gt;, Karen Hao explains three stages at which bias might creep in. The first stage is the framing of the problem itself. As soon as computer scientists create a deep-learning model, they decide what they want the model to finally achieve. However, frequently desired outcomes such as “profitability”, “creditworthiness” or “recruitability” are subjective and imprecise concepts subject to human cognitive bias. This makes it difficult to devise screening algorithms that fairly portray society and the complex medley of identities, attributes and structures of power that define it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second stage Hao mentions is the data collection phase. Training data could lead to bias if it is unrepresentative of reality or represents entrenched prejudice or structural inequality. For example, most Natural Language Processing systems used for Parts of Speech (POS) tagging in the US are trained on the readily available data sets from the &lt;i&gt;Wall Street Journal&lt;/i&gt;. Accuracy would naturally decrease when the algorithm is applied to individuals—largely ethnic minorities—who do not mimic the speech of the &lt;i&gt;Journal&lt;/i&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to Hao, the final stage for algorithmic bias is data preparation, which involves selecting parameters the developer wants the algorithm to consider. For example, when determining the “risk-profile” of car owners seeking insurance premiums, geographical location could be one parameter. This could be justified by the ostensibly neutral argument that those residing in inner-city areas with narrower roads are more likely to have scratches on their vehicles. But as inner cities in the US have a disproportionately high number of ethnic minorities or other vulnerable socio-economic groups, “pin code” becomes a facially neutral proxy for race or class-based discrimination.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;T&lt;span&gt;he right to equality has been carved into multiple international human rights instruments and into the Equality Code in Articles 14-18 of the Indian Constitution. The dominant approach to interpreting the right to equality by the Supreme Court has been to focus on “grounds” of discrimination under Article 15(1), thus resulting in a lack of recognition of unintentional discrimination and disparate impact.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A notable exception, as constitutional scholar Gautam Bhatia points out, is the case of &lt;i&gt;N.M. Thomas &lt;/i&gt;which pertained to reservation in promotions. Justice Mathew argued that the test for inequality in Article 16(4) is an effects-oriented test independent of the formal motivation underlying a specific act. Justice Krishna Iyer and Mathew also articulated a grander vision wherein they saw the Equality Code as transcending the embedded individual disabilities in class driven social hierarchies. This understanding is crucial for governing data driven decision-making that impacts vulnerable communities. Any law or policy on AI-related discrimination must also include disparate impact within its definition of “discrimination” to ensure that developers think about the adverse consequences even of well-intentioned decisions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;AI driven assessments have been challenged on grounds of constitutional violations in other jurisdictions. In 2016, the Wisconsin Supreme Court considered the legality of using risk assessment tools such as COMPAS for sentencing criminals. It affirmed the trial court’s findings and held that using COMPAS did not violate constitutional due process standards. Eric Loomis had argued that using COMPAS infringed both his right to an individualised sentence and to accurate information as COMPAS provided data for specific groups and kept the methodology used to prepare the report a trade secret. He additionally argued that the court used unconstitutional gendered assessments as the tool used gender as one of the parameters.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Wisconsin Supreme Court disagreed with Loomis arguing that COMPAS only used publicly available data and data provided by the defendant, which apparently meant Loomis could have verified any information contained in the report. On the question of individualisation, the court argued that COMPAS provided only aggregate data for groups similarly placed to the offender. However, it went on to argue as the report was not the sole basis for a decision by the judge, a COMPAS assessment would be sufficiently individualised as courts retained the discretion and information necessary to disagree.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;By assuming that Loomis could have genuinely verified all the data collected about similarly placed groups and that judges would exercise discretion to prevent the entrenchment of inequalities through COMPAS’s decision-making patterns, the judges ignored social realities. Algorithmic decision-making systems are an extension of unequal decision-making that re-entrenches prevailing societal perceptions around identity and behaviour. An instance of discrimination cannot be looked at as a single instance but as one in a menagerie of production systems that define, modulate and regulate social existence.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The policy-making ecosystem needs, therefore, to galvanise the “transformative” vision of India’s democratic fibre and study existing systems and power structures AI could re-entrench or mitigate. For example, in the matter of bank loans there is a presumption against the credit-worthiness of those working in the informal sector. The use of aggregated decision-making may lead to more equitable outcomes given that there is concrete thought on the organisational structures making these decisions and the constitutional safeguards provided.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Most case studies on algorithmic discrimination in Virgina Eubanks’ &lt;i&gt;Automating Inequality &lt;/i&gt;or Safiya Noble’s &lt;i&gt;Algorithms of Oppression&lt;/i&gt; are based on western contexts. There is an urgent need for publicly available empirical studies on pilot cases in India to understand the contours of discrimination. Primary research questions should explore three related subjects. Are specified ostensibly neutral variables being used to exclude certain communities from accessing opportunities and resources or having a disproportionate impact on their civil liberties? Is there diversity in the identities of the coders themselves? Are the training data sets used representative and diverse and, finally, what role does data driven decision-making play in furthering the battle against embedded structural hierarchies?&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A key feature of AI-driven solutions is the “black box” that processes inputs and generates actionable outputs behind a veil of opacity to the human operator. Essentially, the black box denotes that aspect of the human neural decision-making function that has been delegated to the machine. A lack of transparency or understanding could lead to what Frank Pasquale terms a “Black Box Society” where algorithms define the trajectories of daily existence unless “the values and prerogatives of the encoded rules hidden within black boxes” are challenged.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Ex-&lt;i&gt;post facto&lt;/i&gt; assessment is often insufficient for arriving at genuine accountability. For example, the success of predictive policing in the US was drawn from the fact that police have indeed found more crimes in areas deemed “high risk”. But this assessment does not account for the fact that this is a product of a vicious cycle through which more crime is detected in an area simply because more policemen are deployed. Here, the National Strategy rightly identifies that simply opening up code may not deconstruct the black box as not all stakeholders impacted by AI solutions may understand the code. The constant aim should be explicability which means the human developer should be able to explain how certain factors may be used to arrive at a certain cluster of outcomes in a given set of situations.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The requirement of accountability stems from the Right to Life provision under Article 21. As stated in the seven-judge bench in &lt;i&gt;Maneka Gandhi vs. Union of India&lt;/i&gt;, any procedure established by law must be seen to be “fair, just and reasonable” and not “fanciful, oppressive or arbitrary.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Right to Privacy was recognised as a fundamental right by the nine-judge bench in &lt;i&gt;K.S. Puttaswamy (Retd.) vs. Union of India&lt;/i&gt;. Mass surveillance can lead to the alteration of behavioural patterns which may in turn be used for the suppression of dissent by the State. Pulling vast tracts of data on all suspected criminals—as in facial recognition systems like PAIS—create a “presumption of criminality” that can have a chilling effect on democratic values.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Therefore, any use, particularly by law enforcement would need to satisfy the requirements for infringing on the right to privacy: the existence of a law, necessity—a clearly defined state objective—and proportionality between the state object and the means used restricting fundamental rights the least. Along with centralised policy instruments such as the National Strategy, all initiatives taken in pursuance of India’s AI agenda must pay heed to the democratic virtues of privacy and free speech and their interlinkages.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;India needs a law to regulate the impact of Artificial Intelligence and enable its development without restricting fundamental rights. However, regulation should not adopt a “one-size-fits-all” approach that views all uses with the same level of rigidity. Regulatory intervention should be based on questions around power asymmetries and the likelihood of the use case adversely affronting human dignity captured by India’s constitutional ethos.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;As an aspiring leader in global discourse, India can lay the rules of the road for other emerging economies not only by incubating, innovating and implementing AI powered technologies but by grounding it in a lattice of rich constitutional jurisprudence that empowers the individual.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;The High Level Task Force on Artificial Intelligence (AI HLEG) set up by the European Commission in June 2018 published a report on “Ethical Guidelines for Trustworthy AI” earlier this year. They feature seven core requirements which include human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. While the principles are comprehensive, this document stops short of referencing any domestic or international constitutional law that helps cement these values. The Indian Constitution can help define and concretise each of these principles and could be used as a vehicle to foster genuine social inclusion and mitigation of structural injustice through AI.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;At the centre of the vision must be the inherent rights of the individual. The constitutional moment for data driven decision-making emerges therefore when we conceptualise a way through which AI can be utilised to preserve and improve the enforcement of rights while also ensuring that data does not become a further avenue for exploitation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;National vision transcends the boundaries of policy and to misuse Peter Drucker, “eats strategy for breakfast”. As an aspiring leader in global discourse, India can lay the rules of the road for other emerging economies not only by incubating, innovating and implementing AI powered technologies but by grounding it in a lattice of rich constitutional jurisprudence that empowers the individual, particularly the vulnerable in society. While the multiple policy instruments and the National Strategy are important cogs in the wheel, the long-term vision can only be framed by how the plethora of actors, interest groups and stakeholders engage with the notion of an AI-powered Indian society.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision'&gt;https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>basu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-14T13:55:59Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival">
    <title>AI for Good</title>
    <link>https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival</link>
    <description>
&lt;b&gt;CIS organised a workshop titled ‘AI for Good’ at the Unbox Festival in Bangalore from 15th to 17th February, 2019. The workshop was led by Shweta Mohandas and Saumyaa Naidu. In the hour-long workshop, the participants were asked to imagine an AI-based product to bring forward the idea of ‘AI for social good’.&lt;/b&gt;
        &lt;p&gt;The report was edited by Elonnai Hickok.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The workshop was aimed at examining the current narratives around AI and imagining how these may transform with time. It raised questions about how we can build an AI for the future, and traced the implications relating to social impact, policy, gender, design, and privacy.&lt;/p&gt;
&lt;h3&gt;Methodology&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The rationale for conducting this workshop in a design festival was to ensure a diverse mix of participants. The participants in the workshop came from varied educational and professional backgrounds who had different levels of understanding of technology. The workshop began with a discussion on the existing applications of artificial intelligence, and how people interact and engage with it on a daily basis. This was followed by an activity where the participants were provided with a form and were asked to conceptualise their own AI application which could be used for social good. The participants were asked to think about a problem that they wanted the AI application to address and think of ways in which it would solve the problem. They were also asked to mention who will use the application. It prompted participants to provide details of the AI application in terms of the form, colour, gender, visual design, and medium of interaction (voice/ text). This was intended to nudge the participants into thinking about the characteristics of the application, and how it will lend to the overall purpose. The form was structured and designed to enable participants to both describe and draw their ideas. The next section of the form gave them multiple pairs of principles. They were asked to choose one principle from each pair. These were conflicting options such as ‘Openness’ or ‘Proprietary’, and ‘Free Speech’ or ‘Moderated Speech’. The objective of this section was to illustrate how a perceived ideal AI that satisfies all stakeholders can be difficult to achieve, and that the AI developers at times may be faced with a decision between profitability and user rights.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;Participants were asked to keep their responses anonymous. These responses were then collected and discussed with the group. The activity led to the participants engaging in a discussion on the principles mentioned in the form. Questions around where the input data to train the AI would come from, or what type of data the application will collect were discussed. The responses were used to derive implications on gender, privacy, design, and accessibility.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/ConceptualiseAI.jpg" alt="Conceptualise AI" class="image-inline" title="Conceptualise AI" /&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Responses&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/Responses.jpg" alt="" class="image-inline" title="" /&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Analysis&lt;/h3&gt;
&lt;p&gt;Although the responses were varied, they shared a few key similarities.&lt;/p&gt;
&lt;h3&gt;Participants’ Familiarity with AI&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The participants’ understanding of AI was based on what they read and heard from various sources. While discussing the examples of AI, the participants were familiar with not just the physical manifestation of AI such as robots, but also AI software. However when asked to define an AI the most common explanations were, bots, software, and the use of algorithms to make decisions using large amounts of data. The participants were optimistic of the way AI could be used for social good. However, some of them showed concern about the implications on privacy.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Perception of AI Among Participants&lt;/h3&gt;
&lt;p class="Normal1"&gt;With the workshop, our aim was to have the participants reflect on their perception of AI based on their exposure to the narratives around AI by companies and the government.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The participants were given the brief to imagine an AI that could solve a problem or be used for social good. Most participants considered AI to be a positive tool for social impact. It was seen as a problem solver. The ideas conceptualised by the participants varied from countering fake news, wildlife conservation, resource distribution, and mental health. This brought to focus the range of areas that were seen as pertinent for an AI intervention. Most of the responses dealt with concerns that affect humans directly, the one aimed at wildlife conservation being the only exception.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;span&gt;On being asked, who will use the AI application, it was interesting to note that all the responses considered different stakeholders such as individuals, non profits, governments and private companies to be the end user. However, it was interesting that through the discussion the harms that might be caused by the use of AI by these stakeholders were not brought up. For example, the use of AI for resource distribution did not take into consideration the fact that the government could provide unequal distribution based on the existing biased datasets.&lt;/span&gt; &lt;a name="fr1"&gt;&lt;/a&gt; &lt;span&gt;Several of the AI applications were conceptualised to work without any human intervention. For example, one of the ideas proposed was to use AI as a mental health counsellor which was conceptualised as a chatbot that would learn more about human psychology with each interaction. It was assumed that such a service would be better than a human psychologist who can be emotionally biased. Similarly, while discussing the idea behind the use of AI for preventing the spread of fake news, the participant believed that the indication coming from an AI would have greater impact than one coming from a human. They believed that the AI could provide the correct information and prevent the spread of fake news. &lt;/span&gt;&lt;span&gt;By discussing these cases we were able to highlight that the complete reliance on technology could have severe consequences.&lt;/span&gt;&lt;a name="fr2"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Form and Visual Design of the AI Concepts&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;In most cases, the participants decided the form and visual design of their AI concepts keeping in mind its purpose. For instance, the therapy providing AI mentioned earlier, was envisioned as a textual platform, while a ‘clippy type’ add on AI tool was thought of for detecting fake news. Most participants imagined the AI application to have a software form, while the legal aid AI application was conceptualised to have a human form. This revealed that the participants perceived AI to be both a software and a physical device such as a robot.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Accessibility of the Interfaces&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The purpose of including the type of interface (voice or text) while conceptualising the AI application was to push the participants towards thinking about accessibility features. We aimed to have the participants think about the default use of the interface, both in terms of language and accessibility. The participants though cognizant of the need to have a large number of users, preferred to have only textual input into the interface, not anticipating the accessibility concerns.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The choices between access vs cost, and accessibility vs scalability were also questioned by the participants during the workshop. They enquired about the meaning of the terms as well as discussed the difficulty in having an all inclusive interface. Some of the responses consisted only of text inputs, especially for sensitive issues involving interactions, such as for therapy or helplines. This exercise made the participants think about the end user as well as the ‘AI for all’ narrative. We decided to add these questions that made the participants think about how the default ability, language, and technological capability of the user is taken for granted, and how simple features could help more people interact with the application. This discussion led to the inference that there is a need to think about accessibility by design during the creation of the application and not as an afterthought.&lt;a name="fr3"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Biases Based on Gender&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;We intended for the participants to think about the inherent biases that creep into creating an AI concept. These biases were evident from deciding identifiably male names, to deciding a male voice when the application needed to be assertive, or a female voice and name for when it was dealing with school children. Most of the other participants either did not mention the gender or they said that the AI could be gender neutral or changeable.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;These observations are also revealing of the existing narrative around AI. The popular AI interfaces have been noted to exemplify existing gender stereotypes. For example, the virtual assistants were given female identifiable names and default female voices such as Siri, Alexa, and Cortana. The more advanced AI were given male identifiable names and default male voices such as Watson, Holmes etc.&lt;a name="fr4"&gt;&lt;/a&gt; &lt;span&gt;Although these concerns have been pointed out by several researchers, there needs to be a visible shift towards moving away from existing gender biases.&lt;/span&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Concerns around Privacy&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Though the participants were aware of the privacy implications of data driven technologies, they were unsure of how their own AI concept could deal with questions of privacy. The participants voiced concerns about how they would procure the data to train the AI but were uncertain about their data processing practices. This included how they would store the data, anonymise the data, or prevent third parties from accessing it. For example, during the activity, it was pointed out to the participants that there would be sensitive data collected in applications such as therapy provision, legal aid for victims of abuse, and assistance for people with social anxiety. In these cases, the participants stated that they would ensure that the data was shared responsibly, but did not consider the potential uses or misuses of this shared data.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Choices between Principles&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;This part of the exercise was intended to familiarise the participants with certain ethical and policy questions about AI, as well as to look at the possible choices that AI developers have to make. Along with discussing the broader questions around the form and interface of AI, we wanted the participants to also look at making decisions about the way the AI would function. The intent behind this component of the exercise was to encourage the participants to question the practices of AI companies, as well as understand the implications of choices while creating an AI. As the language in this section was based on law and policy, we spent some time describing the terms to the participants. Even as some of the options presented by us were not exhaustive or absolute extremes, we placed this section to demonstrate the complexity in creating an AI that is beneficial for all. We intended for the participants to understand that an AI that is profitable to the company, free for people, accessible, privacy respecting, and open source, though desirable may be in competition with other interests such as profitability and scalability.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The participants were urged to think about how decisions regarding who can use the service, how much transparency and privacy the company will provide, are also part of building an AI. Taking an example from the responses, we talked about how having a closed proprietary software in case of AI applications such as providing legal aid to victims of abuse would deter the creation of similar applications. However, after the terms were explained, the participants mostly chose openness over proprietary software, and access over paid services.&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Conclusion&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The aim of this exercise was to understand the popular perception of AI. The participants had varied understanding of AI, but were familiar with the term. They also knew of the popular products that claim to use AI. Since the exercise was designed for people as an introduction to AI policy, we intended to keep questions around data practices out of the concept form. Eventually, with this exercise, we, along with the participants, were able to look at how popular media sells AI as an effective and cheaper solution to social issues. The exercise also allowed the participants to understand certain biases with gender, language, and ability. It also shed light on how questions of access and user rights should be placed before the creation of a technological solution. New technologies such as AI are being featured as problem solvers by companies, the media and governments. However, there is a need to also think about how these technologies can be exclusionary, misused, or how they amplify existing socio economic inequities.&lt;/p&gt;
&lt;hr /&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;span&gt;[1]. &lt;/span&gt;&lt;a class="external-link" href="https://www.bizjournals.com/sanfrancisco/news/2019/08/26/maximizing-the-potential-of-ai-starts-with-trust.html"&gt;https://www.bizjournals.com/sanfrancisco/news/2019/08/26/maximizing-the-potential-of-ai-starts-with-trust.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[2]. &lt;a class="external-link" href="https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/"&gt;https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[3]. &lt;a class="external-link" href="https://www.vox.com/the-goods/2018/11/29/18118469/instagram-accessibility-automatic-alt-text-object-recognition"&gt;https://www.vox.com/the-goods/2018/11/29/18118469/instagram-accessibility-automatic-alt-text-object-recognition&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[4]. &lt;a class="external-link" href="https://www.theguardian.com/pwc-partner-zone/2019/mar/26/why-are-virtual-assistants-always-female-gender-bias-in-ai-must-be-remedied"&gt;https://www.theguardian.com/pwc-partner-zone/2019/mar/26/why-are-virtual-assistants-always-female-gender-bias-in-ai-must-be-remedied&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival'&gt;https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas and Saumyaa Naidu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-13T05:32:28Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft">
    <title>Artificial Intelligence: a Full-Spectrum Regulatory Challenge [Working Draft]</title>
    <link>https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft</link>
    <description>
        &lt;b&gt;&lt;/b&gt;
        
&lt;p&gt;Today, there are certain misconceptions regarding the regulation of AI. Some corporations would like us to believe that AI is being developed and used in a regulatory vacuum. Others, in civil society organisations, believe that AI is a regulatory circumvention strategy deployed by corporations; as a result, these organisations call for onerous regulations targeting corporations. However, some uses of AI by corporations can be completely benign, and some uses of AI by the state can result in the most egregious human rights violations. Therefore, policy makers need to deploy every regulatory tool in their arsenal to unlock the benefits of AI and mitigate its harms.&lt;/p&gt;
&lt;p&gt;This policy brief proposes a granular, full-spectrum approach to the regulation of AI depending on who is using AI, who is impacted by that use, and which human rights are affected. Everything from deregulation, to forbearance, to updated regulations, to absolute and blanket prohibitions needs to be considered depending on the specifics. This approach stands in contrast to approaches based on ethics, omnibus law, homogeneous principles, or human rights alone, which would result in inappropriate under-regulation or over-regulation of the sector.&lt;/p&gt;
&lt;p&gt;Find a copy of the working draft &lt;a href="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft-pdf" class="internal-link" title="Artificial Intelligence: A Full-Spectrum Regulatory Challenge (Working Draft) PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft'&gt;https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>sunil</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Regulatory Practices Lab</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-08-04T06:10:13Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/responsible-ai-workshop">
    <title>Responsible AI Workshop</title>
    <link>https://cis-india.org/internet-governance/news/responsible-ai-workshop</link>
    <description>
        &lt;b&gt;Sunil Abraham participated in this meeting organized by Facebook on September 17, 2019 in New Delhi. &lt;/b&gt;
        &lt;p&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/responsible-ai"&gt;Click to view the agenda&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/responsible-ai-workshop'&gt;https://cis-india.org/internet-governance/news/responsible-ai-workshop&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-09-20T14:50:47Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today">
    <title>Talks at National University of Juridical Sciences Today</title>
    <link>https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today</link>
    <description>
        &lt;b&gt;Arindrajit Basu delivered two lectures at the National University of Juridical Sciences on September 18, 2019. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The first one was part of a symposium being conducted by the soon to be set up Intellectual Property and Technology Law Centre. I spoke on "Conceptualising India's Digital Policy Vision" The other speaker today was  Mr. Supratim Chakraborty (Partner, Khaitan&amp;amp;Co.) Tomorrow's speakers are Prof. Mahendra Kumar Bhandan and Nikhil Narendran (Partner, Trilegal)&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Abstract&lt;/b&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The past year has  seen vigorous activity on the domestic  data governance policy front in India. Across key issues including intermediary liability, data localisation and e-commerce, the government has rolled out a patchwork of regulatory policies that has resulted in battle lines being drawn by governments, industry and civil society actors both in India and across the globe. The Data Protection Bill is set to be tabled in the next session of Parliament amidst supposed disagreement among policy-makers on key provisions, including data localization. The draft e-commerce policy and Chapter 4 of the  Economic Survey refer to the concepts of ‘community data’ and ‘data as public  good’ respectively. Artifiicial Intelligence is also the new buzz word among policy-making circles and industry players alike.&lt;br /&gt;&lt;br /&gt;The implementation of each of these concepts have important implications for individual privacy, the monetisation of data by (foreign tech companies) and the harnessing of-as the e-commerce policy puts it-India’s data for India’s development. Meanwhile, at international forums such as the G20, India has partnered up with its BRICS allies to emphasize the notion of ‘data sovereignty’ or the right of each country to govern data within its jurisdiction without external interference.&lt;br /&gt;In his talk, Basu unpacked each of these policies and followed up with a discussion on what these developments meant for Indian citizens and for India’s role in the multilateral global order.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second one, on 'Constitutionalizing Artificial Intelligence', was conducted by the Constitutional Law Society. Here, I drew from some preliminary findings from a paper I am working on with Elonnai and Amber.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Abstract&lt;/b&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The use of big data and algorithmic decision-making has been touted the world over as a means of augmenting human capacities, removing bureaucratic fetters and benefiting society. Yet, with concerns arising around bias, fairness and a lack of algorithmic accountability, an entirely new domain of discourse on data justice has emerged, underscoring the idea that algorithms not only have the potential to exacerbate entrenched structural inequality but could also create and modulate new forms of injustice for the vulnerable sections of society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;There is a need for a reflexive turn in the debate on data justice that adequately considers the broader narrative and entrenched inequality in the ecosystem. &lt;/span&gt;&lt;span&gt;Transformative constitutionalism is a new brand of scholarship in comparative constitutional law which celebrates the crucial role of the state and the judiciary in bringing about emancipatory change and rooting out structural inequality.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Though transformative constitutionalism was originally conceptualized in the Global South as a counter-model to the individual rights-driven model of Northern constitutions, scholars have now identified emancipatory provisions in several Western constitutions, such as Germany’s. India’s constitution is one such example. The constitutional order in India was designed to “bring the alien and powerful machine like that of the state under the control of human will” and to eliminate the inequality of “status, facilities and opportunities.”&lt;br /&gt;&lt;br /&gt;What is the relevance of India's constitutional ethos in the regulation of modern-day data-driven decision-making? How can policy-makers use constitutional tenets to mitigate structural injustice and transform the bearings of 21st-century Indian society?&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today'&gt;https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Industry 4.0</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-09-20T14:45:35Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/ai-in-healthcare">
    <title>AI in Healthcare</title>
    <link>https://cis-india.org/internet-governance/news/ai-in-healthcare</link>
    <description>
        &lt;b&gt;The Center for Information Technology and Public Policy (CITAPP) and the International Institute of Information Technology Bangalore (IIITB) invited Radhika Radhakrishnan for a talk at IIIT-Bangalore on September 13, 2019. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;In her talk, she critically questioned, from a feminist standpoint, the dominant narrative of “AI for social good” that has been widely adopted by various stakeholders in India (including the private sector, non-profits, and the Indian State). Specific to healthcare in India, such a narrative has been employed towards solving development challenges (such as a shortage of medical practitioners in remote regions of the country) through the introduction of AI applications targeted towards the sick-poor. Through her research and fieldwork, she analysed the layers of expropriation and experimentation that come into play when AI technologies become a method of using 'diverse' bodies and medical records of the sick-poor as ‘data’ to train proprietary AI algorithms at a low cost in the absence of effective State regulatory mechanisms. She argued that structural challenges (such as a lack of incentives for medical practitioners to join public healthcare) get reframed into opportunities to substitute labour (people) with capital (technology) through the innovation of “spectacular technologies” such as AI. Throughout the talk, she also highlighted the methodologies she used to conduct this research.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/ai-in-healthcare'&gt;https://cis-india.org/internet-governance/news/ai-in-healthcare&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Industry 4.0</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-09-19T16:15:24Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/policies-for-the-platform-economy">
    <title>Policies for the Platform Economy</title>
    <link>https://cis-india.org/internet-governance/news/policies-for-the-platform-economy</link>
    <description>
        &lt;b&gt;Anubha Sinha and Amber Sinha will be panelists in this event being organized by IT for Change at India Habitat Centre in New Delhi on August 30, 2019. &lt;/b&gt;
        &lt;p&gt;The agenda for the event &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/agenda-for-policies-for-the-platform-economy"&gt;is here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/policies-for-the-platform-economy'&gt;https://cis-india.org/internet-governance/news/policies-for-the-platform-economy&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-08-27T00:19:26Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
