<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">




    



<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 21 to 35.</description>
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        <rdf:li rdf:resource="https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/owasp-seasides-conference"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/big-data-in-india-benefits-harms-and-human-rights-a-report"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific"/>
        <rdf:li rdf:resource="https://cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/events/roundtable-on-ai-and-manufacturing-and-services"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/events/roundtable-on-ai-and-finance-in-india"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/unescap-google-ai-meeting"/>
        <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/confidentiality-of-communications-and-privacy-of-data-in-the-digital-age"/>
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond">
    <title>Fostering Strategic Convergence in US-India Tech Relations: 5G and Beyond</title>
    <link>https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond</link>
    <description>
        &lt;b&gt;The 2019 G-20 summit underscores the importance of fostering strategic convergence in U.S.-India tech relations.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Justin Sherman and Arindrajit Basu was &lt;a class="external-link" href="https://thediplomat.com/2019/07/fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond/"&gt;published in the Diplomat&lt;/a&gt; on July 3, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;As world leaders gathered for the G-20 summit in Osaka, Japan this past weekend, a multitude of issues from climate to trade to technology came to the fore. Much of the focus was on U.S.-China interactions at the summit, as the two nations are  locked in both a trade war and broader technological and geopolitical competition. Despite the present focus on the U.S. and China, however, it is crucial to not overlook another bilateral relationship of ever-growing importance in the process: The tech relationship between the United States and India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Certainly, the two countries have many disagreements on some technology issues. But this is a geopolitical relationship that is both strategically important for each country, and a vital opportunity for the two largest democracies in the world to collectively combat Chinese-style digital authoritarianism.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Huawei and 5G&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;First, with respect to national security and 5G roll-outs, the U.S and India are not on the same page. The United States, for several months now, has been on a &lt;a href="https://www.lawfareblog.com/confused-us-messaging-campaign-huawei" target="_blank"&gt;diplomatic messaging tour&lt;/a&gt; of the world to try to convince — with great resistance (some would argue failure) — allies, partners, and potential partners alike to ban Chinese firm Huawei from supplying components of 5G networks. Many officials across Europe, the Middle East, South America, and elsewhere have been reluctant to ban Huawei per the U.S. recommendation, and India is no exception. Indeed, National Security Advisory Board Chairman P.S. Raghavan &lt;a href="https://www.thehindu.com/news/national/on-5g-and-data-india-stands-with-developing-world-not-us-japan-at-g20/article28207169.ece/amp/?__twitter_impression=true" target="_blank"&gt;told&lt;/a&gt; &lt;em&gt;The Hindu&lt;/em&gt; that “5G is becoming a fault line in the technology cold war between world powers” and that India must avoid getting caught in these fault lines.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In large part, U.S. diplomatic messaging here has fallen short due to &lt;a href="https://www.lawfareblog.com/confused-us-messaging-campaign-huawei" target="_blank"&gt;heavy conflations&lt;/a&gt; of national security- and trade-related risks; and Trump only contributed further to this fact with his latest &lt;a href="https://twitter.com/JenniferJJacobs/status/1145072073800183808" target="_blank"&gt;reference&lt;/a&gt; to Huawei, during the G-20, as a potential trade war bargaining chip. The sheer population of India, however, combined with its fast growing technology sectors and &lt;a href="http://www.cmai.asia/digitalindia/" target="_blank"&gt;desire to digitize&lt;/a&gt;, makes the country an important market player when it comes to the 5G revolution. U.S.-India engagement on 5G issues must be managed effectively through robust articulation of each country’s national interests underscored by a clean segregation of trade and security questions in the discussion. This partnership has the potential to wield great influence in the global market, including in ways that could prioritize or deprioritize certain 5G equipment suppliers (like Huawei).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Data Sovereignty and Data Privacy&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Data sovereignty is another hot area in which the U.S.-India tech relationship demands careful negotiation. Over the past year, the Indian government has &lt;a href="https://twitter.com/cis_india/status/1143096429298085889" target="_blank"&gt;introduced a range of policy instruments&lt;/a&gt; which dictate that certain kinds of data must be stored in servers located physically within India — termed “&lt;a href="https://cis-india.org/internet-governance/resources/the-localisation-gambit.pdf" target="_blank"&gt;data localization&lt;/a&gt;.” While there are &lt;a href="https://cis-india.org/internet-governance/resources/the-localisation-gambit.pdf" target="_blank"&gt;a number of policy objectives&lt;/a&gt; this gambit ostensibly seeks to serve, the two which stand out are (1) the presently cumbersome process for Indian law enforcement agencies to access data stored in the U.S. during criminal investigations, and (2) extractive economic models used by U.S. companies operating in India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A range of conflicting developments emerging from the G-20 summit underscore this fact. India, along with the BRICS grouping, &lt;a href="https://mea.gov.in/bilateral-documents.htm?dtl/31506/Joint+Statement+on+BRICS+Leaders+Informal+Meeting+on+the+margins+of+G20+Summit" target="_blank"&gt;focused&lt;/a&gt; on the development dimensions of data governance and re-emphasized the need for &lt;a href="https://www.youtube.com/watch?v=0a8YsZQ0F6k&amp;amp;feature=youtu.be" target="_blank"&gt;data sovereignty&lt;/a&gt; — broadly understood as the sovereign right of nations to govern data in their national interest for the welfare of their citizens. President Trump &lt;a href="https://www.whitehouse.gov/briefings-statements/remarks-president-trump-g20-leaders-special-event-digital-economy-osaka-japan/" target="_blank"&gt;reigned in his focus&lt;/a&gt; on the need for cross-border data flows and, in direct opposition to some proposals that have emerged from India, explicitly opposed data localization. While India did not sign the &lt;a href="https://www.international.gc.ca/world-monde/international_relations-relations_internationales/g20/2019-06-29-g20_declaration-declaration_g20.aspx?lang=eng" target="_blank"&gt;Osaka Declaration on the Digital Economy&lt;/a&gt; that promoted cross-border data flows, the importance of cross-border data flows in spurring the global economy did find its way into the &lt;a href="https://g20.org/pdf/documents/en/FINAL_G20_Osaka_Leaders_Declaration.pdf" target="_blank"&gt;Final G-20 Leaders Declaration&lt;/a&gt; — which, of course, both countries signed.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Geopolitically, the importance of India’s data governance stance cannot be overstated as it could pave the way for the approach adopted by other emerging economies — most notably the BRICS countries. Likewise, the U.S. has important thinking to do around such questions as what shape a national data privacy law could take. Even though the two countries’ views on data may be quite different from one another, the seats that India and the U.S. have at the table for &lt;a href="https://www.theatlantic.com/international/archive/2019/06/g20-data/592606/" target="_blank"&gt;global data governance discussions&lt;/a&gt; — alongside others like Japan, China, and the European Union — underscore the value of meaningful interactions and mutual trust and respect on this issue.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Norms for a Democratic Digital Future&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Finally, as the &lt;a href="https://www.un.org/disarmament/ict-security/" target="_blank"&gt;United Nations Group of Governmental Experts&lt;/a&gt; and the &lt;a href="https://www.un.org/disarmament/open-ended-working-group/" target="_blank"&gt;Open-Ended Working Group&lt;/a&gt; meet to resurrect the norm-formulation process for fostering responsible state behavior in cyberspace, India has some homework to do.  Even though it has been a member of five out of the six Group of Governmental Experts set up thus far, India is yet to come out with a public statement delineating its views on the applicability of International Law applies in cyberspace. Further, India has also failed to articulate a cohesive digital strategy — instead relying on a patchwork of hastily rolled out and often ill-conceived regulatory policies, some of which commentators in the West &lt;a href="https://www.nytimes.com/2019/02/14/technology/india-internet-censorship.html" target="_blank"&gt;have hastily labeled&lt;/a&gt; as digital authoritarianism. The U.S., for its part, amidst a &lt;a href="https://www.newamerica.org/cybersecurity-initiative/c2b/c2b-log/four-opportunities-for-states-new-cyber-bureau/" target="_blank"&gt;cutback&lt;/a&gt; to diplomatic cyber engagement (as part of cutbacks to diplomacy writ large), could also up its support of international engagement on these issues. Its recent repeal of net neutrality protections could also be argued as a step back from long-time international &lt;a href="https://d1y8sb8igg2f8e.cloudfront.net/documents/The_Idealized_Internet_vs._Internet_Realities_Version_1.0_2018-07-25_203930.pdf" target="_blank"&gt;norm promotion&lt;/a&gt; around internet openness.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Through a combination of domestic policy gambits and foreign policy maneuvers, both states need to draw lines in the sand that safeguard human rights, international law, and democracy online, while arriving at some balance with each other’s national interests.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A primary example lies with artificial intelligence (AI). AI has found increasing use in digital authoritarianism, as dictators use automated, intelligent systems to boost their surveillance capabilities. The Chinese government has arguably been at the &lt;a href="https://freedomhouse.org/report/freedom-net/freedom-net-2018" target="_blank"&gt;forefront&lt;/a&gt; of this enhanced level of authoritarian rule for the digital age.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In addition to &lt;a href="https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinese-government-outlines-ai-ambitions-through-2020/" target="_blank"&gt;focusing&lt;/a&gt; on AI applications for everything from natural language processing to self-driving cars — through investments, strategies, policy documents, and so on — Beijing has also been &lt;a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html" target="_blank"&gt;deploying&lt;/a&gt; AI in the service of large-scale human-rights abuses. Chinese strategy papers on AI, while similarly emphasizing many commercial or benign applications and raising attention to such issues as algorithmic fairness, concurrently have &lt;a href="https://www.newamerica.org/cybersecurity-initiative/digichina/blog/online-symposium-chinese-thinking-ai-security-comparative-context/" target="_blank"&gt;discussed&lt;/a&gt; using AI for “social governance,” censorship, and surveillance. To combat the rising intersection of AI and digital authoritarianism, the U.S. and India could wield enormous leverage — as the two largest democracies in the world — in governing these technologies in a democratic fashion that counters &lt;a href="https://www.newamerica.org/cybersecurity-initiative/reports/essay-reframing-the-us-china-ai-arms-race/" target="_blank"&gt;dangerous arms-race narratives&lt;/a&gt; and uses of AI for surveillance and repression.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The same goes for paying attention to technology exports and diffusion to human-rights abusers. For instance, companies incorporated in China, among those incorporated elsewhere, have been &lt;a href="https://www.cfr.org/blog/authoritarians-are-exporting-surveillance-tech-and-it-their-vision-internet" target="_blank"&gt;heavily involved&lt;/a&gt; in exports of dual-use surveillance technologies to other countries, including those with questionable or outright poor human-rights records. Although companies incorporated in democracies may engage in such practices as well, most democracies take steps to curtail these practices as much as possible, such as through the multilateral Wassenaar Arrangement — which lays out export controls around conventional weapons and dual-use goods and technologies. The U.S. has long been a party to this agreement, and India &lt;a href="https://economictimes.indiatimes.com/news/defence/wassenaar-arrangement-decides-to-make-india-its-member/articleshow/61975192.cms?from=mdr" target="_blank"&gt;officially joined&lt;/a&gt; in 2018. Arguments persist about the extent to which Beijing is involved in these dual-use surveillance technology exports, but these exports may only increase going forward as companies &lt;a href="https://www.newamerica.org/weekly/edition-254/long-view-digital-authoritarianism/" target="_blank"&gt;increasingly&lt;/a&gt; sell not just internet surveillance tools but also dual-use AI tools. In this way, too, India and the U.S. could play an important role in countering the spread of such capabilities to human-rights abusers and standing against the spread of digital authoritarianism in the process.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The relationship here is, therefore, one that requires careful navigation for its significant geopolitical, economic, and ideological consequences. For the future of the technological relationship between the world’s largest democracies—and the extent to which they respect each other’s strategic autonomy while converging on issues of mutual interest—could determine the future of global digital governance.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond'&gt;https://cis-india.org/telecom/blog/the-diplomat-justin-sherman-and-arindrajit-basu-july-3-2019-fostering-strategic-convergence-in-us-india-tech-relations-5g-and-beyond&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Justin Sherman and Arindrajit Basu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Telecom</dc:subject>
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-05T02:19:09Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development">
    <title>The Centre for Internet and Society’s comments and recommendations to the: Report on AI Governance Guidelines Development</title>
    <link>https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development</link>
    <description>
        &lt;b&gt;The Centre for Internet &amp; Society (CIS) submitted its comments and recommendations on the Report on AI Governance Guidelines Development.&lt;/b&gt;
        
&lt;p&gt;With research assistance by Anuj Singh&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;I. Background&lt;/h2&gt;
&lt;p&gt;On 6 January 2025, a Subcommittee on ‘AI Governance and Guidelines Development’ under the Advisory Group put out the Report on AI Governance Guidelines Development, which advocated for a whole-of-government approach to AI governance. This subcommittee was constituted by the Ministry of Electronics and Information Technology (MeitY) on November 9, 2023, to analyse gaps and offer recommendations for developing a comprehensive framework for the governance of Artificial Intelligence (AI). As various AI governance conversations take centre stage, this is a welcome step, and we hope that there are more opportunities through public comments and consultations to improve on this important AI document.&lt;br /&gt;&lt;br /&gt;CIS’s comments are in line with the submission guidelines; we have provided both comments and suggestions based on the headings and text provided in the report.&lt;/p&gt;
&lt;h2&gt;II. Governance of AI&lt;/h2&gt;
&lt;p&gt;The subcommittee report has explained its reasons for staying away from a definition. However, it would be helpful to set the scope of AI at the outset of the report, given that different AI systems have different roles and functionalities. Having a clearer framework at the beginning can help readers better understand the scope of the conversation in the report. This section also states that AI can now “perform complex tasks without active human control or supervision”. While there are instances where AI is being used without active human control, there is a need to emphasise the need for humans in the loop. This has also been highlighted in the &lt;a href="https://oecd.ai/en/dashboards/ai-principles/P6"&gt;OECD AI principles&lt;/a&gt;, from which this report draws inspiration.&lt;/p&gt;
&lt;h3&gt;A. AI Governance Principles&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;A proposed list of AI Governance principles (with their explanations) is given below.&lt;/strong&gt;&lt;br /&gt;While referring to the OECD AI principles is a good first step in understanding global best practices, it is suggested that an exercise in mapping all global AI principles documents published by international and multilateral organisations and civil society be undertaken, to determine the principles that are most important for India. The OECD AI principles also come from regions with better internet penetration and higher literacy rates than India; for those regions, the principle of “digital by design governance” may be achievable, but in India a digital-first approach, especially in governance, could lead to large-scale exclusion.&lt;/p&gt;
&lt;h3&gt;B. Considerations to operationalise the principles&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;1. Examining AI systems using a lifecycle approach &lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The subcommittee has taken a novel approach to defining the AI lifecycle. The terms “Development, Deployment and Diffusion” have not been seen in any of the major publications about the AI lifecycle. While academics (e.g. &lt;a href="https://www.sciencedirect.com/org/science/article/pii/S1438887123002224"&gt;Chen et al. (2023)&lt;/a&gt;, &lt;a href="https://www.cell.com/patterns/pdfExtended/S2666-3899(22)00074-5"&gt;De Silva and Alahakoon (2022)&lt;/a&gt;) have pointed out that the AI lifecycle contains the stages of design, development and deployment, others (&lt;a href="https://www.sciencedirect.com/science/article/pii/S2666389922000745"&gt;Ng et al. (2022)&lt;/a&gt;) have defined it as “data creation, data acquisition, model development, model evaluation and model deployment”. Even NASSCOM’s &lt;a href="https://nasscom.in/ai/pdf/the-developer%27s-playbook-for-responsible-ai-in-india.pdf"&gt;Responsible AI Playbook&lt;/a&gt; follows “conception, designing, development and deployment” as some of the key stages in the AI lifecycle. Similarly, the OECD recognises “i) ‘design, data and models’; ii) ‘verification and validation’; iii) ‘deployment’; and iv) ‘operation and monitoring’” as the phases of the AI lifecycle. The subcommittee could hence provide a citation as well as a justification for this novel approach to the AI lifecycle, and state its reason for moving away from the recognised stages. Steering away from an understood approach could cause confusion among stakeholders who may not be well versed in AI terminology and the AI lifecycle to begin with.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2. Taking an ecosystem-view of AI actors &lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While the report rightly states that multiple actors are involved across the AI lifecycle, it is also important to note that the same actor could be involved in multiple stages of the AI lifecycle. For example, in the case of an AI app used for disease diagnosis, the medical professional can be the data principal (using their own data), the data provider (using the app, thereby providing the data), and the end user (someone who is using the app for diagnosis). Similarly, a government body can be the data provider, the developer (if the system is made in-house or outsourced through tenders), the deployer, as well as the end user. Hence, for each AI application there might be multiple actors who play different roles, and those roles might not be static.&lt;br /&gt;&lt;br /&gt;When looking at governance approaches, the approach must ideally not be limited to responsibilities and liabilities, especially when the “data principal” and individual end users are highlighted as actors; it should also include rights and means of redressal in order to be a rights-based, people-centric approach to AI governance.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3. Leveraging technology for governance &lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While the use of a techno-legal approach in governance is picking up speed, there is a need to examine existing Central and State capacity to undertake it, and to consider the ways it could affect people who still do not have access to the internet. One example of a techno-legal approach that has seen some success is the &lt;a href="https://www.techinasia.com/indian-state-running-pilot-put-land-records-blockchain"&gt;Bhumi programme&lt;/a&gt; in Andhra Pradesh, which used blockchain for land records; however, it also led to the weakening of local institutions and to the exclusion of marginalised people (&lt;a href="https://www.tandfonline.com/doi/full/10.1080/01436597.2021.2013116"&gt;Kshetri, 2021&lt;/a&gt;). It was also stated that there was a need to strengthen existing institutions before using a technological measure.&lt;br /&gt;&lt;br /&gt;Secondly, while the subcommittee has emphasised the improvements in the quality of generative AI tools, there is a need to assess how these tools work for Indian use cases. It was reported last year that ChatGPT could not answer all the questions relating to the Indian civil services exam, and failed to correctly answer questions on geography, even though it was able to crack &lt;a href="https://indiaai.gov.in/news/chatgpt-fails-to-clear-the-prestigious-civil-service-examination"&gt;tough exams in the USA&lt;/a&gt;. In addition, a month ago the Finance Ministry advised government officials to refrain from using generative AI tools on official devices for fear of leakage of &lt;a href="https://www.thehindu.com/sci-tech/technology/indias-finance-ministry-asks-employees-to-avoid-ai-tools-like-chatgpt-deepseek/article69183180.ece"&gt;confidential information&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Thirdly, the subcommittee needs to assess India’s data preparedness for this scale of techno-legal approach. In our study on healthcare and AI in India, in which we surveyed medical professionals, hospitals, and technology companies, a common understanding was that data quality in Indian datasets was an issue, and that there was some reliance on data from the global north. This could be similar in other sectors as well; hence, when such data is used to train a system, it could lead to harms and biases.&lt;/p&gt;
&lt;h2&gt;III. GAP ANALYSIS&lt;/h2&gt;
&lt;h3&gt;A. The need to enable effective compliance and enforcement of existing laws.&lt;/h3&gt;
&lt;p&gt;The sub-committee has highlighted the importance of ensuring that the growth of AI does not lead to unfair trade practices and market dominance. It is hence important to analyse whether the existing laws on antitrust and competition, and the regulatory capacity of the Competition Commission of India, are robust enough to deal with AI and with the change in technology and technology developers.&lt;/p&gt;
&lt;p&gt;There is also an urgent need to assess the issues that might come under the ambit of competition throughout the lifecycle of AI, including in areas of chip manufacturing, compute, data, models and IP. While the players could keep changing in this evolving area of technology there is a need to strengthen the existing regulatory system, before looking at techno legal measures.&lt;/p&gt;
&lt;p&gt;We suggest that before a techno-legal approach is sought in all forms of governance, there is an urgent need to map the existing regulations, both central and state, assess how they apply to regulating AI, and assess the capacity of existing regulatory bodies to regulate issues of AI. In the case of healthcare, for example, there are multiple laws, policies, and guidelines, as well as regulatory bodies, that apply to various stages of healthcare and various actors; at times these regulations do not refer to each other, or they create duplications that could lead to a &lt;a href="https://www.kas.de/documents/d/politikdialog-asien/panorama_2024-1-107-122"&gt;lack of clarity&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Below we add our comments and suggestions on certain subsections of this section on &lt;strong&gt;the need to enable effective compliance and enforcement of existing laws&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;1. Intellectual property rights&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;a. Training models on copyrighted data and liability in case of&amp;nbsp; infringement&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While Section 14 of the Indian Copyright Act, 1957 provides copyright holders with exclusive rights to copy and store works, considering the fact that training AI models involves making &lt;a href="https://spicyip.com/2019/08/should-indian-copyright-law-prevent-text-and-data-mining.html"&gt;non-expressive uses of work&lt;/a&gt;, a straightforward conclusion may not be drawn easily. Hence, the presumption that training models on copyrighted data constitutes infringement is premature and unfounded.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This report states “The Indian law permits a very closed list of activities in using copyrighted data&amp;nbsp; without permission that do not constitute an infringement. Accordingly, it is clear&amp;nbsp; that the scope of the exception under Section 52(1)(a)(i) of the Copyright Act,&amp;nbsp; 1957 is extremely narrow. Commercial research is not exempted; not-for-profit &lt;sup&gt;10&lt;/sup&gt; institutional research is not exempted. Not-for-profit research for personal or private use, not with the intention of gaining profit and which does not compete&amp;nbsp; with the existing copyrighted work is exempted. “ &lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Indian copyright law follows a ‘hybrid’ model of limitations and exceptions under s. 52(1). S. 52(1)(a), the ‘fair dealing’ provision, is more open-ended than the rest of the clauses in the section. Specifically, the Indian fair dealing provision permits fair dealing with any work (not being a computer programme) for the purposes of private or personal use, including research.&lt;br /&gt;&lt;br /&gt;If India is keen on indigenous AI development, specifically as it relates to foundation models, it should work towards developing frameworks for suitable exceptions, as may be appropriate. Lawmakers could distinguish between the different types of copyrighted works and public-interest purposes while considering the issue of infringement and liability.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;b. Copyrightability of work generated by using foundation models &lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We suggest that a public consultation would certainly be a useful exercise in ensuring opinions and issues of all stakeholders including copyright holders, authors, and users are taken into account.&lt;/p&gt;
&lt;h3&gt;C. The need for a whole-of-government approach.&lt;/h3&gt;
&lt;p&gt;While information existing in silos is a significant issue and roadblock, if the many existing guidelines and principles have taught us anything, it is that without specificity and direct applicability it is difficult for implementers to extrapolate principles into their development, deployment, and governance mechanisms. The committee assumes a sectoral understanding from the government of the various players in highly regulated sectors such as healthcare or financial services. However, as our recent study on &lt;a href="https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india"&gt;AI in healthcare&lt;/a&gt; indicates, there are significant information gaps when it comes to a shared understanding of what data is being used for AI development, where AI models are being developed, and what kinds of partnerships are being entered into for the development and deployment of AI systems. While the report also highlights concerns about the siloed regulatory framework, it is important to consider how sector-specific challenges lend themselves to the cross-sectoral discussion. Consider, for instance, an AI credit scoring system in financial services that leads to exclusion errors.&lt;/p&gt;
&lt;p&gt;Additionally, consider an AI system being deployed for disease diagnosis. While both use predictive AI, the nature of the risks and harms differs. While there can be common, broad frameworks to test the efficacy of both AI models, the exact parameters for testing each would have to be unique. It will therefore be important to consider where bringing together cross-sectoral stakeholders will be useful and where deeper work is needed at the sector level.&lt;/p&gt;
&lt;h2&gt;IV. Recommendations&lt;/h2&gt;
&lt;h3&gt;1. To implement a whole-of-government approach to AI Governance, MeitY and the Principal Scientific Adviser should establish an empowered mechanism to coordinate AI Governance.&lt;/h3&gt;
&lt;p&gt;We would like to reiterate the point made in the earlier section: it is important to consider how sector-specific challenges lend themselves to cross-sectoral discussion. While a whole-of-government approach is welcome, as it will help build a common understanding between different government institutions, it might not be sufficient for AI governance, because it rests on the implicit assumption that internal coordination among various government bodies is enough to manage AI-related risks.&lt;/p&gt;
&lt;h3&gt;2. To develop a systems-level understanding of India’s AI ecosystem, MeitY should establish, and administratively house, a Technical Secretariat to serve as a technical advisory body and coordination focal point for the Committee/Group.&lt;/h3&gt;
&lt;p&gt;The Subcommittee report states that, at this stage, it is not recommended to establish the Committee/Group or its Secretariat as statutory authorities, as such a decision requires significant analysis of gaps, requirements, and possible unintended outcomes. While these are valid considerations, adequate checks and balances must be in place. If the Secretariat is placed within MeitY, then safeguards must ensure that officials have autonomy in decision-making. The subcommittee suggests that MeitY can bring in officials on deputation from other departments. It similarly proposes bringing in experts from industry; while this is important for informed policymaking, it also carries a risk of &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4931927"&gt;regulatory capture&lt;/a&gt;. Setting a cap on the percentage of industry representatives and requiring full disclosure of the affiliations of the experts involved are some safeguards that can be considered. We also suggest that members of civil society be considered for this Secretariat.&lt;/p&gt;
&lt;h3&gt;3. To build evidence on actual risks and to inform harm mitigation, the Technical Secretariat should establish, house, and operate an AI incident database as a repository of problems experienced in the real world that should guide responses to mitigate or avoid repeated bad outcomes.&lt;/h3&gt;
&lt;p&gt;The report suggests that the Technical Secretariat will build a record of actual incidents of AI-related risks in India. In most instances, an AI incident database assumes that an unfavourable AI-related incident has already taken place, which implies that it is no longer a potential risk but an actual harm. This recommendation thus takes a post-facto approach to assessing AI systems, as opposed to conducting risk assessments prior to the actual deployment of an AI system. Further, it lays emphasis on receiving reports from public sector organisations deploying AI systems. Given that public sector organisations would, in many cases, be the deployers of AI systems rather than the developers, they may have limited knowledge of the functionality of these tools and, therefore, of their risks and harms.&lt;/p&gt;
&lt;p&gt;It is important to clarify and define what will be considered an AI risk, as this can depend on the stakeholder: for a company, losing clients because of an AI system is a risk; for an individual, so is being denied health insurance because of AI bias. With this understanding, while there is a need for active monitoring of existing and newly emerging risks, the Technical Secretariat could also begin by mapping the risks already highlighted by academia, civil society, and international organisations, and seed the database with those. In addition, the “AI incident database” should be open to research institutions and civil society organisations, similar to &lt;a href="https://oecd.ai/en/incidents"&gt;The OECD AI Incidents Monitor&lt;/a&gt;.&lt;/p&gt;
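As a purely illustrative sketch, a single entry in such an incident database might look like the record below. All field names are our own assumptions for illustration; they are not taken from the Subcommittee report or from the OECD AI Incidents Monitor.

```python
# Hypothetical sketch of one entry in an AI incident database.
# Field names are illustrative assumptions only, loosely inspired by
# public incident monitors; they do not describe any actual system.
from dataclasses import dataclass, field, asdict

@dataclass
class AIIncident:
    incident_id: str
    sector: str            # e.g. "healthcare", "financial services"
    system_role: str       # whether the reporter developed or deployed the system
    harm_type: str         # e.g. "exclusion error", "biased denial of service"
    affected_party: str    # e.g. "individual", "company", "public body"
    description: str
    reported_by: str       # e.g. "public sector body", "civil society org"
    pre_deployment_assessment_done: bool = False
    sources: list = field(default_factory=list)

# Example record: the credit-scoring exclusion error discussed above.
incident = AIIncident(
    incident_id="IN-2025-0001",
    sector="financial services",
    system_role="deployer",
    harm_type="exclusion error",
    affected_party="individual",
    description="Credit-scoring system wrongly excluded eligible applicants.",
    reported_by="civil society org",
)

record = asdict(incident)  # plain dict, ready for storage or export
```

Distinguishing fields such as `affected_party` and `reported_by` reflects the point above: what counts as a "risk" differs by stakeholder, so the schema itself should capture whose harm is being recorded.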
&lt;h3&gt;4. To enhance transparency and governance across the AI ecosystem, the Technical Secretariat should engage industry to drive voluntary commitments on transparency across the overall AI ecosystem and on baseline commitments for high-capability/widely deployed systems.&lt;/h3&gt;
&lt;p&gt;It is commendable that the subcommittee extends the transparency requirement to the government, with the example of law enforcement. This would create more trust in these systems and also place responsibility on the companies providing these services to comply with existing laws and regulations.&lt;/p&gt;
&lt;p&gt;While the transparency measures listed will ensure a better understanding of the processes of AI developers and deployers, there is also a need to bring in responsibility alongside transparency. While the report mentions ‘peer review by third parties’, we would also like to suggest auditing as a mechanism for both transparency and responsibility. Our study on &lt;a href="https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india-pdf"&gt;AI data supply chain &amp;amp; auditability and healthcare in India&lt;/a&gt;, which surveyed 150 medical professionals, 175 respondents from healthcare institutions, and 175 respondents from technology companies, revealed that 77 per cent of the healthcare institutions and 64 per cent of the technology companies surveyed conducted audits or evaluations of their data privacy and security measures.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://cis-india.org/home-images/AIGovernanceComments.png" alt="null" class="image-inline" title="AI Governance Comments" /&gt;&lt;/p&gt;
&lt;div class="visualClear"&gt;Source: CIS survey of professionals in AI and healthcare, January- April 2024. Medical professionals (n = 133); healthcare institutions (n = 162); technology companies (n = 171)&lt;/div&gt;
&lt;div class="visualClear"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h3&gt;5. Form a sub-group to work with MeitY to suggest specific measures that may be considered under proposed legislation like the Digital India Act (DIA) to strengthen and harmonise the legal framework, regulatory and technical capacity, and the adjudicatory set-up for the digital industries, to ensure effective grievance redressal and ease of doing business.&lt;/h3&gt;
&lt;p&gt;It would be necessary to provide some clarity on where the Digital India Act currently stands. While there were public consultations in 2023, there has been no news of progress in the development of the Act. The most recent discussion was in January 2025, when S Krishnan, Secretary, Ministry of Electronics and IT (MeitY), &lt;a href="https://www.financialexpress.com/life/technology-will-not-rush-in-bringing-digital-india-act-meity-secretary-3708673/"&gt;stated&lt;/a&gt; that the ministry was in no hurry to carry forward the draft Digital India Act or a regulatory framework around AI, and that existing legal frameworks were sufficient to handle AI intermediaries. &lt;br /&gt; &lt;br /&gt; We would also like to highlight that, during the consultations on the DIA, it was proposed that the Act replace the &lt;a href="https://vidhilegalpolicy.in/blog/explained-the-digital-india-act-2023/"&gt;Information Technology Act 2000&lt;/a&gt;. The subcommittee should provide clarity on this, since if the DIA is enacted, the report’s Section III on gap analysis, especially around the IT Act and cybersecurity, will need to be revisited.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development'&gt;https://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-report-on-ai-governance-guidelines-development&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas, Amrita Sengupta and Anubha Sinha</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2025-03-06T06:32:45Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency">
    <title>Towards Algorithmic Transparency</title>
    <link>https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency</link>
    <description>
        &lt;b&gt;This policy brief examines the issue of transparency as a key ethical component in the development, deployment, and use of Artificial Intelligence.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;This brief proposes a framework that seeks to overcome the challenges in preserving transparency when dealing with machine learning algorithms, and suggests solutions such as the incorporation of audits, and ex ante approaches to building interpretable models right from the design stage. Read the full report &lt;a href="https://cis-india.org/internet-governance/algorithmic-transparency-pdf" class="internal-link" title="Algorithmic Transparency PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The Regulatory Practices Lab at CIS aims to produce regulatory policy 
suggestions focused on India, but with global application, in an agile 
and targeted manner and to promote transparency around practices 
affecting digital rights. &lt;br /&gt;The Regulatory Practices Lab is supported by Google and Facebook.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency'&gt;https://cis-india.org/internet-governance/blog/towards-algorithmic-transparency&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Radhika Radhakrishnan, and Amber Sinha</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Regulatory Practices Lab</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Featured</dc:subject>
    
    
        <dc:subject>Algorithms</dc:subject>
    
    
        <dc:subject>internet governance</dc:subject>
    
    
        <dc:subject>Transparency</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-07-15T13:16:44Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/owasp-seasides-conference">
    <title>OWASP Seasides Conference</title>
    <link>https://cis-india.org/internet-governance/news/owasp-seasides-conference</link>
    <description>
        &lt;b&gt;Karan Saini attended the OWASP Seasides security conference held on February 27 and 28, 2019 at Cavelossim, Goa. The event was organized by OWASP Seasides.&lt;/b&gt;
        &lt;p&gt;For conference details &lt;a class="external-link" href="https://www.owaspseasides.com/schedule/workshops"&gt;click here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/owasp-seasides-conference'&gt;https://cis-india.org/internet-governance/news/owasp-seasides-conference&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-03-07T23:53:47Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india">
    <title> AI for Healthcare: Understanding Data Supply Chain and Auditability in India </title>
    <link>https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india</link>
    <description>
&lt;b&gt;This report aims to understand the prevalence and use of AI auditing practices in the healthcare sector. By mapping the data supply chain underlying AI technologies, the study aims to unpack i) how AI systems are developed and deployed to achieve healthcare outcomes, and ii) how AI audits are perceived and implemented by key stakeholders in the healthcare ecosystem.&lt;/b&gt;
        
&lt;p dir="ltr"&gt;Read our full report &lt;a href="https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india-pdf" class="internal-link" title="AI for Healthcare: Understanding Data Supply Chain and Auditability in India PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p dir="ltr"&gt;The use of artificial intelligence (AI) technologies constitutes a significant development in the Indian healthcare sector, with industry and government actors showing keen interest in designing and deploying these technologies. Even as key stakeholders explore ways to incorporate AI systems into their products and workflows, a growing debate on the accessibility, success, and potential harms of these technologies continues, along with several concerns over their large-scale adoption. A recurring question in India and the world over is whether these technologies serve a wider interest in public health. For example, the discourse on ethical and responsible AI in the context of emerging technologies and their impact on marginalised populations, climate change, and labour practices has been especially contentious.&lt;/p&gt;
&lt;p dir="ltr"&gt;For the purposes of this study, we define AI in healthcare as the use of artificial intelligence and related technologies to support healthcare research and delivery. The use cases include assisted imaging and diagnosis, disease prediction, robotic surgery, automated patient monitoring, medical chatbots, hospital management, drug discovery, and epidemiology. The emergence of AI auditing mechanisms is an essential development in this context, with several stakeholders ranging from big-tech to smaller startups adopting various checks and balances while developing and deploying their products. While auditing as a practice is neither uniform nor widespread within healthcare or other sectors in India, it is one of the few available mechanisms that can act as guardrails in using AI systems.&lt;/p&gt;
&lt;p id="docs-internal-guid-874e64d9-7fff-d16c-ed57-d245c7214bec" dir="ltr"&gt;Our primary research questions are as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;What is the current data supply chain infrastructure for organisations operating in the healthcare ecosystem in India?&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;What auditing practices, if any, are being followed by technology companies and healthcare institutions?&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;What best practices can organisations based in India adopt to improve AI auditability?&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p id="docs-internal-guid-28d92dc2-7fff-c54b-addb-63beee845252" dir="ltr"&gt;This was a mixed methods study, comprising a review of available literature in the field, followed by quantitative and qualitative data collection through surveys and in-depth interviews. The findings from the study offer essential insights into the current use of AI in the healthcare sector, the operationalisation of the data supply chain, and policies and practices related to health data sourcing, collection, management, and use. It also discusses ethical and practical challenges related to privacy, data protection and informed consent, and the emerging role of auditing and other related practices in the field. Some of the key learnings related to the data supply chain and auditing include:&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Technology companies, medical institutions, and medical practitioners rely on an equal mix of proprietary and open sources of health data and there is significant reliance&amp;nbsp; on datasets from the Global North.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Data quality checks are extant, but they are seen as an additional burden; with the removal of personally identifiable information being a priority during processing.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Collaboration between medical practitioners and AI developers remains limited, and feedback between users and developers of these technologies is limited.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;There is a heavy reliance on external vendors to develop AI models, with many models replicated from existing systems in the Global North.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Healthcare professionals are hesitant to integrate AI systems into their workflows, with a significant gap stemming from a lack of training and infrastructure to integrate these systems successfully.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;The understanding and application of audits are not uniform across the sector, with many stakeholders prioritising more mainstream and intersectional concepts such as data privacy and security in their scope.&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p dir="ltr"&gt;Based on these findings, this report offers a set of recommendations addressed to different stakeholders such as healthcare professionals and institutions, AI developers, technology companies, startups, academia, and civil society groups working in health and social welfare. These include:&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Improve data management across the AI data supply chain&lt;span class="Apple-tab-span"&gt; &lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Adopt standardised data-sharing policies&lt;/em&gt;. This would entail building a standardised policy that adopts an intersectional approach to include all stakeholders and areas where data is collected to ensure their participation in the process. This would also require robust feedback loops and better collaboration between the users, developers, and implementers of the policy (medical professionals and institutions), and technologists working in AI and healthcare. &lt;span class="Apple-tab-span"&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Emphasise not just data quantity but also data quality&lt;/em&gt;. Given that the limited quantity and quality of Indian healthcare datasets present significant challenges, institutions engaged in data collection must consider their interoperability to make them available to diverse stakeholders and ensure their security. This would include recruiting additional support staff for digitisation to ensure accuracy and safety and maintain data quality.&lt;span class="Apple-tab-span"&gt; &lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Streamline AI auditing as a form of governance&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Standardise the practice of AI auditing&lt;/em&gt;. A certain level of standardisation in AI auditing would contribute to the growth and contextualisation of these practices in the Indian healthcare sector. Similarly, it would also aid in decision-making among implementing institutions.&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Build organisational knowledge and inter-stakeholder collaboration&lt;/em&gt;. It is imperative to build knowledge and capacity among technical experts, healthcare professionals, and auditors on the technical details of the underlying architecture and socioeconomic realities of public health. Hence, collaboration and feedback are essential to enhance model development and AI auditing.&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Prioritise transparency and public accountability in auditing standards&lt;/em&gt;. Given that most healthcare institutions procure externally developed AI systems, some form of internal or external AI audit would contribute to better public accountability and transparency of these technologies.&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Centre public good in India’s AI industrial policy&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Adopt focused and transparent approaches to investing in and financing AI projects&lt;/em&gt;. An equitable distribution of AI spending and associated benefits is essential to guarantee that these investments and their applications extend beyond private healthcare, and that implementation approaches prioritise the public good. This would involve investing in entire AI life cycles instead of merely focusing on development and promoting transparent public–private partnerships.&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Strengthen regulatory checks and balances for AI governance.&lt;/em&gt;&lt;br /&gt;While an overarching law to regulate AI technologies may still be under debate, existing regulations may be amended to bring AI within their ambit. Furthermore, all regulations must be informed by stakeholder consultations to guarantee that the process is transparent, addresses the rights and concerns of all the parties involved, and prioritises the public good.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india'&gt;https://cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amrita Sengupta (PI), Shweta Mohandas (Co-PI), (In alphabetical order) Abhineet Nayyar, Chetna VM, Puthiya Purayil Sneha, Yatharth</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Health Tech</dc:subject>
    
    
        <dc:subject>RAW Publications</dc:subject>
    
    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>Featured</dc:subject>
    
    
        <dc:subject>Healthcare</dc:subject>
    
    
        <dc:subject>Homepage</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2024-11-30T08:17:48Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/big-data-in-india-benefits-harms-and-human-rights-a-report">
    <title>Big Data in India: Benefits, Harms, and Human Rights - Workshop Report</title>
    <link>https://cis-india.org/internet-governance/big-data-in-india-benefits-harms-and-human-rights-a-report</link>
    <description>
&lt;b&gt;The Centre for Internet and Society held a one-day workshop on “Big Data in India: Benefits, Harms and Human Rights” at India Habitat Centre, New Delhi on the 1st of October, 2016. This report is a compilation of the issues discussed, ideas exchanged, and challenges recognized during the workshop. The objective of the workshop was to discuss aspects of big data technologies in terms of harms, opportunities and human rights. The discussion was designed around an extensive study of current and potential future uses of big data for governance in India, which CIS has undertaken over the last year with support from the MacArthur Foundation.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Contents&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="#1"&gt;&lt;strong&gt;Big Data: Definitions and Global South Perspectives&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="#2"&gt;&lt;strong&gt;Aadhaar as Big Data&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="#3"&gt;&lt;strong&gt;Seeding&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="#4"&gt;&lt;strong&gt;Aadhaar and Data Security&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="#5"&gt;&lt;strong&gt;Aadhaar’s Relational Arrangement with Big Data Scheme&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="#6"&gt;&lt;strong&gt;The Myths surrounding Aadhaar&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="#7"&gt;&lt;strong&gt;IndiaStack and FinTech Apps&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="#8"&gt;&lt;strong&gt;Problems with UID&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id="1"&gt;Big Data: Definitions and Global South Perspectives&lt;/h2&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;“Big Data” has been defined by multiple scholars till date. The first consideration at the workshop was to discuss various definitions of big data, and also to understand what could be considered Big Data in terms of governance, especially in the absence of academic consensus. One of the most basic ways to define it, as given by the National Institute of Standards and Technology, USA, is to take it to be the data that is beyond the computational capacity of current systems. This definition has been accepted by the UIDAI of India. Another participant pointed out that Big Data is not only indicative of size, but rather the nature of data which is unstructured, and continuously flowing. The Gartner definition of Big Data relies on the three Vs i.e. Volume (size), Velocity (infinite number of ways in which data is being continuously collected) and Variety (the number of ways in which data can be collected in rows and columns).&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The presentation also looked at ways in which Big Data is different from traditional data. It was pointed out that it can accommodate diverse unstructured datasets, and it is ‘relational’ i.e. it needs the presence of common field(s) across datasets which allows these fields to be conjoined. For e.g., the UID in India is being linked to many different datasets, and they don’t constitute Big Data separately, but do so together. An increasingly popular definition is to define data as “Big Data” based on what can be achieved through it. It has been described by authors as the ability to harness new kinds of insight which can inform decision making. It was pointed out that CIS does not subscribe to any particular definition, and is still in the process of coming up with a comprehensive definition of Big Data.&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Further, discussion touched upon the approach to Big Data in the Global South. It was pointed out that most discussions about Big Data in the Global South are about the kind of value that it can have, the ways in which it can change our society. The Global North, on the other hand, &amp;nbsp;has moved on to discussing the ethics and privacy issues associated with Big Data.&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;After this, the presentation focussed on case studies surrounding key Central Government initiatives and projects like Aadhaar, Predictive Policing, and Financial Technology (FinTech).&lt;/p&gt;
&lt;h2 id="2"&gt;Aadhaar as Big Data&lt;/h2&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;In presenting CIS’ case study on Aadhaar, it was pointed out that initially, Aadhaar, with its enrollment dataset was by itself being seen as Big Data. However, upon careful consideration in light of definitions discussed above, it can be seen as something that enables Big Data. The different e-governance projects within Digital India, along with Aadhaar, constitute Big Data. The case study discussed the Big Data implications of Aadhaar, and in particular looked at a ‘cradle to grave’ identity mapping through various e-government projects and the datafication of various transaction generated data.&lt;/p&gt;
&lt;h2 id="3"&gt;Seeding&lt;/h2&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Any digital identity like Aadhaar typically has three features: 1. Identification i.e. a number or card used to identify yourself; 2. Authentication, which is based on your number or card and any other digital attributes that you might have; 3. Authorisation: As bearers of the digital identity, we can authorise the service providers to take some steps on our behalf. The case study discussed ‘seeding’ which enables the Big Data aspects of Digital India. In the process of seeding, different government databases can be seeded with the UID number using a platform called Ginger. Due to this, other databases can be connected to UIDAI, and through it, data from other databases can be queried by using your Aadhaar identity itself. This is an example of relationality, where fractured data is being brought together. At the moment, it is not clear whether this access by UIDAI means that an actual physical copy of such data from various sources will be transferred to UIDAI’s servers or if they will &amp;nbsp;just access it through internet, but the data remains on the host government agency’s server. An example of even private parties becoming a part of this infrastructure was raised by a participant when it was pointed out that Reliance Jio is now asking for fingerprints. This can then be connected to the relational infrastructure being created by UIDAI. The discussion then focused on how such a structure will function, where it was mentioned that as of now, it cannot be said with certainty that UIDAI will be the agency managing this relational infrastructure in the long run, even though it is the one building it.&lt;/p&gt;
&lt;h2 id="4"&gt;Aadhaar and Data Security&lt;/h2&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;This case study also dealt with the sheer lack of data protection legislation in India except for S.43A of the IT Act. The section does not provide adequate protection as the constitutionality of the rules and regulations under S.43A is ambivalent. More importantly, it only refers to private bodies. Hence, any seeding which is being done by the government is outside the scope of data protection legislation. Thus, at the moment, no legal framework covers the processes and the structures being used for datasets. Due to the inapplicability of S.43A to public bodies, questions were raised as to the existence of a comprehensive data protection policy for government institutions. Participants answered the question in the negative. They pointed out that if any government department starts collecting data, they develop their own privacy policy. There are no set guidelines for such policies and they do not address concerns related to consent, data minimisation and purpose limitation at all. Questions were also raised about the access and control over Big Data with government institutions. A tentative answer from a participant was that such data will remain under the control of &amp;nbsp;the domain specific government ministry or department, for e.g. MNREGA data with the Ministry of Rural Development, because the focus is not on data centralisation but rather on data linking. As long as such fractured data is linked and there is an agency that is responsible to link them, this data can be brought together. Such data is primarily for government agencies. But the government is opening up certain aspects of the data present with it for public consumption for research and entrepreneurial purposes.The UIDAI provides you access to your own data after paying a minimal fee. The procedure for such access is still developing.&lt;/p&gt;
&lt;h2 id="5"&gt;Aadhaar’s Relational Arrangement with Big Data Scheme&lt;/h2&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The various Digital India schemes brought in by the government were elucidated during the workshop. It was pointed out that these schemes extend to myriad aspects of a citizen’s daily life and cover all the essential public services like health, education etc. This makes Aadhaar imperative, even though the Supreme Court has observed that it is not mandatory for every citizen to have a unique identity number. The benefits of such identity mapping and the ecosystem being generated by it were also enumerated during the discourse. But the complete absence of any data ethics or data confidentiality principles leaves us unaware of the costs at which these benefits are being conferred on us. Apart from surveillance concerns, the knowledge gap being created between citizens and the government was also flagged. Three main benefits touted to be provided by Aadhaar were then analysed. The first is the efficient delivery of services. This appears to be an overblown claim, as Aadhaar-specific digitisation and automation does not affect the way in which employment will be provided to citizens through MNREGA or how wage payment delays will be overcome. These are administrative problems that Aadhaar and associated technologies cannot solve. The second is convenience to citizens. The fallacies in this assertion were also brought out. Before the Aadhaar scheme was rolled out, ration cards were issued based on certain exclusion and inclusion criteria. Those criteria remain the same, while another hurdle in the form of Aadhaar has been created. As India still lacks supporting infrastructure such as electricity and server connectivity, among other things, Aadhaar is acting as a barrier rather than making it convenient for citizens to enroll in such schemes. The third benefit is fraud management. Here, a participant pointed out that this benefit was due to digitisation in the form of GPS chips in food delivery trucks and electronic payment, and not the relational nature of Aadhaar, which is only concerned with the linking-up or relational part. On deduplication, it was pointed out how various government agencies have tackled it quite successfully using technology other than biometrics, which is unreliable at the best of times.&lt;/p&gt;
&lt;h2 id="6"&gt;The Myths surrounding Aadhaar&lt;/h2&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The discussion also reflected on the fact that Aadhaar is often considered a panacea that subsumes all kinds of technologies to tackle leakages. However, this does not take into account the fact that leakages happen in many ways. A system should have been built to tackle those specific kinds of leakages, but the focus is solely on Aadhaar as the cure-all. Notably, participants who have been part of the government pointed out that this framing is misleading: Aadhaar should instead be seen as the first step towards a more digitally enhanced country that combines different technologies through one medium.&lt;/p&gt;
&lt;h2 id="7"&gt;IndiaStack and FinTech Apps&lt;/h2&gt;
&lt;h3 id="71"&gt;What is India Stack?&lt;/h3&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The focus then shifted to another extremely important Big Data project, India Stack, which is being conceptualised and developed for the NPCI by a team of private developers called iStack. It builds on the trinity of the UID project, the Jan Dhan Yojana and mobile services to develop cashless, presence-less and paperless layers, along with a granular consent layer, on top of the UID infrastructure in order to digitise India.&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;A participant pointed out that the idea of India Stack is to use UID as a platform and keep stacking things on it, so that more and more applications are developed. This in turn will help us move from being a ‘data poor’ country to a ‘data rich’ one. However, as evidenced by the TAGUP report - a report about the creation of National Information Utilities to manage the data held by the government - the economic benefits of this data accrue to corporations and not the common man. The TAGUP report openly talks about the privatisation of data.&lt;/p&gt;
&lt;h3 id="72"&gt;Problems with India Stack&lt;/h3&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The granular consent layer of India Stack has not been developed yet, but it is proposed to be based on MIT Media Lab’s OpenPDS system. The idea is that, on the basis of the choices made by the person concerned, access to that person’s personal information may be granted to an agency like a bank. What is more revolutionary is that India Stack might even revoke this access if the person expresses a wish to do so, or if the surrounding circumstances signal to India Stack that it would be prudent to do so. It should be pointed out that the technology required for OpenPDS is extremely complex and is not available in India; moreover, it is not clear how this system would work. Apart from this, even the paperless layer has its faults and has been criticised by many since its inception, because an actual government-signed and stamped paper has been the basis of a claim. In the paperless system, you are provided a Digilocker in which all your papers are stored electronically, on the basis of your UID number. However, it was brought to light that this does not take into account those who do not want a Digilocker or a UID number, or cases where people do not have access to their digital records. How, in such cases, will people make claims?&lt;/p&gt;
&lt;h3 id="73"&gt;A Digital Post-Dated Cheque: Its Ramifications&lt;/h3&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;A key change that FinTech apps and the surrounding ecosystem want to make is to create a digital post-dated cheque, so as to allow individuals to get loans from their mobiles, especially in remote areas. This will potentially cut out the need to construct new banks, reducing capital expenditure while allowing credit services to grow. The direct transfer of money between UID numbers without the involvement of banks is a step to further help this ecosystem grow. Once an individual consents to such a system, however, automatic transfer of money from one’s bank accounts will be effected, regardless of the reason for payment. This is different from the auto-debit deductions done by banks at present, as in the present system banks hold other forms of collateral as well, and the automatic deduction is effected only if those other forms are defaulted upon. There is no knowledge as to whether this consent will be reversible or irreversible. As Jan Dhan Yojana accounts are zero-balance accounts, the account holder could be bled dry. The implications of schemes such as “Loan in under 8 minutes” were also discussed. The advantage of such schemes is that transaction costs are reduced: the financial institution can grant loans for the minimum amount without any additional enquiries. It was pointed out that this new system is based on living on future income, much like the US housing bubble crash. Interestingly, in Public Distribution Systems, biometrics are insisted upon even though they disrupt the system. This can be seen as part of the larger infrastructure to ensure that digital post-dated cheques become a success.&lt;/p&gt;
&lt;h3 id="74"&gt;The Role of FinTech Apps&lt;/h3&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;FinTech ‘apps’ are being presented with the aim of propagating financial inclusion. The Technology Advisory Group for Unique Projects report stated that, as managing such information sources is a big task, National Information Utilities (NIUs) should be set up for data sources, much like electricity utilities. As per the report, these NIUs will follow a fee-based model, charging for their services to government schemes. The report identified two key NIUs, namely the National Payments Corporation of India (NPCI) and the Goods and Services Tax Network (GSTN). The key use that FinTech applications will serve is credit scoring. Traditional credit scoring data sources comprised only a thin file of records for an individual, but the data that FinTech apps collect - a person’s UID number, mobile number and bank account number, all linked up - allows for a far more comprehensive credit rating. Government departments are willing to share this data with FinTech apps as they get analysis in return. Thus, by using UID and the varied data sources that have been linked together by it, a ‘thick file’ is now being created by FinTech apps. Banking apps have not yet gone down the route of FinTech apps in utilising Big Data for credit scoring purposes.&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;There are two main problems with such apps. First, there is no uniform way of credit scoring, which distorts the rate of interest a person has to pay. Second, the consent layer adds another layer of complication, as refusal to share mobile data with a FinTech app may lead to the app declaring one a risky investment, thus subjecting that individual to a higher rate of interest.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h3 id="75"&gt;Regulation of FinTech Apps and the UID Infrastructure&lt;/h3&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;India Stack and the applications being built on it generate a lot of transaction metadata that is very intimate in nature. The privacy provisions of the UID legislation do not cover such data, and the granular consent layer that has been touted to cover it has yet to come into existence. Moreover, Big Data is based on the sharing and linking of data; here, privacy concerns and Big Data objectives clash, as Big Data by its very nature challenges privacy principles like data minimisation and purpose limitation. The need for regulation to cover the various new apps and infrastructure being developed was pointed out.&lt;/p&gt;
&lt;h2 id="8"&gt;Problems with UID&lt;/h2&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;It has been observed that any problem with Aadhaar is usually labelled a teething problem, with the claim that it will be solved within the next 10 years. But this raises the question: if so, why is the system live right now?&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Aadhaar is essentially a new data condition and a new exclusion or inclusion criterion. In Rajasthan, after the introduction of biometric Point of Service (POS) machines at ration shops, data-driven exclusion was found to affect 45% of the population availing PDS services. This number also includes those who were excluded from the database by being included in the wrong dataset. There is no information to tell us how many actual duplicates, and how many genuine ration card holders, were weeded out or excluded by POS.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;It was also mentioned that any attempt to question Aadhaar is considered an attempt to go back to the manual system, and this binary thinking needs to change. Big Data has the potential to benefit people, as evidenced by the scholarship and pension portals. However, Big Data’s problems arise in systems like the PDS, where there is centralised exclusion at the level of the cloud. Moreover, the quantity problem present in the PDS and MNREGA systems persists: people may still receive less grain or lower wages even with the analysis of biometrics, which shows that there are better technologies to tackle these problems. At present, accountability mechanisms are being weakened, as the poor do not know where to go for redressal, and there is no mechanism to check whether the people excluded are actually duplicates. At the time of UID enrollment, out of 90 crore, 9 crore were rejected. There was no feedback or follow-up mechanism to figure out why people were being rejected; it was just assumed that they might have been duplicates.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Another problem is the rolling out of software without checking for inefficiencies or problems at a beta testing phase. The control of developers over this software is so extensive that it can be changed easily, without any accountability. The decision-making components of the software are all proprietary, like the de-duplication algorithm being used by the UIDAI. This leads to a loss of accountability, because the system itself is in flux, none of it is in the public domain, and there are no means to analyse it in a transparent fashion.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;These schemes are also being pushed through due to database politics. A field study of the NPR, another Big Data scheme, found that you are assumed to be an alien if you do not have the documents to prove that you are a citizen. Hence, unless you fulfill certain conditions of a database, you are excluded and are not eligible for the benefits that being on the database affords you.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h3 id="81"&gt;Why is the private sector pushing for UIDAI and the surrounding ecosystem?&lt;/h3&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Financial institutions stand to gain from encouraging the UID, as it encourages the credit culture and reduces transaction costs. Another advantage for the private sector is perhaps the more obvious one: it allows for efficient marketing of products and services.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The fears and challenges mentioned above have actually been observed on the ground, as shown by a case study from West Bengal on the smart meters being installed there by the state electricity utility. While the data coming in from these smart meters is being used to develop a more efficient system, it is also being used as a surrogate for income mapping on the basis of the electricity bills being paid, which helps companies profile neighbourhoods. The technical officer who first receives the data has complete control over it and can easily misuse it. This case study again shows that instruments like Aadhaar and India Stack are limited in their application and are not the panacea they are portrayed to be.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;A participant pointed out that, in light of the above discussions, the aim appears to be to get all kinds of data through any source, link all of it to the UID number once the UID has been obtained, and then use it in all the corporate schemes being started. Most of the problems associated with Big Data are being described as teething problems. The India Stack and FinTech scheme is coming in when we already know about the problems being faced by UID; the same problems will be faced by India Stack as well.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h3 id="82"&gt;Can you opt out of the Aadhaar system and the surrounding ecosystem?&lt;/h3&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The discussion then turned to whether one can voluntarily opt out of Aadhaar. It was pointed out that the government has stated that you cannot. Further, the privacy principles in the UIDAI bill are ambiguously worded, giving individuals recourse only for basic matters such as the correction of their personal information. The enforcement mechanism in the UIDAI Act is also severely deficient: there is no notification procedure if a data breach occurs, and the appellate body, the Cyber Appellate Tribunal, has not been set up in three years.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h2 id="9"&gt;CCTNS: Big Data and its Predictive Uses&lt;/h2&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h3 id="91"&gt;What is Predictive Policing?&lt;/h3&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The next big Big Data case study was on the Crime and Criminal Tracking Network &amp;amp; Systems (CCTNS). Originally it was supposed to be a digitisation and interconnection scheme, in which police records would be digitised and police stations across the length and breadth of the country would be interconnected. But in the last few years, some police departments, such as those of Chandigarh, Delhi and Jharkhand, have mooted the idea of moving on to predictive policing techniques. This envisages the use of existing statistical and actuarial techniques along with many other types of data. It works in four ways: 1. predicting the place and time where crimes might occur; 2. predicting potential future offenders; 3. creating profiles of past crimes in order to predict future crimes; 4. predicting groups of individuals who are likely to be victims of future crimes.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h3 id="92"&gt;How is Predictive Policing done?&lt;/h3&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;To achieve this, the following process is followed: 1. data collection from various sources, including structured data like FIRs and unstructured data like call detail records, neighbourhood data, seasonal crime patterns etc.; 2. analysis, using approaches like the near repeat theory, regression models based on risk factors etc.; 3. intervention.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h3 id="93"&gt;Flaws in Predictive Policing and questions of bias&lt;/h3&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;An obvious weak point in the system is that if the initial data going into it is wrong or biased, the analysis will also be wrong. Efforts are being made to detect such biases; an important way to do so will be to build data collection practices into the system that protect its accuracy. The historical data being entered into the system carries forward prejudices inherited from the British Raj, along with biases based on religion, caste, socio-economic background etc.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;One participant brought up the issue of data digitisation in police stations, and the impact of such haphazard, unreliable data on a Big Data system. This, coupled with the paucity of data, is bound to lead to arbitrary results. An effective example was that of black neighbourhoods in the USA. These are considered problematic and are thus policed more, leading to a higher crime rate, as residents are arrested for doing things that white people in affluent neighbourhoods get away with. This in turn further perpetuates the crime rate, and it becomes a self-fulfilling prophecy. In India, such a phenomenon might easily develop in the case of migrants, de-notified tribes, Muslims etc. A counter-view on bias and discrimination was offered here. One participant pointed out that haphazard or poor-quality data is not a colossal issue, as private companies are willing to fill this void and are actually doing so in exchange for access to the raw data. It was also pointed out that ‘bias’ by itself is being used as an all-encompassing term. There are multiplicities of biases, and while analysing the data, care should be taken to keep in mind that one person’s bias and analysis might, and usually does, differ from another’s. Even after a computer has analysed the data, the data still falls into human hands for implementation.&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The issue of such databases being used to target particular communities on the basis of religion, race, caste, ethnicity and other parameters was raised. Questions about the control and analysis of data were also discussed, i.e. whether it will be top-down, with data analysis done in state capitals, or whether the analysis will be done at the village and thana levels as well. It was pointed out how this could play a major role in the success of the system and in the possible persecutory treatment of citizens, as policemen at these levels will have different perceptions of what the data is saying. It was further pointed out that, at the moment, there is no clarity on the mode of implementation of Big Data policing systems. Police in the USA have been seen to rely on Big Data so heavily that they become ‘data myopic’. For those who are on the wrong side of Big Data in the Indian context, laws like preventive detention can be heavily misused. There is a very high chance that predictive policing, due to the inherent biases in the system and the prejudices and inefficiency of the legal system, will further suppress the already targeted sections of society. A counterpoint was raised, suggesting that, contrary to our fears, CCTNS might lead to changes in our understanding and help us overcome longstanding biases.&lt;/p&gt;
&lt;h3 id="94"&gt;Open Knowledge Architecture as a solution to Big Data biases?&lt;/h3&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The conference then mulled over whether ‘Open Knowledge’ architecture can rid Big Data of its biases and inaccuracies, given enough eyes. It was pointed out that Open Knowledge itself cannot provide foolproof protection against these biases, as the people who make up those eyes are predominantly male, belong to the affluent sections of society, and suffer from these biases themselves.&lt;/p&gt;
&lt;h3 id="95"&gt;Who exactly is Big Data supposed to serve?&lt;/h3&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The discussion also looked at questions such as: who is this data for? The Janata Information System (JIS) is a concept developed by MKSS in which the data collected and generated by the government is taken to be for the common citizens; for example, MNREGA data should be used to serve the purposes of the labourers. The raw data, as available at the moment, usually cannot be used by the common man, as it is so vast and full of information that is not useful to them at all. It was pointed out that while using Big Data for policy planning purposes, the actual string of information that turned out to be needed was very little, but the task of unravelling this data for civil society purposes is humongous. By presenting the data in the right manner, the individual can be empowered, and the importance of data presentation was also flagged. It was agreed that the content of the data should be for the labourer and not an MNC, as the MNC has the capability to utilise the raw data on its own regardless.&lt;/p&gt;
&lt;h2 id="10"&gt;Concerns about Big Data usage&lt;/h2&gt;
&lt;ol&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Participants pointed out that privacy concerns are usually brushed under the carpet, due to a belief that the law is sufficient or that the privacy battle has already been lost.&lt;/p&gt;
&lt;/li&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;In the absence of knowledge of domain and context, Big Data analysis is quite limited. Big Data’s accuracy and potential to solve problems need to be factually backed.&lt;/p&gt;
&lt;/li&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The narrative of Big Data often rests on the assumption that descriptive statistics take over from inferential statistics, thus eliminating the need for domain-specific knowledge. It is claimed that the data is so big that it will describe everything we need to know.&lt;/p&gt;
&lt;/li&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Big Data is creating a shift from a deductive model of scientific rigour to an inductive one. In response, a participant offered the idea that troves of good data allow us to form informed questions, on the basis of which the deductive model can then be built. A hybrid approach combining both deductive and inductive methods might serve us best.&lt;/p&gt;
&lt;/li&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The need to collect the right data, in the correct format and in the right place, was also expressed.&lt;/p&gt;
&lt;/li&gt;&lt;/ol&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h2 id="11"&gt;Potential Research Questions &amp;amp; Participants’ Areas of Research&lt;/h2&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Following this discussion, participants brainstormed to come up with potential areas of research and research questions. They have been captured below:&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h3 id="111"&gt;Big Data, Aadhaar and India Stack&lt;/h3&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;ol&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Has Aadhaar been able to tackle illegal ways of claiming services or are local negotiations and other methods still prevalent?&lt;/p&gt;
&lt;/li&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Is the consent layer of India Stack being developed in a way that gives the UID user an opportunity to provide informed consent? OpenPDS and its EU counterpart, the My Data structure, were designed for countries with strong privacy laws. Importantly, they were meant for information shared on social media, not for an individual’s health or credit history. India is using the model in a completely different sphere, without strong data protection laws. What were the granular consent layer structures in the West designed for, and what were they supposed to protect?&lt;/p&gt;
&lt;/li&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The question of ownership of data needs to be studied, especially in the context of a globalised world where MNCs are collecting copious amounts of data about Indian citizens. What is the interaction of private parties in this regard?&lt;/p&gt;
&lt;/li&gt;&lt;/ol&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h3 id="112"&gt;Big Data and Predictive Policing&lt;/h3&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;ol&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;How are inequalities being created through Big Data systems? Lessons should be taken from the Western experience with predictive policing and other Big Data techniques: they tend to perpetuate the biases already ingrained in the system.&lt;/p&gt;
&lt;/li&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;It was also pointed out that while studying these topics, and anything related to technology generally, we become aware of a divide between the computational sciences and the social sciences. This divide needs to be erased if Big Data, or any kind of data, is to be used efficiently, and there should be cross-pollination between different groups of academics. An example of this is the ‘computational social science’ departments that have come up in the last 3-4 years.&lt;/p&gt;
&lt;/li&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Why are so many interim promises made by Big Data failing? A study of this phenomenon needs to be done from a social science perspective. This will allow one to look at it from a different angle.&lt;/p&gt;
&lt;/li&gt;&lt;/ol&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;h3 id="113"&gt;Studying Big Data&lt;/h3&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;ol&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;What is the historical context of the terms of reference being used for Big Data? The current Big Data debate in India is based on parameters set by the West. For a better understanding of Big Data, it was suggested that P.C. Mahalanobis’ experience while conducting the Indian census (the Big Data of its time) can be looked at for a historical perspective. This comparison might allow us to discover questions that are important in the Indian context. It was also suggested that rather than using ‘Big Data’ as a catchphrase to describe these new technological innovations, we need to be more discerning.&lt;/p&gt;
&lt;/li&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;What are the ideological aspects that must be considered while studying Big Data? What does the dialectical promise of technology mean? It was contended that every time there is a shift in technology, the zeitgeist of that period is extremely excited and there are claims that it will solve everything. There’s a need to study this dialectical promise and the social promise surrounding it.&lt;/p&gt;
&lt;/li&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Apart from the legitimate fears that Big Data might lead to exclusion, in what ways might it improve inclusion too?&lt;/p&gt;
&lt;/li&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;The diminishing barrier between the public and private self, which is a tangent to the larger public-private debate, was mentioned.&lt;/p&gt;
&lt;/li&gt;&lt;li style="list-style-type: decimal;" dir="ltr"&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;How does one distinguish between technology failure and process failure while studying Big Data?&lt;/p&gt;
&lt;/li&gt;&lt;/ol&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Big Data: A Friend?&lt;/p&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;In the concluding session, it was acknowledged that the Big Data moment cannot be wished away. The use of analytics and predictive modelling by the private sector is now commonplace, and India has moved towards a database state through UID and Digital India. A nuanced debate that does away with the false dichotomy of being either a Big Data enthusiast or a Luddite is crucial.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;A participant offered two approaches to solving a Big Data problem. The first was the Big Data due process framework, which states that if a decision has been taken that impacts the rights of a citizen, it needs to be open to cross-examination. The efficacy and practicality of such an approach is still not clear. The second, slightly paternalistic in nature, was the approach where Big Data problems would be solved at the data science level itself. This is much like the affirmative algorithmic approach, which says that if the data for a minority community is not available in a particular dataset, it should be artificially introduced into the dataset. It was also suggested that carefully calibrated free market competition can be used to regulate Big Data: for example, a private payment wallet company that charges more but does not share your data at all.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;Another important observation was the need to understand Big Data in a Global South context and account for the unique challenges that arise there. While the convenience of Big Data is promising, its actual manifestation depends on externalities, such as connectivity and accurate and adequate data, that must be studied in the Global South.&lt;/p&gt;
&lt;div style="text-align: justify;" dir="ltr"&gt;&amp;nbsp;&lt;/div&gt;
&lt;p style="text-align: justify;" dir="ltr"&gt;While the promises of Big Data are encouraging, it is also important to examine its impacts and its interaction with people's rights. Regulatory solutions to mitigate the harms of big data while also reaping its benefits need to evolve.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/big-data-in-india-benefits-harms-and-human-rights-a-report'&gt;https://cis-india.org/internet-governance/big-data-in-india-benefits-harms-and-human-rights-a-report&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Vidushi Marda, Akash Deep Singh and Geethanjali Jujjavarapu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Human Rights</dc:subject>
    
    
        <dc:subject>UID</dc:subject>
    
    
        <dc:subject>Big Data</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Machine Learning</dc:subject>
    
    
        <dc:subject>Featured</dc:subject>
    
    
        <dc:subject>Digital India</dc:subject>
    
    
        <dc:subject>Aadhaar</dc:subject>
    
    
        <dc:subject>Information Technology</dc:subject>
    
    
        <dc:subject>E-Governance</dc:subject>
    

   <dc:date>2016-11-18T12:58:19Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific">
    <title>‘Techplomacy’ and the negotiation of AI standards for the Indo-Pacific</title>
    <link>https://cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific</link>
    <description>
        &lt;b&gt;Researchers at the Australian Strategic Policy Institute have partnered with the Centre for Internet and Society (Bengaluru) to produce a ‘techplomacy guide’ on negotiating AI standards for stakeholders in the Indo-Pacific.&lt;/b&gt;
&lt;p&gt;This is a modified version of the post that appeared in &lt;a href="https://www.aspistrategist.org.au/high-time-for-australia-and-india-to-step-up-their-tech-diplomacy/"&gt;&lt;strong&gt;The Strategist&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;By Arindrajit Basu, with inputs from, and review by, Amrita Sengupta and Isha Suri&lt;/strong&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Recently, UN member states elected American candidate Doreen Bogdan-Martin as the next secretary-general of the International Telecommunications Union (ITU), in what has been called "&lt;/span&gt;&lt;a href="https://www.brookings.edu/blog/techtank/2022/08/12/the-most-important-election-you-never-heard-of/"&gt;the most important election you have never heard of&lt;/a&gt;&lt;span&gt;". While this technical body's work may be esoteric, the election was fiercely contested, with a Russian candidate (and former Huawei executive) in the running, aptly reflecting the geopolitical competition that is underway in determining the “&lt;/span&gt;&lt;a href="https://www.lowyinstitute.org/the-interpreter/election-future-internet"&gt;future of the internet”&lt;/a&gt;&lt;span&gt; through the technical standards that underpin it. The “Internet Protocol” (IP), the set of rules governing the communication and exchange of data over the internet, is itself subject to political contestation between a Sino-Russian vision that would see the standard give way to greater government control and a US vision ostensibly rooted in more inclusive multi-stakeholder participation.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;As critical and emerging technologies take geopolitical centre-stage, the global tug of war over their development, utilisation, and deployment is playing out most ferociously at standard-setting organisations, at arm’s length from the media limelight. Powerful state and non-state actors alike are already seeking to shape standards in ways that suit their economic, political, and normative priorities. It is time for emerging economies, middle powers, and a wider array of private actors and members of civil society to play a more meaningful and tangible role in the process.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;What are standards and why do they matter?&lt;/strong&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Simply put, standards are blueprints or protocols with requirements which ‘standardise’ products and related processes around the world, thus ensuring that they are interoperable, safe and sustainable. For example, USB, WiFi or a QWERTY keyboard can be used around the world because equipment adopting these standards is built to common technical specifications. Standards are negotiated both domestically, at standard-setting bodies such as the Bureau of Indian Standards (BIS) or Standards Australia (SA), and globally, at standard-development organisations such as the International Telecommunications Union (ITU) or the International Organisation for Standardisation (ISO). While standards are not legally binding unless they are explicitly imposed as requirements in legislation, they have immense coercive value. Not adhering to recognised standards means that certain products may not reach markets, as they are not compatible with consumer requirements or cannot claim to meet health or safety expectations. The harmonisation of internationally recognised standards serves as the bedrock for global trade and commerce. Complying with a global standard is particularly critical because of its applicability across several markets. Further, international trade law provides that World Trade Organisation (WTO) members can impose trade-restrictive domestic measures only on the basis of published or soon-to-be-published international standards (Article 2.4 of the &lt;a href="https://www.wto.org/english/tratop_e/tbt_e/tbt_e.htm"&gt;Technical Barriers to Trade&lt;/a&gt; Agreement).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Shaping global standards is of immense geopolitical and economic value to states and the private sector alike. States that are able to ‘export’ their domestic technological standards internationally give their companies a significant economic advantage, because it is cheaper for those companies to adopt global standards that mirror the domestic ones they already meet. Further, companies draw huge revenue by holding patents to technologies that are essential to comply with a certain standard, popularly known as Standard Essential Patents (SEPs), and licensing them to other players who want to enter the market. For context, IPlytics &lt;a href="https://www.lightreading.com/5g/nokia-boasts-of-essential-5g-patents-milestone/d/d-id/773445"&gt;estimated&lt;/a&gt; that cumulative global royalty income from licensing SEPs was USD 20 billion in 2020, and this is anticipated to increase significantly in the coming years due to the massive technological upgradation currently underway.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;China’s push to influence the 5G standard at the Third Generation Partnership Project (3GPP) illustrates how prioritising standard-setting, through both domestic industrial policy and foreign policy, can provide rich economic and geopolitical dividends. After failing to meaningfully influence the setting of the 3G and 4G standards, the Chinese government commenced a national effort that sought to harmonise domestic standards, improve government coordination of standard-setting efforts, and obtain a first-mover advantage over other nations developing their own domestic 5G standards. This was combined with a diplomatic push that saw vigorous private sector &lt;a href="https://asia.nikkei.com/Politics/International-relations/China-leads-the-way-on-global-standards-for-5G-and-beyond"&gt;participation&lt;/a&gt; (Huawei put in 20 5G-related proposals, whereas Ericsson and Nokia put in just 16 and 10 respectively); the packing of key leadership positions in Working Groups with representatives from Chinese companies and institutions; and measures ensuring that all Chinese participants vote in unison for any proposal. It is no surprise, therefore, that Chinese companies now lead the way on 5G, with Huawei &lt;a href="https://insights.greyb.com/company-with-most-5g-patents/"&gt;owning&lt;/a&gt; the most 5G patents and having &lt;a href="https://www.cfr.org/blog/china-huawei-5g"&gt;finalised&lt;/a&gt; more 5G contracts than any other company, despite restrictions placed on Huawei’s gear by some countries. As detailed in its “Made in China 2025” strategy, China will now actively apply its winning strategy to other standard-setting avenues as well.&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Standards for Artificial Intelligence&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;A number of institutions, including private actors such as Huawei and CloudWalk, have contributed to China’s 2018 &lt;a href="https://cset.georgetown.edu/publication/artificial-intelligence-standardization-white-paper-2021-edition/"&gt;AI standardisation white paper&lt;/a&gt;, which was revised and updated in 2021. The white paper maps the work of SDOs in the field of AI standards and outlines a number of recommendations on how Chinese actors can use global SDOs to boost industrial competitiveness and globally promote “Chinese wisdom.” While there are cursory references to the role of standards in furthering “ethics” and “privacy,” the document does not outline how China will look to promote these values at SDOs.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Artificial Intelligence (AI) is a general-purpose technology with varied outcomes and use-cases. Top-down regulation of AI by governments is emerging across jurisdictions, but this may not keep pace with the rapidly evolving technology being developed by the private sector or adequately check the diversity of use-cases. On the other hand, private sector-driven self-regulatory initiatives focussing on ‘ethical AI’ are very broad and provide too much leeway for technology companies to evade the law. Technical standards offer a middle ground where multiple stakeholders can come together to devise uniform requirements for various stages of the AI development lifecycle. Of course, technical standards must co-exist with government-driven regulation as well as self-regulatory codes to holistically govern the deployment of AI globally. However, while the first two modes of regulation have received plenty of attention from policy-makers and scholars alike, AI standard-setting is an emerging field that has yet to be concretely evaluated from a strategic and diplomatic perspective.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Introducing a new CIS-ASPI project&lt;/strong&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;This is why researchers at the Australian Strategic Policy Institute have partnered with the Centre for Internet and Society (Bengaluru) to produce a ‘techplomacy guide’ on negotiating AI standards for stakeholders in the Indo-Pacific. Given the immense economic value of shaping global technical standards, it is imperative that SDOs not be dominated only by the likes of the US, Europe or China. Standards likely to impact a majority of nations, if devised only from the purview of a few countries, may be agnostic to the contexts and needs of emerging economies. Further, there are values at stake here. An excessive focus on the security, accuracy or quality of AI-driven products may make some technology palatable across the world even if that technology undermines core democratic values such as privacy and anti-discrimination. China’s &lt;a href="https://www.ft.com/content/c3555a3c-0d3e-11ea-b2d6-9bf4d1957a67"&gt;efforts&lt;/a&gt; at shaping Facial Recognition Technology (FRT) standards at the ITU have been criticised for moving beyond mere technical specifications into the domain of policy recommendations, despite the lack of representation of experts on human rights, consumer protection or data protection at the ITU. Accordingly, diversity of representation in terms of expertise, gender, and nationality at SDOs, including in leadership positions, is an aspect our project will explore with an eye towards creating more inclusive participation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Through this project, we hope to identify how key stakeholders drive these initiatives and how technological standards can be devised in line both with core democratic values and strategic priorities. Through extensive consultations with several stakeholder groups, we plan to offer learning products to policy makers and technical delegates alike, to enable Australian and Indian delegates to serve as ambassadors for our respective nations.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;For more information on this new and exciting project, funded by the Australian Department of Foreign Affairs and Trade as part of the Australia India Cyber and Critical Technology Partnership grants, visit &lt;/span&gt;&lt;a href="http://www.aspi.org.au/techdiplomacy"&gt;www.aspi.org.au/techdiplomacy&lt;/a&gt;&lt;span&gt; and https://www.internationalcybertech.gov.au/AICCTP-grant-round-two&lt;/span&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific'&gt;https://cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>arindrajit</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2022-10-21T17:16:10Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work">
    <title>AI in the Future of Work</title>
    <link>https://cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work</link>
    <description>
        &lt;b&gt;Artificial Intelligence and allied technologies form part of what is being called the fourth Industrial Revolution.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Some analysts &lt;a href="https://workofthefuturecongress.mit.edu/wp-content/uploads/2019/06/w25682.pdf"&gt;project the loss of jobs&lt;/a&gt; as AI replaces humans, especially in job roles that consist of repetitive tasks that are easier to automate. Another prediction is that AI, as preceding technologies, will &lt;a href="https://www.ilo.org/wcmsp5/groups/public/---dgreports/---cabinet/documents/publication/wcms_647306.pdf"&gt;enhance and complement&lt;/a&gt; human capability, rather than replacing it at large scales. AI at the workplace includes a wide range of technologies, from &lt;a href="https://www.infosys.com/human-amplification/Documents/manufacturing-ai-perspective.pdf"&gt;machine-to-machine interactions on the factory floor&lt;/a&gt;, to automated decision-making systems.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Studying the Platform Economy&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The platform economy, in particular, is dependent on AI in the design of aggregator platforms that form a two-way market between customers and workers. Platforms deploy AI at a number of different stages, from recruitment to assignment of tasks to workers. AI systems often reflect existing social biases, as they are built using biased datasets, and by non-diverse teams that are not attuned to such biases. This has been the case in the platform economy as well, where biased systems impact the ability of marginalised workers to access opportunities. To take an example, Amazon’s algorithm to filter workers’ resumes was &lt;a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G"&gt;biased against women&lt;/a&gt; because it was trained on 10 years of hiring data, and ended up reflecting the underrepresentation of women in the tech industry. That is not to say that algorithms introduce biases where they didn’t exist earlier, but that they take existing biases and hard code them into systems in a systematic and predictable manner.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Biases are made even more explicit in marketplace platforms, which allow employers to review workers’ profiles and skills for a fee. In a study of platforms offering home-based services in India, we found that marketplace platforms offer filtering mechanisms which allow employers to filter workers by demographic characteristics such as gender, age, religion, and in one case, caste (the research publication is forthcoming). The design of the platform itself, in this case, encourages and enables discrimination against workers. One of the leading platforms in India had ‘Hindu maid’ and ‘Hindu cook’ among its top search terms, reflecting the ways in which employers from the dominant religion are encouraged to discriminate against workers from minority religions in the Indian platform economy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another source of bias in the platform economy is rating and pricing systems, which can reduce the quality and quantum of work offered to marginalised workers. Rating systems exist across platform types - those that offer on-demand or location-based work, microwork platforms, and marketplace platforms. They allow customers and employers to rate workers on a scale, and are most often one-way feedback systems to review a worker’s performance (as our forthcoming research discusses, we found very few examples of feedback loops that also allow workers to rate employers). Rating systems &lt;a href="https://datasociety.net/pubs/ia/Discriminating_Tastes_Customer_Ratings_as_Vehicles_for_Bias.pdf"&gt;have been found&lt;/a&gt; to be a source of anxiety for workers, as they can be rated poorly for unfair reasons, including their demographic characteristics. Most platforms penalise workers for poor ratings, and may even stop them from accessing any tasks at all if their ratings fall below a certain threshold. Without adequate grievance redressal mechanisms that allow workers to contest poor ratings, rating systems are prone to reflect customer biases while appearing neutral. It is difficult to assess the level of such bias without companies releasing data comparing ratings of workers by their demographic characteristics, but it &lt;a href="https://datasociety.net/pubs/ia/Discriminating_Tastes_Customer_Ratings_as_Vehicles_for_Bias.pdf"&gt;has been argued&lt;/a&gt; that there is ample evidence to believe that demographic characteristics will inevitably impact workers’ ratings due to widespread biases.&lt;/p&gt;
&lt;h3&gt;Searching for a Solution&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;It is clear that platform companies need to be pushed into solving for biases and making their systems more fair and non-discriminatory. Some companies, such as Amazon in the example above, have responded by suspending algorithms that are proven to be biased. However, this is a temporary fix, as companies rarely seek to drop such projects indefinitely. In the platform economy, where algorithms are central to the business model of companies, complete suspension is near impossible. Amazon also tried another quick fix - it &lt;a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G"&gt;altered the algorithm&lt;/a&gt; to respond neutrally to terms such as ‘woman’. This is a process known as debiasing the model, through which any biased connections (such as between the word ‘woman’ and downgrading) being made by the algorithm are explicitly removed. Another solution is diversifying or debiasing datasets. In this example, the algorithm could be fed a larger sample of resumes and decision-making logics from industries that have a higher representation of women.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another set of solutions could be drawn from anti-discrimination law, which prohibits discrimination at the workplace. In India, anti-discrimination laws protect against wage inequality, as well as discrimination at the stage of recruitment for protected groups such as transgender persons. While it can be argued that biased rating systems lead to wage inequality, there are several barriers to applying anti-discrimination law to workers in the platform economy. One, most jurisdictions, including India, protect only employees from discrimination, not self-employed contractors. Another challenge is the lack of data to prove that rating or recruitment algorithms are discriminatory, without which legal recourse is impossible. &lt;a href="https://datasociety.net/pubs/ia/Discriminating_Tastes_Customer_Ratings_as_Vehicles_for_Bias.pdf"&gt;Rosenblat et al.&lt;/a&gt; (2016) discuss these challenges in the context of the US, suggesting solutions such as addressing employment misclassification or modifying pleading requirements to bring platform workers under the protection of the law.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Feminist principles point to structural shifts that are required to ensure robust protections for workers. Analysing algorithmic systems from a feminist lens indicates several points in the design at which interventions must be focused to ensure impact. The teams designing algorithms need to be made more diverse, along with integrating an explicit focus on assessing the impact of systems at the stage of design. Companies need to be more transparent with their data, and encourage independent audits of their systems. Corporate and government actors must be held to account to fix broken AI systems.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Ambika Tandon is a Senior Researcher at the &lt;a href="https://cis-india.org/"&gt;Centre for Internet &amp;amp; Society (CIS)&lt;/a&gt; in India, where she studies the intersections of gender and technology. She focuses on women’s work in the digital economy, and the impact of emerging technologies on social inequality. She is also interested in developing feminist methods for technology research. Ambika tweets at &lt;a href="https://twitter.com/AmbikaTandon"&gt;@AmbikaTandon&lt;/a&gt;.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The blog was originally &lt;a class="external-link" href="https://ethicalsource.dev/blog/ai-in-the-future-of-work/"&gt;published in the Organization for Ethical Source&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work'&gt;https://cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>ambika</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>CISRAW</dc:subject>
    
    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Future of Work</dc:subject>
    

   <dc:date>2021-12-07T01:51:42Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/events/roundtable-on-ai-and-manufacturing-and-services">
    <title>Roundtable on A.I. and Manufacturing and Services</title>
    <link>https://cis-india.org/internet-governance/events/roundtable-on-ai-and-manufacturing-and-services</link>
    <description>
        &lt;b&gt;The Centre for Internet and Society (CIS), Bangalore is organizing a roundtable on ‘A.I. and Manufacturing and Services’ on the 19th of January, 2018 from 2 to 5 pm at ‘The Energy and Resources Institute’ (TERI) Bangalore. The Roundtable seeks to discuss the various issues and challenges surrounding the implementation of AI and related technologies on manufacturing processes and services.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Since the Industrial Revolution machines have substituted human labour and helped industries save time and money. This was succeeded by the advent of computers and technology which helped in completing tasks with better speed and accuracy than the human brain. The emergence of machine-learning technology and artificial intelligence has now made machines capable of doing work that was earlier considered to be something that could only be done by humans. From the use of AI in understanding customer shopping trends to its use in making automobiles, AI is becoming more of a norm than an exception. The analytics of how customers shop is now helping companies forecast their manufacturing needs. The synergy of technology and machines i.e. smart manufacturing, not only changes manufacturing and shipping but also improves worker safety. Different forms of smart manufacturing are also starting to come up in India: Wipro and Infosys have launched AI platforms, and the Indian Institute of Science is developing a smart factory with support from Boeing Company and General Electric. Infosys has also released an AI platform, ‘Nia’, which is programmed to forecast revenue and understand customer behaviour.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In some cases, the use of machines to substitute the human workforce has brought about a sense of worry. Recent trends in factory hiring show that jobs are being lost to automated forms of labour, further evidenced by a report from the research firm HorsesforSources, which predicts that India is set to lose 640,000 low-skilled job positions to automation by the year 2021. The IT sector in India is also at risk from the use of AI. Reports have also found that the rising unemployment in the IT sector has led to increased pressure on labour regulators.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Although some studies state that the use of AI would create a market for people who would work alongside AI, FICCI and EY’s 2016 report on the future of jobs and its implications for Indian higher education suggests that one of the ways to combat the loss of jobs is reskilling and upskilling the labour force. India has taken the first step towards this by launching the National Skill Development Mission.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;From the use of neural networks to monitor steel plants to the packing and shipping of groceries, intelligent machines have begun disrupting traditional business models in the industry. However, these advancements raise questions around labour, ethics, liability, and machine-human cooperation. Dialogue and debate are needed to understand how AI is being used in manufacturing, its potential benefits and challenges, and a way forward that optimizes innovation and protects human rights.&lt;/p&gt;
&lt;h2 style="text-align: justify; "&gt;Roundtable Agenda&lt;/h2&gt;
&lt;p&gt;Friday 19th January | 2:00 p.m - 5:00 p.m.&lt;/p&gt;
&lt;p&gt;2:00 - 2:30 Introduction and setting the scene&lt;/p&gt;
&lt;p&gt;2:30 - 3:30 Discussion on the AI landscape in the manufacturing and services industry:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Manner and extent of integration of AI into manufacturing and services&lt;/li&gt;
&lt;li&gt;Relevant stakeholders and their roles in implementing AI in manufacturing and services&lt;/li&gt;
&lt;li&gt;Future of AI and related technologies in manufacturing and services&lt;/li&gt;
&lt;li&gt;Impact on work and labour&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;3:30 - 4:30 Discussion on challenges and solutions towards regulating AI in India:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Challenges faced in the conception and implementation of AI products and services, and the reasons for those challenges.&lt;/li&gt;
&lt;li&gt;Regulatory provisions for the implementation of AI in manufacturing and services under existing laws, and the need for reforms.&lt;/li&gt;
&lt;li&gt;Challenges posed by AI to existing policy and regulatory frameworks in the Indian as well as the global context, and possible solutions.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;4:30 - 5:00 Conclusion and way forward&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/events/roundtable-on-ai-and-manufacturing-and-services'&gt;https://cis-india.org/internet-governance/events/roundtable-on-ai-and-manufacturing-and-services&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Event</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2018-01-18T13:44:15Z</dc:date>
   <dc:type>Event</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/events/roundtable-on-ai-and-finance-in-india">
    <title>Roundtable on AI and Finance in India</title>
    <link>https://cis-india.org/internet-governance/events/roundtable-on-ai-and-finance-in-india</link>
    <description>
        &lt;b&gt;Centre for Internet &amp; Society (CIS) will hold a roundtable on artificial intelligence and finance in India on Wednesday, February 7, 2018 in association with HasGeek and the 50p Conference. The roundtable will take place from 2 p.m. to 5 p.m. at TERI (The Energy and Resources Institute) in Domlur, Bengaluru.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;We invite you all to participate in this roundtable to share and build knowledge about trajectories of AI deployment across sub-sectors of banking in India and the emergent regulatory and public policy concerns.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The objective of the roundtable is to bring together various actors active across the fields of artificial intelligence, machine learning, cognitive computing, financial technologies, and big data credit scoring and online lending, to discuss pressing public policy issues regarding the utilisation and implementation of AI in the banking and finance sectors of India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;These sectors currently find themselves at the early stages of AI adoption. Such technologies are being implemented to facilitate both front-end and back-end processes by a variety of players with the aim of improving the accessibility, customised user engagement, and quality of current financial services. Leading commercial banks in India have all been working to develop and deploy AI technologies either in house or in partnership with small and large-scale tech companies. Such initiatives have seen the deployment of numerous chatbots and humanoid robots for the purposes of customer service. More significant, however, is the use of such technology by banks and fintech actors to facilitate decision making behind the scenes, on a variety of financial issues including but not limited to credit-worthiness, fraud detection, and investments.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While these sectors are no strangers to the use of big data analytics and similar technologies in aiding with financial decision making and daily operations, the deployment of technologies such as machine learning and natural language processing is still very new. Due to the nascent nature of this phenomenon, little is known about the details of their implications for both producers and consumers. Furthermore, concerns regarding data ownership, liability, and consumer rights have all been raised in light of AI adoption. This roundtable will present us with an opportunity to discuss such issues and begin to fill this knowledge gap.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;For agenda and event brochure &lt;strong&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/ai-and-finance"&gt;click here&lt;/a&gt;. &lt;/strong&gt;For RSVP &lt;a class="external-link" href="https://docs.google.com/forms/d/e/1FAIpQLSd1QFN8a5R3FPPLklDR0XQb1izzGFWzWtAilI5-UNO4EApAFQ/viewform"&gt;click here&lt;/a&gt;. Read the &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/draft-roundtable-report-on-ai-and-banking"&gt;event report here&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/events/roundtable-on-ai-and-finance-in-india'&gt;https://cis-india.org/internet-governance/events/roundtable-on-ai-and-finance-in-india&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>saman</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Event</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2018-03-11T14:58:55Z</dc:date>
   <dc:type>Event</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective">
    <title>What is the problem with ‘Ethical AI’? An Indian Perspective</title>
    <link>https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective</link>
    <description>
        &lt;b&gt;On 22 May 2019, the OECD member countries adopted the OECD Council Recommendation on Artificial Intelligence. The Principles, meant to provide an “ethical framework” for governing Artificial Intelligence (AI), were the first set of guidelines signed by multiple governments, including non-OECD members: Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Arindrajit Basu and Pranav M.B. was &lt;a class="external-link" href="https://cyberbrics.info/what-is-the-problem-with-ethical-ai-an-indian-perspective/"&gt;published by cyberBRICS&lt;/a&gt; on July 17, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;This was followed by the &lt;a href="https://g20trade-digital.go.jp/dl/Ministerial_Statement_on_Trade_and_Digital_Economy.pdf" rel="noreferrer noopener" target="_blank"&gt;G20's human-centred AI Principles&lt;/a&gt;, adopted on June 9th. These are the latest in a slew of (&lt;a href="https://clinic.cyber.harvard.edu/2019/06/07/introducing-the-principled-artificial-intelligence-project/" rel="noreferrer noopener" target="_blank"&gt;at least 32!&lt;/a&gt;) public and private ‘Ethical AI’ initiatives that seek to use ethics to guide the development, deployment and use of AI in a variety of use cases. They were conceived as a response to a range of concerns around algorithmic decision-making, including discrimination, privacy, and transparency in the decision-making process.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In India, a noteworthy recent document that attempts to address these concerns is the &lt;a href="https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf" rel="noreferrer noopener" target="_blank"&gt;National Strategy for Artificial Intelligence&lt;/a&gt; published by the National Institution for Transforming India, also called &lt;em&gt;NITI Aayog&lt;/em&gt;, in June 2018. As the NITI Aayog Discussion paper acknowledges, India is the fastest growing economy with the second largest population in the world and has a significant stake in understanding and taking advantage of the AI revolution. For these reasons the goal pursued by the strategy is to establish the National Program on AI, with a view to guiding the research and development in new and emerging technologies, while addressing questions on ethics, privacy and security.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While such initiatives and policy measures are critical to promulgating discourse and focussing awareness on the broad socio-economic impacts of AI, we fear that they are dangerously conflating tenets of existing legal principles and frameworks, such as human rights and constitutional law, with ethical principles – thereby diluting the scope of the former. While we agree that ethics and law can co-exist, ‘Ethical AI’ principles are often drafted in a manner that frames them as voluntary positive obligations various actors have taken upon themselves, as opposed to legal codes with which they necessarily have to comply.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;To have optimal impact, ‘Ethical AI’ should serve as a decision-making framework only in specific instances when human rights and constitutional law do not provide a ready and available answer.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Vague and unactionable&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Conceptually, ‘Ethical AI’ is a vague set of principles that are often difficult to define objectively. From this perspective, academics like Brent Mittelstadt of the Oxford Internet Institute &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3391293" rel="noreferrer noopener" target="_blank"&gt;argue&lt;/a&gt; that, unlike in the field of medicine – where ethics has been used to design a professional code – ethics in AI suffers from four core flaws. First, developers lack a common aim or fiduciary duty to a consumer, which in the case of medicine is the health and well-being of the patient. Their primary duty lies with the company or institution that pays their bills, which often prevents them from realizing the extent of the moral obligation they owe to the consumer.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second is the lack of a professional history that could help clarify the contours of well-defined norms of ‘good behaviour.’ In medicine, ethical principles can be applied to specific contexts by considering what similarly placed medical practitioners did in analogous past scenarios. Given the relatively nascent emergence of AI solutions, similar professional codes are yet to develop.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Third is the absence of workable methods or sustained discourse on how these principles may be translated into practice. Fourth, and we believe most importantly, in addition to ethical codes, medicine is governed by a robust and stringent legal framework and strict legal and accountability mechanisms, which are absent in the case of ‘Ethical AI’. This absence gives both developers and policy-makers large room for manoeuvre.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, such focus on ethics may be a means of avoiding government regulation and the arm of the law. Indeed, due to its inherent flexibility and non-binding nature, ethics can be exploited as a piecemeal red herring solution to the problems posed by AI. Controllers of AI development are often profit-driven private entities, that gain reputational mileage by using the opportunity to extensively deliberate on broad ethical notions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Under the guise of meaningful ‘self-regulation’, several organisations publish internal ‘Ethical AI’ guidelines and principles, and &lt;a href="https://www.newstatesman.com/science-tech/technology/2019/06/how-big-tech-funds-debate-ai-ethics"&gt;fund ethics research&lt;/a&gt; across the globe. In doing so, they elude the shackles of binding obligation and deflect attempts at tangible regulation.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Comparing Law to Ethics&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;This is in contrast to the well-defined jurisprudence that human rights and constitutional law offer, which should serve as the edifice of data-driven decision making in any context.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the table below, we try to explain this point by looking at how three core fundamental rights enshrined both in our constitution and in human rights instruments across the globe – the right to privacy, the right to equality/against discrimination, and due process – are captured in three different sets of ‘Ethical AI’ frameworks: one inter-governmental (&lt;a href="https://www.oecd.org/going-digital/ai/principles/" rel="noreferrer noopener" target="_blank"&gt;OECD&lt;/a&gt;), one devised by a private sector actor (‘&lt;a href="https://ai.google/principles/" rel="noreferrer noopener" target="_blank"&gt;Google AI&lt;/a&gt;’), and one by our very own &lt;a href="https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf" rel="noreferrer noopener" target="_blank"&gt;NITI Aayog&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;img src="https://cyberbrics.info/wp-content/uploads/2019/07/image.png" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;With the exception of certain principles, most ‘Ethical AI’ principles are loosely worded, using phrases such as ‘seek to avoid’, ‘give opportunity for’, or ‘encourage’. A notable exception is NITI Aayog’s approach to protecting privacy in the context of AI. The document explicitly recommends the establishment of a national data protection framework, along with sectoral regulations that apply to specific contexts, taking international standards such as the GDPR as benchmarks. However, it fails to reference available constitutional standards when it discusses bias or explainability.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Several similar legal rules enshrined in legal provisions – outlined and elucidated through years of case law and academic discourse – can be utilised to underscore and guide AI principles. However, existing AI principles do not adequately articulate how a legal rule can actually be applied to various scenarios by multiple organisations.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We do not need a new “Law of Artificial Intelligence” to regulate this space. Judge Frank Easterbrook’s famous 1996 proclamation on the &lt;a href="https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?referer=&amp;amp;httpsredir=1&amp;amp;article=2147&amp;amp;context=journal_articles"&gt;‘Law of the Horse’&lt;/a&gt; through which he opposed the creation of a niche field of ‘cyberspace law’ comes to mind. He argued that a multitude of legal rules deal with ‘horses’, including the sale of horses, individuals kicked by horses, and with the licensing and racing of horses. Like with cyberspace, any attempt to arrive at a corpus of specialised ‘law of the horse’ would be shallow and ineffective.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Instead of fidgeting around for the next shiny regulatory tool, industry, practitioners, civil society and policy makers need to get back to the drawing board and think about applying the rich corpus of existing jurisprudence to AI governance.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;What is the role for ‘Ethical AI?’&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;What role can ‘Ethical AI’ then play in forging robust and equitable governance of Artificial Intelligence? As it does in all other societal avenues, ‘Ethical AI’ should serve as a framework for making legitimate algorithmic decisions in instances where the law might not have an answer. An example of such a scenario is the &lt;a href="https://globalnews.ca/news/4125382/google-pentagon-ai-project-maven/" rel="noreferrer noopener" target="_blank"&gt;Project Maven saga&lt;/a&gt; – where 3,000 Google employees signed a petition opposing Google’s involvement with a US Department of Defense project, claiming that Google should not be involved in “the business of war.” There is no law, international or domestic, that suggests that Project Maven – which was designed to study battlefield imagery using AI – was illegal. However, the debate at Google proceeded on ethical grounds and on the application of ‘Ethical AI’ principles to this context.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We realise the importance of social norms and mores in carving out any regulatory space. We also appreciate the role of ethics in framing these norms for responsible behaviour. However, discourse across civil society, academic, industry and government circles all across the globe needs to bring law back into the discussion as a framing device. Not doing so risks diluting the debate and potential progress to a set of broad, unactionable principles that can easily be manipulated for private gain at the cost of public welfare.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective'&gt;https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Arindrajit Basu and Pranav M.B.</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-21T14:57:08Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward">
    <title>Emerging AI technology in health care in India, health equity and justice: Critical reflections and charting out way forward</title>
    <link>https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward</link>
    <description>
        &lt;b&gt;On July 13, 2019, Radhika Radhakrishnan, participated in a roundtable discussion on "Emerging AI technology in health care in India, health equity and justice: Critical reflections and charting out way forward." The event was organized by HEaL (Health, Ethics, and Law Institute of Training, Research and Advocacy) of FMES (Forum for Medical Ethics Society) in collaboration with CPS (Centre for Policy Studies), Indian Institute of Technology-Bombay.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Radhika chaired a session on the ethics of AI in healthcare in India; her main submissions included the medicalization of and experimentation on women's bodies under a medical-industrial complex for the design of AI-based healthcare models, and FAT (Fairness, Accountability, Transparency) concerns with AI. She was also invited to draft some of this content into a paper submission to the &lt;a href="https://ijme.in/"&gt;Indian Journal of Medical Ethics&lt;/a&gt;, a peer-reviewed and indexed academic journal run by FMES.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward'&gt;https://cis-india.org/internet-governance/news/emerging-ai-technology-in-health-care-in-india-health-equity-and-justice-critical-reflections-and-charting-out-way-forward&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-07-21T15:47:27Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india">
    <title>Rethinking the intermediary liability regime in India</title>
    <link>https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india</link>
    <description>
&lt;b&gt;The article consolidates some of our broad thematic concerns with the draft amendments to the intermediary liability rules, published by MeitY last December.&lt;/b&gt;
        &lt;p&gt;The blog post by Torsha Sarkar was &lt;a class="external-link" href="https://cyberbrics.info/rethinking-the-intermediary-liability-regime-in-india/"&gt;published by CyberBRICS&lt;/a&gt; on August 12, 2019.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 style="text-align: justify; "&gt;Introduction&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;In December 2018, the Ministry of Electronics and Information Technology (“MeitY”) released the Intermediary Liability Guidelines (Amendment) Rules (“the Guidelines”), which would significantly alter the intermediary liability regime in the country. While the Guidelines have drawn a considerable amount of attention and criticism, from the government’s perspective the change has been overdue.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Indian government has been determined to overhaul the pre-existing safe harbour regime since last year. The draft &lt;a href="https://www.medianama.com/wp-content/uploads/Draft-National-E-commerce-Policy.pdf"&gt;version&lt;/a&gt; of the e-commerce policy, which was leaked last year, also hinted at similar plans. As the effects of the mass dissemination of disinformation, propaganda and hate speech around the world spill over into offline harms, governments have increasingly looked to enact interventionist laws that place more responsibility on intermediaries. India has not been an exception.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A major source of such harmful and illegal content in India is the popular communications app WhatsApp, despite the company’s enactment of several anti-spam measures over the past few years. Last year, rumours circulating on WhatsApp prompted a series of lynchings. In May, Reuters &lt;a href="https://in.reuters.com/article/india-election-socialmedia-whatsapp/in-india-election-a-14-software-tool-helps-overcome-whatsapp-controls-idINKCN1SL0PZ" rel="noreferrer noopener" target="_blank"&gt;reported&lt;/a&gt; that clones and software tools were available in the market at minimal cost, allowing politicians and other interested parties to bypass these measures and continue the trend of bulk messaging.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This series of incidents has made it clear that disinformation is a very real problem, and that the current regulatory framework is not enough to address it. The government’s response, accordingly, has been to introduce the Guidelines. This rationale also finds a place in its preliminary &lt;a href="https://www.meity.gov.in/comments-invited-draft-intermediary-rules" rel="noreferrer noopener" target="_blank"&gt;statement of reasons&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While enactment of such interventionist laws has triggered fresh rounds of debate on free speech and censorship, it would be wrong to say that such laws were completely one-sided, or uncalled for.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On one hand, automated amplification and online mass circulation of purposeful disinformation, propaganda, of terrorist attack videos, or of plain graphic content, are all problems that the government would concern itself with. On the other hand, several online companies (including &lt;a href="https://www.blog.google/outreach-initiatives/public-policy/oversight-frameworks-content-sharing-platforms/" rel="noreferrer noopener" target="_blank"&gt;Google&lt;/a&gt;) also seem to be in an uneasy agreement that simple self-regulation of content would not cut it. For better oversight, more engagement with both government and civil society members is needed.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In March this year, Mark Zuckerberg wrote an &lt;a href="https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html?utm_term=.4d177c66782f" rel="noreferrer noopener" target="_blank"&gt;op-ed&lt;/a&gt; for the Washington Post, calling for more government involvement in the process of content regulation on its platform. While it would be interesting to consider how Zuckerberg’s view aligns with those of similarly placed companies, it would nevertheless be correct to say that online intermediaries are under more pressure than ever to keep their platforms clean of content that is ‘illegal, harmful, obscene’. And this list only grows.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;That being said, criticism from several stakeholders is sharp and clear whenever such laws are enacted – be it the ambitious &lt;a href="https://www.ivir.nl/publicaties/download/NetzDG_Tworek_Leerssen_April_2019.pdf" rel="noreferrer noopener" target="_blank"&gt;NetzDG&lt;/a&gt;, aimed at combating Nazi propaganda, hate speech and fake news, or the controversial new European Copyright Directive, which has been welcomed by journalists but severely critiqued by online content creators and platforms as detrimental to user-generated content.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the backdrop of such conflicting interests on online content moderation, it would be useful to examine the Guidelines released by MeitY. In the first portion we would be looking at certain specific concerns existing within the rules, while in the second portion, we would be pushing the narrative further to see what an alternative regulatory framework may look like.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Before we jump to the crux of this discussion, one important disclosure must be made about the underlying ideology of this piece. It would be unrealistic to claim that the internet should be absolutely free from regulation. Swathes of content on child sexual abuse, or terrorist propaganda, or even the hordes of death and rape threats faced by women online are and should be concerns of a civil society. While that is certainly a strong driving force for regulation, this concern should not override the basic considerations for human rights (including freedom of expression). These ideas would be expanded a bit more in the upcoming sections.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Broad, thematic concerns with the Rules&lt;/h3&gt;
&lt;h3 style="text-align: justify; "&gt;A uniform mechanism of compliance&lt;/h3&gt;
&lt;h3 style="text-align: justify; "&gt;Timelines&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Rule 3(8) of the Guidelines mandates intermediaries, prompted by &lt;em&gt;a&lt;/em&gt; &lt;em&gt;court order or a government notification&lt;/em&gt;, to take down content relating to unlawful acts within 24 hours of such notification. In case they fail to do so, the safe harbour applicable to them under section 79 of the Information Technology Act (“the Act”) would cease to apply, and they would be liable. Prior to the amendment, this timeframe was 36 hours.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There is a visible lack of research rationalizing a 24-hour compliance timeline as the optimal framework for &lt;em&gt;all&lt;/em&gt; intermediaries, irrespective of the kind of services they provide or the sizes and resources available to them. As the Mozilla Foundation has &lt;a href="https://blog.mozilla.org/netpolicy/2018/07/11/sustainable-policy-solutions-for-illegal-content/" rel="noreferrer noopener" target="_blank"&gt;commented&lt;/a&gt;, regulation of illegal content online simply cannot take a one-size-fits-all approach, nor can &lt;a href="https://blog.mozilla.org/netpolicy/2019/04/10/uk_online-harms/" rel="noreferrer noopener" target="_blank"&gt;regulation be made&lt;/a&gt; with only the tech incumbents in mind. While platforms like YouTube can comfortably &lt;a href="https://www.bmjv.de/SharedDocs/Pressemitteilungen/DE/2017/03142017_Monitoring_SozialeNetzwerke.html" rel="noreferrer noopener" target="_blank"&gt;remove&lt;/a&gt; criminally prohibited content within a span of 24 hours, this can still place a large burden on smaller companies, who may not have the necessary resources to comply within this timeframe. A few unintended consequences would arise out of this situation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One, sanctions under the Act – which would include both organisational ramifications like website blocking (under section 69A of the Act) and individual liability – would affect smaller intermediaries more than bigger ones. A bigger intermediary like Facebook may be able to withstand a large fine for its failure to control, say, hate speech on its platform. That may not be true for a smaller online marketplace, or even a smaller social media site targeted at a very specific community. This compliance mechanism, accordingly, may simply strengthen the larger companies and eliminate competition from the smaller ones.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Two, intermediaries, fearing heavy criminal sanctions, would err on the side of caution. This means that the decisions involved in determining whether a piece of content is illegal would be quicker and less nuanced. It also means that legitimate speech would be at risk of censorship, and that intermediaries would pay &lt;a href="https://cis-india.org/internet-governance/intermediary-liability-in-india.pdf" rel="noreferrer noopener" target="_blank"&gt;less heed&lt;/a&gt; to the technical requirements or the correct legal procedures required for content takedown.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Utilization of ‘automated technology’&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Another place where the Guidelines assume that all intermediaries operating in India are on the same footing is Rule 3(9). This mandates these entities to proactively monitor their platforms for ‘unlawful content’. Aside from the unconstitutionality of this provision, it also assumes that all intermediaries have the requisite resources to actually set up and operate such a tool successfully. YouTube’s ContentID, which began in 2007, had already seen a whopping &lt;a href="https://www.blog.google/outreach-initiatives/public-policy/protecting-what-we-love-about-internet-our-efforts-stop-online-piracy/" rel="noreferrer noopener" target="_blank"&gt;100 million dollars of investment by 2018&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Funnily enough, ContentID is a tool dedicated exclusively to finding copyright violations of rights-holders, and even then it has proven to be far from &lt;a href="https://www.plagiarismtoday.com/2019/01/10/youtubes-copyright-insanity/" rel="noreferrer noopener" target="_blank"&gt;infallible&lt;/a&gt;. The Guidelines’ sweeping net of ‘unlawful’ content includes far more categories than mere violations of IP rights, and the framework assumes that intermediaries would be able to set up and run an automated tool that filters through &lt;em&gt;all&lt;/em&gt; these categories of ‘unlawful content’ at one go.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;The problems of AI&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Aside from the implementation-related concerns, there are also technical challenges related to Rule 3(9). Supervised learning systems (like the one envisaged under the Guidelines) use training data sets for proactive filtering. This means that if the system is taught that for ten instances of input A the output is B, then the eleventh time it sees A, it will give the output B. In the lingo of content filtering, the system would be taught, for example, that nudity is bad. The next time the system encounters nudity in a picture, it would automatically flag it as ‘bad’ and violating community standards.&lt;/p&gt;
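The A-to-B mapping described above can be sketched as a toy supervised learner in a few lines of Python (purely illustrative; the `train` and `predict` helpers are hypothetical and not part of any real moderation system):

```python
# Toy sketch of supervised content flagging (an illustrative assumption, not
# the Guidelines' actual mechanism): the model memorises the majority label
# seen for each input during training, then repeats that mapping on new inputs.
from collections import Counter

def train(examples):
    """examples: list of (feature, label) pairs; returns feature -> majority label."""
    votes = {}
    for feature, label in examples:
        votes.setdefault(feature, Counter())[label] += 1
    return {feat: counts.most_common(1)[0][0] for feat, counts in votes.items()}

def predict(model, feature, default="allowed"):
    # An input never seen in training falls back to a default: the system can
    # only reproduce patterns present in its training data, with no room for
    # the contextual cues a human moderator would weigh.
    return model.get(feature, default)

# Ten instances of input A labelled B: the eleventh A is also flagged B.
model = train([("A", "B")] * 10)
```

The `default` branch is where such a filter's blind spot lives: content unlike anything in the training set is judged by a fallback rule rather than by any understanding of context.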
&lt;p style="text-align: justify; "&gt;&lt;a href="https://www.theguardian.com/technology/2016/sep/08/facebook-mark-zuckerberg-napalm-girl-photo-vietnam-war" rel="noreferrer noopener" target="_blank"&gt;Except, that is not how it should work&lt;/a&gt;. For every post that is under the scrutiny of the platform operators, numerous nuances and contextual cues act as mitigating factors, none of which, at this point, would be&lt;a href="https://scholarship.law.nd.edu/cgi/viewcontent.cgi?referer=https://www.google.co.in/&amp;amp;httpsredir=1&amp;amp;article=1704&amp;amp;context=ndlr" rel="noreferrer noopener" target="_blank"&gt;understandable&lt;/a&gt; by a machine.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Additionally, the training data used to feed the system &lt;a href="https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf" rel="noreferrer noopener" target="_blank"&gt;can be biased&lt;/a&gt;. A self-driving car who is fed training data from only one region of the country would learn the customs and driving norms of that particular region, and not the patterns that apply across the intended purpose of driving throughout the country.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Lastly, it is not disputed that bias would be completely eliminated in case the content moderation was undertaken by a human. However, the difference between a human moderator and an automated one, would be that there would be a measure of accountability in the first one. The decision of the human moderator can be disputed, and the moderator would have a chance to explain his reasons for the removal. Artificial intelligence (“AI”) is identified by the algorithmic ‘&lt;a href="http://raley.english.ucsb.edu/wp-content/Engl800/Pasquale-blackbox.pdf" rel="noreferrer noopener" target="_blank"&gt;black box&lt;/a&gt;’ that processes inputs, and generates usable outputs. Implementing workable accountability standards for this system, including figuring out appeal and grievance redressal mechanisms in cases of dispute, are all problems that the regulator must concern itself with.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the absence of any clarity or revision, it seems unlikely that the provision would actually ever see full implementation. Neither would the intermediaries know what kind of ‘automated technology’ they are supposed to use for filtering ‘unlawful content’, nor would there be any incentives for them to actually deploy this system effectively for their platforms.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;What can be done?&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;First, more research is needed to understand the effect of compliance timeframes on the accuracy of content takedown. Several jurisdictions are operating now on different timeframes of compliance, and it would be a far more holistic regulation should the government consider the dialogue around each of them and see what it means for India.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Second, it might be useful to consider the concept of an independent regulator as an alternative and as a compromise between pure governmental regulation (which is more or less what the system is) or self-regulation (which the Guidelines, albeit problematically, also espouse through Rule 3(9)).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The &lt;a href="https://www.gov.uk/government/consultations/online-harms-white-paper" rel="noreferrer noopener" target="_blank"&gt;UK White Paper on Harms&lt;/a&gt;, a piece of important document in the system of liability overhaul, proposes an arms-length regulator who would be responsible for drafting codes of conduct for online companies and responsible for their enforcement. While the exact merits of the system is still up for debate, the concept of having a separate body to oversee, formulate and also possibly&lt;a href="https://medium.com/adventures-in-consumer-technology/regulating-social-media-a-policy-proposal-a2a25627c210" rel="noreferrer noopener" target="_blank"&gt;arbitrate&lt;/a&gt; disputes regarding content removal, is finding traction in several parallel developments.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;One of the Transatlantic Working Group Sessions seem to discuss this idea in terms of having an ‘&lt;a href="https://medium.com/whither-news/proposals-for-reasonable-technology-regulation-and-an-internet-court-58ac99bec420" rel="noreferrer noopener" target="_blank"&gt;internet court&lt;/a&gt;’ for illegal content regulation. This would have the noted advantage of a) formulating norms of online content in a transparent, public fashion, something previously done behind closed doors of either the government or the tech incumbents and b) having specially trained professionals who would be able to dispose of matters in an expeditious manner.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;India is not unfamiliar to the idea of specialized tribunals, or quasi-judicial bodies for dealing with specific challenges. In 2015, for example, the Government of India passed the Commercial Courts Act, by which specific courts were tasked to deal with matters of very large value. This is neither an isolated instance of the government choosing to create new bodies for dealing with a specific problem, nor would it be inimitable in the future.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There is no&lt;a href="https://www.thehindubusinessline.com/opinion/resurrecting-the-marketplace-of-ideas/article26313605.ece" rel="noreferrer noopener" target="_blank"&gt; silver bullet&lt;/a&gt; when it comes to moderation of content on the web. However, in light of these parallel convergence of ideas, the appeal of an independent regulatory system as a sane compromise between complete government control and &lt;em&gt;laissez-faire&lt;/em&gt;autonomy, is worth considering.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india'&gt;https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>torsha</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Intermediary Liability</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-08-16T01:49:47Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/unescap-google-ai-meeting">
    <title>UNESCAP Google AI Meeting</title>
    <link>https://cis-india.org/internet-governance/news/unescap-google-ai-meeting</link>
    <description>
        &lt;b&gt;Arindrajit was a panelist at the event on AI in public service delivery hosted by UNESCAP Bangkok on August 29, 2018. The event was co-organized by Economic and Social Commission for Asia and the Pacific and Google.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The discussion centered around the two questions (1) Is AI different from other technological advancements in the past and (2) Recommendations for policy-makers to enhance AI in Public Service Delivery.The other panelists were Dr. Urs Gasser (Berkman), Vidushi Marda ( Art.19), Malavika Jayaram (Digital  Asia Hub) and Jake Lucchi ( Google) The panel was a platform to discuss some of our findings in our case studies on healthcare and agriculture, which we will receive comments on and will get published in November.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/unescap-google-ai-meeting'&gt;https://cis-india.org/internet-governance/news/unescap-google-ai-meeting&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-09-20T15:47:42Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/confidentiality-of-communications-and-privacy-of-data-in-the-digital-age">
    <title>Confidentiality of Communications and Privacy of Data in the Digital Age</title>
    <link>https://cis-india.org/internet-governance/news/confidentiality-of-communications-and-privacy-of-data-in-the-digital-age</link>
    <description>
        &lt;b&gt;On September 25, 2018, Elonnai Hickok participated in a side event Confidentiality of Communications and Privacy of Data in the Digital Age organized by INCLO and Privacy International at the Human Rights Council 39th ordinary session. Elonnai spoke on artificial intelligence and privacy.&lt;/b&gt;
        
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/confidentiality-of-communications-and-privacy-of-data-in-the-digital-age'&gt;https://cis-india.org/internet-governance/news/confidentiality-of-communications-and-privacy-of-data-in-the-digital-age&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>praskrishna</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-10-28T06:02:07Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>




</rdf:RDF>
