<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">




    



<channel rdf:about="https://cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>https://cis-india.org</link>
  
  <description>
    
            These are the search results for the query, showing results 1 to 15.
        
  </description>
  
  
  
  
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/raw/practicing-feminist-principles"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/raw/cisxscholars-harsh-gupta-machine-learning-for-lawyers-and-lawmakers-20170629"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/policies-for-the-platform-economy"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/ai-in-healthcare"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines"/>
        
        
            <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad"/>
        
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework">
    <title>The AI Task Force Report - The first steps towards India’s AI framework </title>
    <link>https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework</link>
    <description>
        &lt;b&gt;The Task Force on Artificial Intelligence was established by the Ministry of Commerce and Industry to leverage AI for economic benefits, and provide policy recommendations on the deployment of AI for India.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The blog post was edited by Swagam Dasgupta. &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/ai-task-force-report.pdf"&gt;Download &lt;strong&gt;PDF&lt;/strong&gt; here&lt;/a&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;span style="text-align: justify; "&gt;The Task Force’s Report, released on March 21st 2018, is a result of the combined expertise of members from different sectors&lt;/span&gt;&lt;a name="_ftnref1"&gt;&lt;/a&gt;&lt;span style="text-align: justify; "&gt; and examines how AI will benefit India. It sheds light on the Task Force’s perception of AI, the sectors in which AI can be leveraged in India, the challenges endemic to India and certain ethical considerations. It concludes with a set of policy recommendations for the government to leverage AI for the next five years. While acknowledging AI as a social and economic problem solver,&lt;/span&gt;&lt;a name="_ftnref2"&gt;&lt;/a&gt;&lt;span style="text-align: justify; "&gt; the Report attempts to answer three policy questions:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are the areas where government should play a role?&lt;/li&gt;
&lt;li&gt;How can AI improve quality of life and solve problems at scale for Indian citizens?&lt;/li&gt;
&lt;li&gt;What are the sectors that can generate employment and growth by the use of AI technology?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style="text-align: justify; "&gt;This blog will look at how the Task Force answered these three policy questions. In doing so, it gives an overview of salient aspects and reflects on the strengths and weaknesses of the Report.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Sectors of Relevance and Challenges&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;To navigate the outlined questions, the Report looks at ten sectors that it refers to as ‘domains of relevance to India’. Furthermore, it examines the use of AI, along with its major challenges and possible solutions, for each sector. These sectors include: Manufacturing, FinTech, Agriculture, Healthcare, Technology for the Differently-abled, National Security, Environment, Public Utility Services, Retail and Customer Relationship, and Education.&lt;a name="_ftnref3"&gt;&lt;/a&gt; While these ten domains are part of the 16 domains of focus listed on the AITF’s web page,&lt;a name="_ftnref4"&gt;&lt;/a&gt; it would have been useful to know the basis on which these sectors were identified. A particular strength of the identified sectors is the consideration of technology for the differently abled, as well as the recognition of the development of AI systems in spoken and sign languages in the Indian context.&lt;a name="_ftnref5"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Some of the problems endemic to India that were recognized include infrastructural barriers, managing scale and innovation, and the collection, validation and distribution of data.&lt;/span&gt;&lt;a name="_ftnref6"&gt;&lt;/a&gt;&lt;span&gt; The Task Force also noted the lack of consumer awareness and the inability of technology providers to explain benefits to end users as further challenges.&lt;/span&gt;&lt;a name="_ftnref7"&gt;&lt;/a&gt;&lt;span&gt; The Task Force, by putting the onus on the individual, seems to hint that the impediment to the uptake of technology is the inability of individuals to understand the benefits of the technology, rather than aspects such as poor design, opacity, or misuse of data and insights. Furthermore, although the Report recognizes the challenges associated with data in India and highlights the importance of the quality and quantity of data, it overlooks the importance of data curation in creating reliable AI systems.&lt;/span&gt;&lt;a name="_ftnref8"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Although the Report examines challenges to AI in each sector, it does not cover all the challenges that need to be addressed. For example, the Report fails to acknowledge challenges such as the lack of appropriate certification systems for AI-driven health systems and technologies.&lt;a name="_ftnref9"&gt;&lt;/a&gt; In the manufacturing sector, the Report fails to highlight contextual challenges associated with the use of AI, such as the deployment of autonomous vehicles as compared to the use of industrial robots.&lt;a name="_ftnref10"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On the use of AI in retail, the Report, while examining consumer data and its respective regulatory policies, identified issues related to definition, discrimination, data breaches, digital products, safety awareness, and reporting standards.&lt;a name="_ftnref11"&gt;&lt;/a&gt; In this, the Report is limited in its understanding of what categories of data can lead to discrimination, and restricts mechanisms for transparency and accountability to data breaches. The Report could also have been more forward looking in its position on security, including security by design and security by default. Furthermore, these issues were noted only in the context of the retail sector and ideally should have been discussed across all sectors.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The challenges for utilizing AI for national security could have been examined beyond cost and capacity to include associated ethical and legal challenges such as the need for legal backing. The use of AI in national security demands clear accountability and oversight as it is a ground for legitimate state interference with fundamental rights such as privacy and freedom of expression. As such, there is a need for human rights impact assessments, as well as a need for such uses to be aligned with international human rights norms. Government initiatives that allow country wide surveillance and AI decisions based on such data should ideally be implemented only after a comprehensive privacy law is in place and India’s surveillance regime has been revisited.&lt;a name="_ftnref12"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Recognizing the potential of AI for the benefit of the differently abled is one of the key takeaways from this section of the Report. Furthermore, it also brings in the need for AI inclusivity. AI in natural language generation and translation systems have the potential to help the large number of youth that are disabled or deprived.&lt;a name="_ftnref13"&gt;&lt;/a&gt; Therefore, AI could have a large positive impact through inclusive growth and empowerment.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Although the Report examines each of the ten domains in an attempt to provide an insight into the role the government can play, there seems to be a lack of clarity in terms of the role that each department is playing and will play with respect to AI. Even the section which lays down the relevant ministries for each of the ten domains fails to include key ministries and departments. For example, the Report does not identify the Ministry of Education, nor does it list the Ministry of Law for national security. The Report could also have identified government departments which would be responsible for regulation and standardization, such as the Medical Council of India (healthcare), CII (manufacturing and retail), RBI (FinTech), etc. The Report also does not recognize other developments around AI emerging out of the government. For example, the Draft National Digital Communications Policy (published on May 1, 2018) seeks to empower the Department of Telecommunication to provide a roadmap for AI and robotics.&lt;a name="_ftnref14"&gt;&lt;/a&gt; Along similar lines, the Department of Defence Production also created a task force earlier this year to study the use of AI to accelerate military technology and economic growth.&lt;a name="_ftnref15"&gt;&lt;/a&gt; The government should look at building a cohesive AI government body, or clearly delineating the role of each ministry, in order to ensure harmonization going forward.&lt;/p&gt;
&lt;h3&gt;Areas in need of Government Intervention&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The Report also lists the grand challenges where government intervention is required. These include data collection and management and the need for widespread expertise contributing to research, innovation, and response. However, while highlighting the need for AI experts from diverse backgrounds, it fails to include experts from law and policy in the discussion.&lt;a name="_ftnref16"&gt;&lt;/a&gt; While identifying manufacturing, agriculture, healthcare and public utility services as areas where government intervention is needed, the Report failed to examine national security beyond noting it as an important domain for India, and did not treat it as a sector where government intervention is needed.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Participation in International Forums&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another relevant concern that the Report underscores is the scarce participation of Indian researchers, AI developers and the government in global discussions around AI. The Report states that although efforts were being made by Indian universities to increase their presence in international AI conferences, they were lagging behind other nations. On the subject of participation by the government, it recommends a regular presence in international AI policy forums, emphasising the need for India’s active participation in global conversations around AI and international rulemaking.&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Key Enablers to AI&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The Report, while analysing the key enablers for AI deployment in India, states that positive societal attitudes will be the driving force behind the proliferation of AI.&lt;a name="_ftnref17"&gt;&lt;/a&gt; However, relying on positive societal attitudes alone will not increase trust in AI; steps such as making the algorithms used by public bodies public and enacting a data protection law will be important in enabling trust beyond highlighting success stories.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Data and Data Marketplaces&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While the Report identifies data as a challenge where government intervention is needed, it also points to the Aadhaar ecosystem as an enabler. It states that Aadhaar will help in the proliferation of AI in three ways: first as a creator of jobs related to the collection and digitization of data, second as a collector of reliable data, and third as a repository of Indian data. However, since the very constitutionality of Aadhaar is yet to be determined by the Supreme Court,&lt;a name="_ftnref18"&gt;&lt;/a&gt; the task force should have used caution in identifying Aadhaar as a definitive solution, especially while stating that Aadhaar, along with the Supreme Court judgement, has created adequate frameworks to protect consumer data. Additionally, the Task Force should have recognized the various concerns that have been voiced about Aadhaar, particularly in the context of the case before the Supreme Court.&lt;a name="_ftnref19"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;This section also proposes the creation of a Digital Data Marketplace. A data marketplace needs to be framed carefully so as to not create a situation where privacy becomes a right available to only those who can afford it.&lt;/span&gt;&lt;a name="_ftnref20"&gt;&lt;/a&gt;&lt;span&gt; It is concerning that the discussion on data protection and privacy in the Report is limited to policies and guidelines for businesses and not centered around the individual.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;&lt;strong&gt;Innovation and Patents&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Report states that Indian startups working in the field of AI must be encouraged, and that industry collaborations and funding must be taken up as a policy measure. One way this could be achieved is by encouraging innovation, for instance by adding a commercial incentive to it, such as through IP rights. Although the Report calls for a stronger IP regime that protects and incentivises innovation, it remains ambiguous as to which aspect of IP rights (patents, trade secrets, or copyrights) needs significant changes.&lt;a name="_ftnref21"&gt;&lt;/a&gt; If the Report is specifically advocating for stronger patent rights in order to match those of China and the US, then it shows that the task force fails to understand the finer aspects of Indian patent law and the history behind India’s stance on patenting. This includes the fact that Indian patent law excludes algorithms from being patented. Indian patent law, by providing a higher threshold for patenting computer related inventions (CRIs), ensures that only truly innovative patents are granted.&lt;a name="_ftnref22"&gt;&lt;/a&gt; Given the controversies over CRIs that have dotted the Indian patent landscape&lt;a name="_ftnref23"&gt;&lt;/a&gt;, the task force would have done well to provide more clarity on the ‘how’ and ‘why’ of patenting in this sector, if that is their intent with this suggestion.&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Ethical AI framework&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Responsible AI&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In terms of establishing an ethical AI framework, the Task Force suggests measures such as making AI explainable, transparent, and auditable for biases. The Report notes that, with the increase in human-AI interaction, there is a need to set new standards for the deployment of AI as well as industrial standards for robots. However, the Report does not go into the details of how AI could cause further bias based on various identifiers such as gender and caste, as well as the myriad concerns around privacy and security. This is especially a concern given that the Report envisions widespread use of AI in all major sectors. In this way, the Report looks at data as both a challenge and an enabler, but fails to dedicate time to explaining the various ethical considerations behind the collection and use of data in the context of privacy, security and surveillance, or to account for unintended consequences. In laying out the ethical considerations associated with AI, the Report does not make a distinction between the use of AI by the public sector and the private sector. As the government is responsible for ensuring the rights of citizens and holds more power than the citizenry, the public sector needs to be more accountable in its use of AI. This is especially so in cases where AI is proposed to be used for sovereign functions such as national security.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Privacy and Data&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Report also recognises the significance of the implementation of the Aadhaar Act&lt;a name="_ftnref24"&gt;&lt;/a&gt;, the privacy judgement&lt;a name="_ftnref25"&gt;&lt;/a&gt; and the proposed data protection laws&lt;a name="_ftnref26"&gt;&lt;/a&gt;, on the development and use of AI for India. Yet, the Report does not seem to recognize the importance of a robust and multi-faceted privacy framework as it assumes that the Aadhaar Act and the Supreme Court Judgement on privacy and potential privacy law have already created a basis for safe and secure utilization and sharing of customer data.&lt;a name="_ftnref27"&gt;&lt;/a&gt; Although the Report has tried to be an expansive examination of various aspects of AI for India, it unfortunately has not looked in depth at the current issues and debates around AI privacy and ethics and makes policy recommendations without appearing to fully reflect on the implementation and potential impact of the same. Similar to the discussion paper by the Niti Aayog,&lt;a name="_ftnref28"&gt;&lt;/a&gt; this Report does not consider the emerging principles of data protection such as right to explanation and right to opt-out of automated processing, which directly relate to AI.&lt;a name="_ftnref29"&gt;&lt;/a&gt; Furthermore, there is a lack of discussion on issues such as data minimisation and purpose limitation which some big data and AI proponents argue against.&lt;a name="_ftnref30"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;&lt;strong&gt;Liability&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On the question of liability, the Report only states that specific liability mechanisms need to be worked out for certain categories of machines. The Report does not address the questions of liability that should be applicable to all AI systems, and on whom the duty of care lies, not only in case of robots but also in the case of automated decision making etc. Thus, there is a need for further thinking on mechanisms for determining liability and how these could apply to different types of AI (deep learning models and other machine learning models) and AI systems.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;AI and Employment &lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On the topic of jobs and employment, the Report states that AI will create more jobs than it displaces, as a result of an increase in the number of companies and avenues created by AI technologies. Additionally, the Report provides examples of jobs where AI could replace humans (autonomous drivers, industrial robots, etc.), but does not go as far as envisioning what jobs could be created directly from this replacement. Though the Report recognizes emerging forms of work such as crowdsourcing platforms like Mturk&lt;a name="_ftnref31"&gt;&lt;/a&gt;, it fails to examine the impact of such models of work on workers and traditional labour market structures and processes.&lt;a name="_ftnref32"&gt;&lt;/a&gt; Going forward, it will be important that the government and the private sector undertake the necessary steps to ensure that fair, protected, and fulfilling jobs are created simultaneously with the adoption of AI. This will include revisiting national and organizational skilling programmes, labour laws, social benefit schemes, and relevant economic policies, and exploring best practices with respect to the adoption and integration of AI in work.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Education and Re-skilling&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The task force emphasised the need for a change in the education curriculum as well as the need to reskill the labour force to ensure an AI ready future. This level of reskilling will be a massive effort, and a thorough review and audit of existing skilling programmes in India is needed before new skilling programmes are established and financed. The Report also clarifies that the statistics used were based on a study on the IT component of the industry, and that a similar study was required to analyse AI’s effect on the automation component.&lt;a name="_ftnref33"&gt;&lt;/a&gt; Going forward, there is the need for a comprehensive study of the labour intensive sectors and formal and informal sectors to develop evidence based policy responses.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Policy Recommendations &lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Task Force, in its policy recommendations, notes that the successful adoption of AI in India will depend on three factors: people, process and technology. However, it does not explain these three factors any further.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;National Artificial Intelligence Mission&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The most significant suggestion made in the Report is for the establishment of the National Artificial Intelligence Mission (N-AIM), a centralised nodal agency for coordinating and facilitating research and collaboration and providing economic impetus to AI startups.&lt;a name="_ftnref34"&gt;&lt;/a&gt; The mission, with a budget allocation of Rs 1,200 crore over five years, aims, among other things, to look at various ways to encourage AI research and deployment.&lt;a name="_ftnref35"&gt;&lt;/a&gt; Some of the suggestions include targeting and prototyping AI systems and setting up a generic AI test bed. These suggestions seem to draw inspiration from initiatives in other countries, such as the US DARPA Challenge&lt;a name="_ftnref36"&gt;&lt;/a&gt; and Japan’s sandbox for self-driving trucks.&lt;a name="_ftnref37"&gt;&lt;/a&gt; The establishment of N-AIM is a welcome step to encourage both AI research and development on a national scale. The availability of public funds will encourage more AI research and development.&lt;a name="_ftnref38"&gt;&lt;/a&gt; Additionally, government engagement in AI projects has thus far been fragmented,&lt;a name="_ftnref39"&gt;&lt;/a&gt; and a centralised body will presumably bring about better coordination and harmonization. Some of the initiatives, such as the Capture the Flag competition&lt;a name="_ftnref40"&gt;&lt;/a&gt; that centres on providing real datasets to catalyze innovation, will need to be implemented with appropriate safeguards in place.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Other recommendations&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There are other suggestions that are problematic — particularly that of funding “an inter-disciplinary large data integration center in pilot mode to develop an autonomous AI Machine that can work on multiple data streams in real time and provide relevant information and predictions to public across all domains.”&lt;a name="_ftnref41"&gt;&lt;/a&gt; Before such a project is developed and implemented there are a number of factors where legal clarity is required; a few being: data collection and use, accuracy and quality of the AI system. There is also a need to ensure that bias and discrimination have been accounted for and fairness, responsibility and liability have been defined with consideration that this will be a government driven AI system. Additionally, such systems should be transparent by design and should include redress mechanisms for potential harms that may arise. This can be through the presence of a human in the loop, or the existence of a kill switch. These should be addressed through ethical principles, standards, and regulatory frameworks.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The recommendations propose establishing operation standards for data storage and privacy, communication standards for autonomous systems, and standards to allow for interoperability between AI-based systems. A significant lacuna in this list is the development of safety, accuracy, and quality standards for AI algorithms and systems.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Similarly, although the proposed public private partnership model for research and startups is a good idea, this initiative should be undertaken only after questions such as the implications of liability, ownership of IP and data, and the exclusion of critical sectors are thought through.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Furthermore, the suggestion to ‘fund a national level survey on identification of cluster of clean annotated data necessary for building effective AI systems’&lt;a name="_ftnref42"&gt;&lt;/a&gt; needs to recognize the existing initiatives around open data or use this as a starting place. The Report does not clarify if this survey would involve identifying data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The inconspicuous release of the Report, as well as the lack of a call for public comments,&lt;a name="_ftnref43"&gt;&lt;/a&gt; means that the Report does not incorporate or reflect the sentiments of the public, or draw upon the expertise that exists in India on emerging technologies and the policies around them, which will have a pervasive and wide effect on society. The need for multi-stakeholder engagement and input cannot be overstated. Nonetheless, the Report of the Task Force is a welcome step in the movement towards a definitive AI policy. The task force has attempted to answer the three policy questions keeping people, process and technology in mind. However, it could have provided greater detail about these factors. The Report, which is meant for a wider audience, would have done well to provide greater detail, while also providing clarity on technical terms. On a definitional plane, a list of the technologies that the task force perceived as AI for this Report could also have helped keep it grounded on possible and plausible five-year recommendations.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Compared to the recent Niti Aayog Discussion Paper&lt;/span&gt;&lt;a name="_ftnref44"&gt;&lt;/a&gt;&lt;span&gt;, this Report misses out on a detailed explanation of AI and ethics; however, it does spend considerable time on education and the use of AI for the differently abled. Additionally, the Report’s statement on the democratization of development and equal access, as well as on assigning ownership and framing transparent rules for usage of the infrastructure, is a positive step towards making AI inclusive. Overall, the Report is a progressive step towards laying down India’s path forward in the field of Artificial Intelligence. The emphasis on India’s involvement in international rulemaking gives India an opportunity to be a leader of best practice in international forums by adopting forward-looking and human-rights-respecting practices. Whether India will also become a strong contender in the AI race, with policies favouring the development of socio-economically beneficial and ethical-AI-backed industries and services, is yet to be seen.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn1"&gt;&lt;/a&gt;&lt;span&gt; The Task Force consists of 18 members in total. Of these, 11 members are from the field of AI technology, both research and industry, three from the civil services, one from healthcare research, one with an intellectual property law background, and two from a finance background. The specializations of the members are not limited to one area, as the members have experience or education in various areas relevant to AI. &lt;/span&gt;&lt;a href="https://www.aitf.org.in/"&gt;https://www.aitf.org.in//&lt;/a&gt;&lt;span&gt; There is a notable lack of members from civil society. It may also be noted that only 2 of the 18 members are women.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn2"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 1,&lt;span&gt;http://dipp.nic.in/sites/default/files/Report_of_Task_Force_on_ArtificialIntelligence_20March2018_2.pdf&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn3"&gt;&lt;/a&gt; ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn4"&gt;&lt;/a&gt; The Artificial Intelligence Task Force https://www.aitf.org.in/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn5"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 8&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn6"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 9,10.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn7"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 9&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn8"&gt;&lt;/a&gt; ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn9"&gt;&lt;/a&gt; Artificial Intelligence in the Healthcare Industry in India https://cis-india.org/internet-governance/files/ai-and-healtchare-report&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn10"&gt;&lt;/a&gt;Artificial Intelligence in the Manufacturing and Services Sector https://cis-india.org/internet-governance/files/AIManufacturingandServices_Report   _02.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn11"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 21.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn12"&gt;&lt;/a&gt; Submission to the Committee of Experts on a Data Protection Framework for India, Centre for Internet and Society https://cis-india.org/internet-governance/files/data-protection-submission&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn13"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 22&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn14"&gt;&lt;/a&gt; Draft National Digital Communications Policy-2018, http://www.dot.gov.in/relatedlinks/draft-national-digital-communications-policy-2018&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn15"&gt;&lt;/a&gt; Task force set up to study AI application in military,https://indianexpress.com/article/technology/tech-news-technology/task-force-set-up-to-study-ai-application-in-military-5049568/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn16"&gt;&lt;/a&gt;It is not just technical experts that are needed; ethical and legal experts, as well as domain experts, need to be part of the decision-making process.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn17"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 31&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn18"&gt;&lt;/a&gt;Constitutional validity of Aadhaar: the arguments in Supreme Court so far, http://www.thehindu.com/news/national/constitutional-validity-of-aadhaar-the-arguments-in-supreme-court-so-far/article22752084.ece&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn19"&gt;&lt;/a&gt; ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn20"&gt;&lt;/a&gt; CIS Submission to TRAI Consultation on Free Data http://trai.gov.in/Comments_FreeData/Companies_n_Organizations/Center_For_Internet_and_Society.pdf&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn21"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 30&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn22"&gt;&lt;/a&gt; Section 3(k) of the patent act describes that a mere mathematical or business method or a computer programme or algorithm cannot be patented.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn23"&gt;&lt;/a&gt;Patent Office Reboots CRI Guidelines Yet Again: Removes “novel hardware” Requirement&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;https://spicyip.com/2017/07/patent-office-reboots-cri-guidelines-yet-again-removes-novel-hardware-requirement.html&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn24"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 37&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn25"&gt;&lt;/a&gt;The Report on the Artificial Intelligence Task Force, Pg. 7&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn26"&gt;&lt;/a&gt; ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn27"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 8&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn28"&gt;&lt;/a&gt; National Strategy for Artificial Intelligence: &lt;a href="http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf"&gt;http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn29"&gt;&lt;/a&gt; Meaningful information and the right to explanation, Andrew D Selbst and Julia Powles, International Data Privacy Law, Volume 7, Issue 4, 1 November 2017, Pages 233–242&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn30"&gt;&lt;/a&gt; The Principle of Purpose Limitation and Big Data, https://www.researchgate.net/publication/319467399_The_Principle_of_Purpose_Limitation_and_Big_Data&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn31"&gt;&lt;/a&gt; M-Turk https://www.mturk.com/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn32"&gt;&lt;/a&gt; For example, a lower threshold of minimum wages, no job security, etc. https://blogs.scientificamerican.com/guilty-planet/httpblogsscientificamericancomguilty-planet20110707the-pros-cons-of-amazon-mechanical-turk-for-scientific-surveys/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn33"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 41&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn34"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 46-47&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn35"&gt;&lt;/a&gt; ibid.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn36"&gt;&lt;/a&gt; The DARPA Robotics Challenge https://www.darpa.mil/program/darpa-robotics-challenge&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn37"&gt;&lt;/a&gt;Japan may set regulatory sandboxes to test drones and self driving vehicles http://techwireasia.com/2017/10/japan-may-set-regulatory-sandboxes-test-drones-self-driving-vehicles/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn38"&gt;&lt;/a&gt; Mariana Mazzucato in her 2013 book The Entrepreneurial State, argued that it was the government that drives technological innovation. In her book she stated that high-risk discovery and development were made possible by government spending, which the private enterprises capitalised once the difficult work was done.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn39"&gt;&lt;/a&gt;&lt;a href="https://tech.economictimes.indiatimes.com/news/technology/govt-of-karnataka-launches-centre-of-excellence-for-data-science-and-artificial-intelligence/61689977"&gt;https://tech.economictimes.indiatimes.com/news/technology/govt-of-karnataka-launches-centre-of-excellence-for-data-science-and-artificial-intelligence/61689977&lt;/a&gt;,https://analyticsindiamag.com/amaravati-world-centre-for-ai-data/&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn40"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 47&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn41"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 49&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn42"&gt;&lt;/a&gt; The Report on the Artificial Intelligence Task Force, Pg. 47&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn43"&gt;&lt;/a&gt; The AI task force website has a provision for public comments although it is only for the vision and mission and the domains mentioned in the website.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;a name="_ftn44"&gt;&lt;/a&gt;National Strategy for Artificial Intelligence: &lt;a href="http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf"&gt;http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework'&gt;https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Elonnai Hickok, Shweta Mohandas and Swaraj Paul Barooah</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-06-27T14:32:56Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/raw/practicing-feminist-principles">
    <title>Practicing Feminist Principles</title>
    <link>https://cis-india.org/raw/practicing-feminist-principles</link>
    <description>
        &lt;b&gt;AI can serve to challenge social inequality and dismantle structures of power.&lt;/b&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Artificial intelligence systems have been heralded as a tool to purge our systems of social biases, opinions, and behaviour, and produce ‘hard objectivity’. On the contrary, it has become evident that AI systems can sharpen inequalities and biases by hard-coding them. If left unattended, automated decision-making can be dangerous and dystopian.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;However, when appropriated by feminists, AI can serve to challenge social inequality and dismantle structures of power. There are many routes to such appropriation – resisting authoritarian uses through movement-building and creating our own alternative systems that harness the strength of AI towards achieving social change.&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Feminist principles can be a handy framework to understand and transform the impact of AI systems. Key principles include reflexivity, participation, intersectionality, and working towards structural change.&lt;/strong&gt; When operationalised, these principles can be used to enhance the capacities of local actors and institutions working towards developmental goals. They can also be used to theoretically ground collective action against the use of AI systems by institutions of power.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Reflexivity&lt;/strong&gt; in the design and implementation of AI would imply a check on the privilege and power, or lack thereof, of the various stakeholders involved in an ecosystem. By being reflexive, designers can take steps to account for power hierarchies in the process of design. A popular example of the impact of power differentials is in national statistics. Collected largely by male surveyors speaking to male heads of households, national statistics can often undervalue or misrepresent women’s labour and health. See Data2x. “&lt;a class="external-link" href="https://www.data4sdgs.org/sites/default/files/2017-09/Gender%20Data%20-%20Data4SDGs%20Toolbox%20Module.pdf"&gt;Gender Data: Sources, Gaps, and Measurement Opportunities&lt;/a&gt;,” March 2017 and Statistics Division. “Gender, Statistics and Gender Indicators Developing a Regional Core Set of Gender Statistics and Indicators in Asia and the Pacific.” &lt;a class="external-link" href="https://www.unescap.org/sites/default/files/Framework-and-Indicator-set.pdf"&gt;United Nations Economic and Social Commission for Asia and the Pacific, 2013&lt;/a&gt;. &lt;span&gt;AI systems would need to be reflexive of such gaps and plan steps to mitigate them.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Participation&lt;/strong&gt; as a principle focuses on the process. A participatory process would account for the perspectives and lived experiences of various stakeholders, including those most impacted by its deployment. &lt;strong&gt;In the health ecosystem, for instance, this would include policymakers, public and private healthcare providers, frontline workers, and patients. A health information system with a bottom-up design would account for metrics of success determined by not just high-level organisations such as the World Health Organisation and national governments, but also by providers and frontline workers&lt;/strong&gt;. Among other benefits, participation in designing AI systems also leads to buy-in and ownership of the technology right at the outset, promoting widespread adoption.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Intersectionality&lt;/strong&gt; calls for addressing the social difference in the datasets, design, and deployment of AI. &lt;strong&gt;Research across fields has shown the perpetuation of inequality based on gender, income, race, and other characteristics through AI that is based on biased datasets.&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The most critical principle is to ensure that AI systems are working to challenge inequality, including inequality perpetrated by patriarchal, racist, and capitalist systems. Aligning with feminist objectives means that systems whose objectives do not align with feminist goals – such as those that enhance state capacities to surveil and police – would immediately be excluded. Systems that are designed to exclude and oppress will not work to further feminist goals, even if they integrate other progressive elements such as intersectional datasets or dynamic consent architecture (which would allow users to opt in and out easily).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We must work towards decreasing social inequality and achieving egalitarian outcomes in and through the practice of AI. Thus, while explicitly feminist projects such as those that produce better datasets or advocate for participatory mechanisms are of course practicing this principle, I would argue that it is also practiced by any project that furthers feminist goals. Take for example AI projects that aim to reduce hate speech and misinformation online. Given that women and other marginalised groups are often at the receiving end of violence, such work can be classified as feminist even if it doesn’t actively target gender-based violence.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;All technology is embedded in social relations. Practicing feminist principles in the design of AI only serves to account for these social relations and design better, more robust systems. &lt;strong&gt;Feminist practitioners can mobilise these to ensure a future of AI with inclusive, community-owned, participatory systems, combined with collective challenges to systems of domination.&lt;/strong&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h3&gt;References&lt;/h3&gt;
&lt;p&gt;Haraway, Donna. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” Feminist Studies 14, no. 3 (1988): 575–99. https://doi.org/10.2307/3178066.&lt;/p&gt;
&lt;p&gt;Link to the original article &lt;a class="external-link" href="https://feministai.pubpub.org/pub/practicing-feminist-principles/release/1?readingCollection=c218d365"&gt;here&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/raw/practicing-feminist-principles'&gt;https://cis-india.org/raw/practicing-feminist-principles&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>ambika</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Gender, Welfare, and Privacy</dc:subject>
    
    
        <dc:subject>CISRAW</dc:subject>
    
    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2021-12-07T00:54:54Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/raw/cisxscholars-harsh-gupta-machine-learning-for-lawyers-and-lawmakers-20170629">
    <title>CISxScholars Delhi - Harsh Gupta - FAT ML for Lawyers and Lawmakers (June 29, 5:30 pm)</title>
    <link>https://cis-india.org/raw/cisxscholars-harsh-gupta-machine-learning-for-lawyers-and-lawmakers-20170629</link>
    <description>
&lt;b&gt;We are proud to announce that Harsh Gupta will discuss "FAT ML (Fairness, Accountability, and Transparency in Machine Learning) for Lawyers and Lawmakers" at the CIS office in Delhi on Thursday, June 29, at 5:30 pm. This will be a two-and-a-half-hour session: beginning with a 45-minute talk, followed by a 15-minute break, another 45-minute talk, and then a discussion session. Please RSVP if you are joining us: &lt;raw@cis-india.org&gt;. &lt;/b&gt;
        
&lt;p&gt;&lt;em&gt;CISxScholars are informal events organised by CIS for presentation, discussion, and exchange of academic research and policy analysis.&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h3&gt;&lt;strong&gt;FAT ML (Fairness, Accountability, and Transparency in Machine Learning) for Lawyers and Lawmakers&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;From tagging people in photos to determining the risk of loan defaults, the use of data-based tools is affecting more and more areas of our lives. In some areas there have been very successful applications of such tools; in other areas they have been found not only to reflect the existing bias and discrimination found in today's society but also to exaggerate it.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Harsh Gupta&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Harsh Gupta is a recent graduate from IIT Kharagpur with a B.Sc and M.Sc in Mathematics and Computing and will be joining JPMorgan Chase as a data scientist. He completed his master's thesis on "Discrimination Aware Machine Learning". He was also an intern at the Centre for Internet and Society during the summer of 2016.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/raw/cisxscholars-harsh-gupta-machine-learning-for-lawyers-and-lawmakers-20170629'&gt;https://cis-india.org/raw/cisxscholars-harsh-gupta-machine-learning-for-lawyers-and-lawmakers-20170629&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>sumandro</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>FAT ML</dc:subject>
    
    
        <dc:subject>CISxScholars</dc:subject>
    
    
        <dc:subject>Big Data</dc:subject>
    
    
        <dc:subject>Machine Learning</dc:subject>
    
    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>Event</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2017-06-27T09:16:48Z</dc:date>
   <dc:type>Event</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/policies-for-the-platform-economy">
    <title>Policies for the Platform Economy</title>
    <link>https://cis-india.org/internet-governance/news/policies-for-the-platform-economy</link>
    <description>
        &lt;b&gt;Anubha Sinha and Amber Sinha will be panelists in this event being organized by IT for Change at India Habitat  Centre in New Delhi on August 30, 2019. &lt;/b&gt;
        &lt;p&gt;The agenda for the event &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/agenda-for-policies-for-the-platform-economy"&gt;is here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/policies-for-the-platform-economy'&gt;https://cis-india.org/internet-governance/news/policies-for-the-platform-economy&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-08-27T00:19:26Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/ai-in-healthcare">
    <title>AI in Healthcare</title>
    <link>https://cis-india.org/internet-governance/news/ai-in-healthcare</link>
    <description>
        &lt;b&gt;The Center for Information Technology and Public Policy (CITAPP) and the International Institute of Information Technology Bangalore (IIITB) invited Radhika Radhakrishnan for a talk at IIIT-Bangalore on September 13, 2019. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;In her talk, she  critically questioned the dominant narrative of “AI for social good” that has been widely adopted by various stakeholders in India (including the private sector, non-profits, and the Indian State) from a feminist standpoint. Specific to healthcare in India, such a narrative has been employed towards solving development challenges (such as a shortage of medical practitioners in remote regions of the country) through the introduction of AI applications targeted towards the sick-poor. Through her research and fieldwork, she analysed the layers of expropriation and experimentation that come into play when AI technologies become a method of using 'diverse' bodies and medical records of the sick-poor as ‘data’ to train proprietary AI algorithms at a low cost in the absence of effective State regulatory mechanisms. She argued that structural challenges (such as lack of incentives for medical practitioners to join public healthcare) get reframed into opportunities to substitute labour (people) by capital (technology) through innovation of “spectacular technologies” such as AI. Throughout the talk, she also highlighted the methodologies she used to conduct this research.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/ai-in-healthcare'&gt;https://cis-india.org/internet-governance/news/ai-in-healthcare&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Industry 4.0</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-09-19T16:15:24Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today">
    <title>Talks at National University of Juridical Sciences Today</title>
    <link>https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today</link>
    <description>
        &lt;b&gt;Arindrajit Basu delivered two lectures at the National University of Juridical Sciences on September 18, 2019. &lt;/b&gt;
&lt;p style="text-align: justify; "&gt;The first one was part of a symposium being conducted by the soon-to-be-set-up Intellectual Property and Technology Law Centre. I spoke on "Conceptualising India's Digital Policy Vision". The other speaker today was Mr. Supratim Chakraborty (Partner, Khaitan&amp;amp;Co.). Tomorrow's speakers are Prof. Mahendra Kumar Bhandan and Nikhil Narendran (Partner, Trilegal).&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Abstract&lt;/b&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The past year has seen vigorous activity on the domestic data governance policy front in India. Across key issues including intermediary liability, data localisation and e-commerce, the government has rolled out a patchwork of regulatory policies that has resulted in battle lines being drawn by governments, industry and civil society actors both in India and across the globe. The Data Protection Bill is set to be tabled in the next session of Parliament amidst supposed disagreement among policy-makers on key provisions, including data localization. The draft e-commerce policy and Chapter 4 of the Economic Survey refer to the concepts of ‘community data’ and ‘data as public good’ respectively. Artificial Intelligence is also the new buzzword among policy-making circles and industry players alike.&lt;br /&gt;&lt;br /&gt;The implementation of each of these concepts has important implications for individual privacy, the monetisation of data by (foreign) tech companies, and the harnessing of, as the e-commerce policy puts it, India’s data for India’s development. Meanwhile, at international forums such as the G20, India has partnered with its BRICS allies to emphasize the notion of ‘data sovereignty’, or the right of each country to govern data within its jurisdiction without external interference.&lt;br /&gt;In his talk, Basu unpacked each of these policies and followed up with a discussion on what these developments meant for Indian citizens and for India’s role in the multilateral global order.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second one was on 'Constitutionalizing Artificial Intelligence' conducted by the Constitutional Law Society. Here, I drew from some preliminary findings from a paper I am working on with Elonnai and Amber.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;b&gt;Abstract&lt;/b&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The use of big data and algorithmic decision-making  has been touted world over as a means of augmenting human capacities, removing bureaucratic fetters and benefiting society. Yet, with concerns arising around bias, fairness and a lack of algorithmic accountability, an entirely new domain of discourse on data justice has emerged - underscoring the idea that algorithms not only have the potential to exacerbate entrenched structural inequality but could also create and modulate new forms of injustice for the vulnerable sections of society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;There is a need for a reflexive turn in the debate on data justice that adequately considers the broader narrative and entrenched inequality in the ecosystem. &lt;/span&gt;&lt;span&gt;Transformative constitutionalism is a new brand of scholarship in comparative constitutional law which celebrates the crucial role of the state and the judiciary in bringing about emancipatory change and rooting out structural inequality.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Originally conceptualized as a Global South concept designed as a counter-model to the individual rights-driven model of Northern Constitutions, scholars have now identified emancipatory provisions in several western constitutions such as Germany. India’s constitution is one such example. The origins of constitutional order in India were designed to “bring the alien and powerful machine like that of the state under the control of human will” and to eliminate the inequality of “status, facilities and opportunities.” &lt;br /&gt;&lt;br /&gt;What is the relevance of India's constitutional ethos in the regulation of modern day data driven decision-making? How can policy-makers use constitutional tenets to mitigate structural injustice and transform the bearings of 21st century Indian society?&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today'&gt;https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Industry 4.0</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-09-20T14:45:35Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision">
    <title>We need a better AI vision</title>
    <link>https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision</link>
    <description>
        &lt;b&gt;Artificial intelligence conjures up a wondrous world of autonomous processes but dystopia is inevitable unless rights and privacy are protected.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The blog post by Arindrajit Basu was published by&lt;a class="external-link" href="https://fountainink.in/essay/we-need-a-better-ai-vision-"&gt; Fountainink&lt;/a&gt; on October 12, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;he dawn of Artificial Intelligence (AI) has policy-makers across the globe excited. In India, it is seen as a tool to overleap structural hurdles and better understand a range of organisational and management processes while improving the implementation of several government tasks. Notwithstanding the apparent enthusiasm in the government and private sectors, an adequate technological, infrastructural, and financial capacity to develop these models at scale is still in the works.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A number of policy documents with direct or indirect references to India’s AI future—to be powered by vast troves of data—have been released in the past year and a half. These include the National Strategy for Artificial Intelligence (which I will refer to as National Strategy) authored by NITI Aayog, the AI Taskforce Report, Chapter 4 of the Economic Survey, the Draft e-Commerce Bill and the Srikrishna Committee Report.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While they extol the virtues of data-driven analytics, references to the preservation or augmentation of India’s constitutional ethos through AI has been limited though it is crucial for safeguarding the rights and liberties of citizens while paving the way for the alleviation of societal oppression.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In this essay, I outline the variety of AI use cases that are in the works. I then highlight India’s AI vision by culling the relevant aspects of policy instruments that impact the AI ecosystem and identify lacunae that can be rectified. Finally, I attempt to “constitutionalise AI policy” by grounding it in a framework of constitutional rights that guarantee protection to the most vulnerable sections of society.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;In the manufacturing industry, AI adoption is not uniform across all sectors. But there has been a notable transformation in electronics, heavy electricals and automobiles.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;It is crucial to note that these cases, still emerging in India, have been implemented at scale in other countries such as the United Kingdom, United States and China. Projects were rolled out to the detriment of ethical and legal considerations. Hindsight should make the Indian policy ecosystem much wiser. By closely studying the research produced in these diverse contexts, Indian policy-makers should try to find ways around the ethical and legal challenges that cropped up elsewhere and devise policy solutions that mitigate the concerns raised.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;B&lt;span&gt;efore anything else we need to define AI—an endeavour fraught with multiple contestations. My colleagues and I at the Centre for Internet &amp;amp; Society ducked this hurdle when conducting our research by adopting a function-based approach. An AI system (as opposed to one that automates routine, cognitive or non-cognitive tasks) is a dynamic learning system that allows for the delegation of some level of human decision-making to the system. This definition allows us to capture some of the unique challenges and prospects that stem from the use of AI.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The research I contributed to at CIS identified key trends in the use of AI across India. In healthcare, it is used for descriptive and predictive purposes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;For example, the Manipal Group of Hospitals tied up with IBM’s Watson for Oncology to aid doctors in the diagnosis and treatment of seven types of cancer. It is also being used for analytical or diagnostic services. Niramai Health Analytix uses AI to detect early stage breast cancer and Adveniot Tecnosys detects tuberculosis through chest X-rays and acute infections using ultrasound images. In the manufacturing industry, AI adoption is not uniform across all sectors. But there has been a notable transformation in the electronics, heavy electricals and automobiles sector gradually adopting and integrating AI solutions into their products and processes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It is also used in the burgeoning online lending segment in order to source credit score data. As many Indians have no credit scores, AI is used to aggregate data and generate scores for more than 80 per cent of the population who have no credit scores. This includes Credit Vidya, a Hyderabad-based data underwriting start-up that provides a credit score to first time loan-seekers and feeds this information to big players such as ICICI Bank and HDFC Bank, among others. It is also used by players such as Mastercard for fraud detection and risk management. In the finance world, companies such as Trade Rays are being used to provide user-friendly algorithmic trading services.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;AI is also being increasingly used in the education sector for providing services to students such as decision-making assistance and also for student-progress monitoring.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;The next big development is in law enforcement. Predictive policing is making great strides in various states, including Delhi, Punjab, Uttar Pradesh and Maharashtra. A brainchild of the Los Angeles Police Department, predictive policing is the use of analytical techniques such as Machine Learning to identify probable targets for intervention to prevent crime or to solve past crime through statistical predictions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Conventional approaches to predictive policing start with the mapping of locations where crimes are concentrated (hot spots) by using algorithms to analyse aggregated data sets. Police in Uttar Pradesh and Delhi have partnered with the Indian Space Research Organisation (ISRO) in a Memorandum of Understanding to allow ISRO’s Advanced Data Processing Research Institute to map, visualise and compile reports about crime-related incidents.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There are aggressive developments also on the facial recognition front. Punjab Police, in association with Gurugram-based start-up Staqu has started implementing the Punjab Artificial Intelligence System (PAIS) which uses digitised criminal records and automated facial recognition to retrieve information on the suspected criminal. At the national level, on June 28, the National Crime Records Bureau (NCRB) called for tenders to implement a centralised Automated Facial Recognition System (AFRS), defining the scope of work in broad terms as the “supply, installation and commissioning of hardware and software at NCRB.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;AI is also being increasingly used in the education sector for providing services to students such as decision-making assistance and also for student-progress monitoring. The Andhra Pradesh government had started collecting information from a range of databases and processes the information through Microsoft’s Machine Learning Platform to monitor children and devote student focussed attention on identifying and curbing school drop-outs.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In Andhra Pradesh, Microsoft collaborated with the International Crop Institute for Semi-Arid Tropics (ICRISAT) to develop an AI Sowing App powered by Microsoft’s Cortana Intelligence Suite. It aggregated data using Machine Learning and sent advisories to farmers regarding optimal dates to sow. This was done via text messages on feature phones after ground research revealed that not many farmers owned or were able to use smart phones. The NITI Aayog AI Strategy specifically cited this use case and reported that this resulted in a 10-30 per cent increase in crop yield. The government of Karnataka has entered into a similar arrangement with Microsoft.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Finally, in the defence sector, our research found enthusiasm for AI in intelligence, surveillance and reconnaissance (ISR) functions, cyber defence, robot soldiers, risk terrain analysis and moving towards autonomous weapons systems. These projects are being developed by the Defence Research and Development Organisation but the level of trust and support in AI-driven processes reposed by the wings of the armed forces is yet to be publicly clarified. India also had the privilege of leading the global debate on Lethal Autonomous Weapons Systems (LAWS) with Amandeep Singh Gill chairing the United Nations Group of Governmental Experts (UN-GGE) on the issue. However, ‘lethal’ autonomous weapons systems at this stage appear to be a speck in the distant horizon.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A&lt;span&gt;long with the range of use cases described above, a patchwork of policy imperatives is emerging to support this ecosystem. The umbrella document is the National Strategy for Artificial Intelligence published by the NITI Aayog in June 2018. Despite certain lacunae in its scope, the existence of a cohesive and robust document that lends a semblance of certainty and predictability to a rapidly emerging sphere is in itself a boon. The document focuses on how India can leverage AI for both economic growth and social inclusion. The contents of the document can be divided into a few themes, many of which have also found their way into multiple other instruments.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;NITI Aayog provides over 30 policy recommendations on investment in scientific research, reskilling, training and enabling the speedy adoption of AI across value chains. The flagship research initiative is a two-tiered endeavour to boost AI research in India. First, new centres of research excellence (COREs) will develop fundamental research. The COREs will act as feeders for international centres for transformational AI which will focus on creating AI-based applications across sectors.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/AIinCountries.jpg/@@images/16b4af34-cb6d-423c-be35-e45a60d501cf.jpeg" alt="AI in Countries" class="image-inline" title="AI in Countries" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This is an impressive theoretical objective but questions surrounding implementation and structures of operation remain to be answered. China has not only conceptualised an ecosystem but through the Three Year Action Plan to Promote the Development of New Generation Artificial Intelligence Industry, it has also taken a whole-of-government approach to propelling the private sector to an e-leadership position. It has partnered with national tech companies and set clear goals for funding, such as the $2.1 billion technology park for AI research in Beijing.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The contents of the NITI document can be divided into a few themes, many of which have also found their way into multiple other instruments. First, it proposes an “AI+X” approach that captures the long-term vision for AI in India. Instead of replacing the processes in their entirety, AI is understood as an enabler of efficiency in processes that already exist. NITI Aayog therefore looks at the process of deploying AI-driven technologies as taking an existing process (X) and adding AI to them (AI+X). This is a crucial recommendation all AI projects should heed. Instead of waving AI as an all-encompassing magic wand across sectors, it is necessary to identify specific gaps AI can seek to remedy and then devise the process underpinning this implementation.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;A cacophony of policy instruments by multiple government departments seeks to reconceptualise data to construct a theoretical framework that allows for its exploitation for AI-driven analytics.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;The AI-driven intervention to develop sowing apps for farmers in Karnataka and Andhra Pradesh are examples of effective implementation of this approach. Instead of other knee-jerk reactions to agrarian woes such as a hasty raising of Minimum Support Price, effective research was done in this use-case to identify a lack of predictability in weather patterns as a key factor in productive crop yields. They realised that aggregation of data through AI could provide farmers with better information on weather patterns. As internet penetration was relatively low in rural Karnataka, text messages to feature phones that had a far wider presence was indispensable to the end game.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;T&lt;span&gt;his is in contrast to the ill-conceived path adopted by the Union ministry of electronics and information technology in guidelines for regulating social media platforms that host content (“intermediaries”). Rule 3(9) of the Draft of the Information Technology [Intermediary Guidelines (Amendment) Rules] 2018 mandates intermediaries to use “automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content”.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Proposed in light of the fake news menace and the unbridled spread of “extremist” content online, the use of the phrase “automated tools or appropriate mechanisms” is reflective of an attitude that fails to consider ground realities that confront companies and users alike. They ignore, for instance, the cost of automated tools: whether automated content moderation techniques developed in the West can be applied to Indic languages or grievance redress mechanisms users can avail of if their online speech is unduly restricted. This is thus a clear case of the “AI” mantra being drawn out of a hat without studying the “X” it is supposed to remedy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second focus of the National Strategy that has since morphed into a technology policy mainstay across instruments is on data governance, access and utilisation. The document says the major hurdle to the large scale adoption of AI in India is the difficulty in accessing structured data. It recommends developing big annotated data sets to “democratise data and multi-stakeholder marketplaces across the AI value chain”. It argues that at present only one per cent of data can be analysed as it exists in various unconnected silos. Through the creation of a formal market for data, aggregators such as diagnostic centres in the healthcare sector would curate datasets and place them in the market, with appropriate permissions and safeguards. AI firms could use available datasets rather than wasting effort sourcing and curating the sets themselves.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A cacophony of policy instruments by multiple government departments seeks to reconceptualise data to construct a theoretical framework that allows for its exploitation for AI-driven analytics.The first is “community data” and appears both in the Srikrishna Report that accompanied the draft Data Protection Bill in 2018 and the draft e-commerce policy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;But there appears to be some conflict between its usage in the two. Srikrishna endorses a collective protection of privacy by protecting an identifiable community that has contributed to community data. This requires the fulfilment of three key conditions: &lt;i&gt;first,&lt;/i&gt; the data belong to an identifiable community; &lt;i&gt;second, &lt;/i&gt;individuals in the community consent to being a part of it, and &lt;i&gt;third&lt;/i&gt;, the community as a whole consents to its data being treated as community data. On the other hand, the Department of Promotion of Industry and Internal Trade’s (DPIIT) draft e-commerce policy looks at community data as “societal commons” or a “national resource” that gives the community the right to access it but government has ultimate and overriding control of the data. This configuration of community data brings into question the consent framework in the Srikrishna Bill.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;The government’s attempt to harness data as a national resource for the development of AI-based solutions may be well-intentioned but is fraught with core problems in implementation.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;The matter is further confused by treating “data as a public good”. This is projected in Chapter 4 of the 2019 Economic Survey published by the Ministry of Finance. It explicitly states that any configuration needs to be deferential to privacy norms and the upcoming privacy law. The “personal data” of an individual in the custody of a government is also a “public good” once the datasets are anonymised. At the same time, it pushes for the creation of a government database that links several individual databases, which leads to the “triangulation” problem, where matching different datasets together allows for individuals to be identified despite their anonymisation in seemingly disparate databases.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Building an AI ecosystem” was also one of the ostensible reasons for data localisation—the government’s gambit to mandate that foreign companies store the data of Indian citizens within national borders. In addition to a few other policy instruments with similar mandates, Section 40 of the Draft Personal Data Protection Bill mandates that all “critical data” (this is to be notified by the government) be stored exclusively in India. All other data should have a live, serving copy stored in India even if transfer abroad is allowed. This was an attempt to ensure foreign data processors are not the sole beneficiaries of AI-driven insights.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The government’s attempt to harness data as a national resource for the development of AI-based solutions may be well intentioned but is fraught with core problems in implementation. First, the notion of data as a national resource or as a public good walks a tightrope with constitutionally guaranteed protections around privacy, which will be codified in the upcoming Personal Data Protection Bill. My concerns are not quite so grave in the case of genuine “public data” like traffic signal data or pollution data. However, the Economic Survey manages to crudely amalgamate personal data into the mix.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It also states that personal data in the custody of a government is a public good once the datasets are anonymised. This includes transactions data in the User Payments Interface (UPI), administrative data including birth and death records, and institutional data including data in public hospitals or schools on pupils or patients. At the same time, it pushes for a government database that will lead to the triangulation problem outlined above. The chapter also suggests that said data may be sold to private firms (unclear if this includes foreign or domestic firms). This not only contradicts the notion of public good but is also a serious threat to the confidentiality and security of personal data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;T&lt;span&gt;herefore, along with the concerted endeavour to create data marketplaces, it is crucial for policy-makers to differentiate between public data and personal data individuals may consent to be made public. The parameters for clearly defining free and informed consent, as codified in the Draft Personal Data Protection Bill need to be strictly followed as there is a risk of de-anonymisation of data once it finds its way into the marketplace. Second, it is crucial for policy-makers to define clearly a community and parameters for what constitutes individual consent to be part of a community. Finally, along with technical work on setting up a national data marketplace, there must be protracted efforts to guarantee greater security and standards of anonymisation.&lt;/span&gt;&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;The National Strategy  mentions that India should position itself as a “garage” for AI in emerging economies. This could mean Indian citizens are used as guinea pigs for AI-driven solutions at the cost of their rights.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;Assuming that a constitutionally valid paradigm may be created, the excessive focus on data access by tech players dodges the question of the capabilities of analytic firms to process this data and derive meaningful insights from the information. Scholars on China, arguably the poster-child of data-driven economic growth, have sent mixed messages. Ding argues that despite having half the technical capabilities of the US, easy access to data gives China a competitive edge in global AI competition. On the contrary, Andrew Ng has argued that operationalising a sufficient number of relevant datasets still remains a challenge. Ng’s views are backed up by insiders at Chinese tech giant Tencent who say the company still finds it difficult to integrate data streams due to technical hurdles. NITI Aayog’s idea of a multi-stream data marketplace may theoretically be a solution to these potential hurdles but requires sustained funding and research innovation to be converted into reality.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The National Strategy suggests that government should create a multi-disciplinary committee to set up this marketplace and explore levers for its implementation. This is certainly the need of the hour. It also rightly highlights the importance of research partnerships between academia and the private sector, and the need to support start-ups. There is therefore an urgent need for innovative allied policy instruments that support the burgeoning start-up sector. Proposals such as data localisation may hurt smaller players as they will have to bear the increased fixed costs of setting up or renting data centres.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The National Strategy also incongruously mentions that India should position itself as a “garage” for the use of AI in emerging economies. This could mean Indian citizens are used as guinea pigs for AI-driven solutions at the cost of their fundamental rights. It could also imply that India should occupy a leadership position and work with other emerging economies to frame the global rights based discourse to seek equitable solutions for the application of AI that works to improve the plight of the most vulnerable in society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;O&lt;span&gt;ur constitutional ethos places us in a unique position to develop a framework that enables the actualisation of this equitable vision—a goal the policy instruments put out thus far appear to have missed. While the National Strategy includes a section on privacy, security and ethical implications of AI, it stops short of rooting it in fundamental rights and constitutional principles. As a centralised policy instrument, the National Strategy deserves praise for identifying key levers in the future of India’s AI ecosystem and, with the exception of the concerns I outlined above, it is at par with the policy-making thought process in any other nation.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;When we start the process of using constitutional principles for AI governance, we must remember that as per Article 12, an individual can file a writ against the state for violation of a fundamental right if the action is taken under the aegis of a “public function”. To combat discrimination by private actors, the state can enact legislation compelling private actors to comply with constitutional mandates. In July, Rajeev Chandrashekhar, a Rajya Sabha MP, suggested a law to combat algorithmic discrimination along the lines of the Algorithmic Accountability Bill proposed in the US Senate. There are three core constitutional questions along the lines of the “golden triangle” of the Indian Constitution any such legislation will need to answer—those of accountability and transparency, algorithmic discrimination and the guarantee of freedom of expression and individual privacy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Algorithms are developed by human beings who have their own cognitive biases. This means ostensibly neutral algorithms can have an unintentional disparate impact on certain, often traditionally disenfranchised groups.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the &lt;i&gt;MIT Technology Review&lt;/i&gt;, Karen Hao explains three stages at which bias might creep in. The first stage is the framing of the problem itself. As soon as computer scientists create a deep-learning model, they decide what they want the model to finally achieve. However, frequently desired outcomes such as “profitability”, “creditworthiness” or “recruitability” are subjective and imprecise concepts subject to human cognitive bias. This makes it difficult to devise screening algorithms that fairly portray society and the complex medley of identities, attributes and structures of power that define it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second stage Hao mentions is the data collection phase. Training data could lead to bias if it is unrepresentative of reality or represents entrenched prejudice or structural inequality. For example, most Natural Language Processing systems used for Parts of Speech (POS) tagging in the US are trained on the readily available data sets from the &lt;i&gt;Wall Street Journal&lt;/i&gt;. Accuracy would naturally decrease when the algorithm is applied to individuals—largely ethnic minorities—who do not mimic the speech of the &lt;i&gt;Journal&lt;/i&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to Hao, the final stage for algorithmic bias is data preparation, which involves selecting parameters the developer wants the algorithm to consider. For example, when determining the “risk-profile” of car owners seeking insurance premiums, geographical location could be one parameter. This could be justified by the ostensibly neutral argument that those residing in inner-city areas with narrower roads are more likely to have scratches on their vehicles. But as inner cities in the US have a disproportionately high number of ethnic minorities or other vulnerable socio-economic groups, “pin code” becomes a facially neutral proxy for race or class-based discrimination.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;T&lt;span&gt;he right to equality has been carved into multiple international human rights instruments and into the Equality Code in Articles 14-18 of the Indian Constitution. The dominant approach to interpreting the right to equality by the Supreme Court has been to focus on “grounds” of discrimination under Article 15(1), thus resulting in a lack of recognition of unintentional discrimination and disparate impact.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A notable exception, as constitutional scholar Gautam Bhatia points out, is the case of &lt;i&gt;N.M. Thomas &lt;/i&gt;which pertained to reservation in promotions. Justice Mathew argued that the test for inequality in Article 16(4) is an effects-oriented test independent of the formal motivation underlying a specific act. Justice Krishna Iyer and Mathew also articulated a grander vision wherein they saw the Equality Code as transcending the embedded individual disabilities in class driven social hierarchies. This understanding is crucial for governing data driven decision-making that impacts vulnerable communities. Any law or policy on AI-related discrimination must also include disparate impact within its definition of “discrimination” to ensure that developers think about the adverse consequences even of well-intentioned decisions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;AI driven assessments have been challenged on grounds of constitutional violations in other jurisdictions. In 2016, the Wisconsin Supreme Court considered the legality of using risk assessment tools such as COMPAS for sentencing criminals. It affirmed the trial court’s findings and held that using COMPAS did not violate constitutional due process standards. Eric Loomis had argued that using COMPAS infringed both his right to an individualised sentence and to accurate information as COMPAS provided data for specific groups and kept the methodology used to prepare the report a trade secret. He additionally argued that the court used unconstitutional gendered assessments as the tool used gender as one of the parameters.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Wisconsin Supreme Court disagreed with Loomis arguing that COMPAS only used publicly available data and data provided by the defendant, which apparently meant Loomis could have verified any information contained in the report. On the question of individualisation, the court argued that COMPAS provided only aggregate data for groups similarly placed to the offender. However, it went on to argue as the report was not the sole basis for a decision by the judge, a COMPAS assessment would be sufficiently individualised as courts retained the discretion and information necessary to disagree.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;By assuming that Loomis could have genuinely verified all the data collected about similarly placed groups and that judges would exercise discretion to prevent the entrenchment of inequalities through COMPAS’s decision-making patterns, the judges ignored social realities. Algorithmic decision-making systems are an extension of unequal decision-making that re-entrenches prevailing societal perceptions around identity and behaviour. An instance of discrimination cannot be looked at as a single instance but as one in a menagerie of production systems that define, modulate and regulate social existence.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The policy-making ecosystem needs, therefore, to galvanise the “transformative” vision of India’s democratic fibre and study existing systems and power structures AI could re-entrench or mitigate. For example, in the matter of bank loans there is a presumption against the credit-worthiness of those working in the informal sector. The use of aggregated decision-making may lead to more equitable outcomes given that there is concrete thought on the organisational structures making these decisions and the constitutional safeguards provided.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Most case studies on algorithmic discrimination in Virgina Eubanks’ &lt;i&gt;Automating Inequality &lt;/i&gt;or Safiya Noble’s &lt;i&gt;Algorithms of Oppression&lt;/i&gt; are based on western contexts. There is an urgent need for publicly available empirical studies on pilot cases in India to understand the contours of discrimination. Primary research questions should explore three related subjects. Are specified ostensibly neutral variables being used to exclude certain communities from accessing opportunities and resources or having a disproportionate impact on their civil liberties? Is there diversity in the identities of the coders themselves? Are the training data sets used representative and diverse and, finally, what role does data driven decision-making play in furthering the battle against embedded structural hierarchies?&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A key feature of AI-driven solutions is the “black box” that processes inputs and generates actionable outputs behind a veil of opacity to the human operator. Essentially, the black box denotes that aspect of the human neural decision-making function that has been delegated to the machine. A lack of transparency or understanding could lead to what Frank Pasquale terms a “Black Box Society” where algorithms define the trajectories of daily existence unless “the values and prerogatives of the encoded rules hidden within black boxes” are challenged.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Ex-&lt;i&gt;post facto&lt;/i&gt; assessment is often insufficient for arriving at genuine accountability. For example, the success of predictive policing in the US was drawn from the fact that police have indeed found more crimes in areas deemed “high risk”. But this assessment does not account for the fact that this is a product of a vicious cycle through which more crime is detected in an area simply because more policemen are deployed. Here, the National Strategy rightly identifies that simply opening up code may not deconstruct the black box as not all stakeholders impacted by AI solutions may understand the code. The constant aim should be explicability which means the human developer should be able to explain how certain factors may be used to arrive at a certain cluster of outcomes in a given set of situations.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The requirement of accountability stems from the Right to Life provision under Article 21. As stated in the seven-judge bench in &lt;i&gt;Maneka Gandhi vs. Union of India&lt;/i&gt;, any procedure established by law must be seen to be “fair, just and reasonable” and not “fanciful, oppressive or arbitrary.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Right to Privacy was recognised as a fundamental right by the nine-judge bench in &lt;i&gt;K.S. Puttaswamy (Retd.) vs. Union of India&lt;/i&gt;. Mass surveillance can lead to the alteration of behavioural patterns which may in turn be used for the suppression of dissent by the State. Pulling vast tracts of data on all suspected criminals—as in facial recognition systems like PAIS—create a “presumption of criminality” that can have a chilling effect on democratic values.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Therefore, any use, particularly by law enforcement would need to satisfy the requirements for infringing on the right to privacy: the existence of a law, necessity—a clearly defined state objective—and proportionality between the state object and the means used restricting fundamental rights the least. Along with centralised policy instruments such as the National Strategy, all initiatives taken in pursuance of India’s AI agenda must pay heed to the democratic virtues of privacy and free speech and their interlinkages.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;India needs a law to regulate the impact of Artificial Intelligence and enable its development without restricting fundamental rights. However, regulation should not adopt a “one-size-fits-all” approach that views all uses with the same level of rigidity. Regulatory intervention should be based on questions around power asymmetries and the likelihood of the use case adversely affronting human dignity captured by India’s constitutional ethos.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;As an aspiring leader in global discourse, India can lay the rules of the road for other emerging economies not only by incubating, innovating and implementing AI powered technologies but by grounding it in a lattice of rich constitutional jurisprudence that empowers the individual.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;The High Level Task Force on Artificial Intelligence (AI HLEG) set up by the European Commission in June 2018 published a report on “Ethical Guidelines for Trustworthy AI” earlier this year. They feature seven core requirements which include human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. While the principles are comprehensive, this document stops short of referencing any domestic or international constitutional law that helps cement these values. The Indian Constitution can help define and concretise each of these principles and could be used as a vehicle to foster genuine social inclusion and mitigation of structural injustice through AI.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;At the centre of the vision must be the inherent rights of the individual. The constitutional moment for data driven decision-making emerges therefore when we conceptualise a way through which AI can be utilised to preserve and improve the enforcement of rights while also ensuring that data does not become a further avenue for exploitation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;National vision transcends the boundaries of policy and to misuse Peter Drucker, “eats strategy for breakfast”. As an aspiring leader in global discourse, India can lay the rules of the road for other emerging economies not only by incubating, innovating and implementing AI powered technologies but by grounding it in a lattice of rich constitutional jurisprudence that empowers the individual, particularly the vulnerable in society. While the multiple policy instruments and the National Strategy are important cogs in the wheel, the long-term vision can only be framed by how the plethora of actors, interest groups and stakeholders engage with the notion of an AI-powered Indian society.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision'&gt;https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>basu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-14T13:55:59Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art">
    <title>AI Opera- AI as a total work of art</title>
    <link>https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art</link>
    <description>
        &lt;b&gt;On October 11, 2019,  Shweta Mohandas and Mira were invited as panelists for the 'AI Opera- AI as a total work of art' event organized by Goethe as part of the India Week Hamburg 2019 held in Bangalore. CIS was an event partner. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The panel had to present different perspectives and possibilities of Artificial Intelligence (AI). The discussion was facilitated by German artist, performer and filmmaker Christoph Faulhaber. For more info, &lt;a class="external-link" href="https://www.goethe.de/ins/in/en/sta/ban/ver.cfm?fuseaction=events.detail&amp;amp;event_id=21670394"&gt;click here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art'&gt;https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-14T14:30:56Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules">
    <title>Finding Needles in Haystacks - Discussing the Role of Automated Filtering in the New Indian Intermediary Liability Rules </title>
    <link>https://cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules</link>
    <description>
        &lt;b&gt;On the 25th of February this year, the Government of India notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The new Rules broaden the scope of which entities can be considered intermediaries, now including curated-content platforms (such as Netflix) as well as digital news publications. This blogpost analyzes the rule on automated filtering, in the context of the growing use of automated content moderation.
&lt;/b&gt;
        
&lt;p class="p1"&gt;&lt;span class="s1"&gt;This article first &lt;a class="external-link" href="https://www.law.kuleuven.be/citip/blog/finding-needles-in-haystacks/"&gt;appeared&lt;/a&gt; on the KU Leuven's Centre for IT and IP (CITIP) blog. Cross-posted with permission.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;----&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;Mathew Sag in his 2018 &lt;a href="https://scholarship.law.nd.edu/cgi/viewcontent.cgi?article=4761&amp;amp;context=ndlr"&gt;&lt;span class="s2"&gt;paper&lt;/span&gt;&lt;/a&gt; on internet safe harbours discussed how the internet resulted in a shift from the traditional gatekeepers of knowledge (publishing houses) that used to decide what knowledge could be showcased, to a system where everybody who has access to the internet can showcase their work. A “&lt;em&gt;content creator&lt;/em&gt;” today ranges from legacy media companies to any person who has access to a smartphone and an internet connection. In a similar trajectory, with the increase in websites and mobile apps and the functions that they serve, the scope of what is an internet intermediary has widened all over the world.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;&lt;strong&gt;Who is an Intermediary?&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;In India the definition of “&lt;em&gt;intermediary&lt;/em&gt;” is found under Section 2(w) of the &lt;a href="https://www.meity.gov.in/writereaddata/files/itbill2000.pdf"&gt;&lt;span class="s2"&gt;Information Technology (IT) Act 2000&lt;/span&gt;&lt;/a&gt;, which defines an Intermediary as &lt;em&gt;“with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecoms service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-marketplaces and cyber cafes”.&lt;/em&gt; The all-encompassing nature of the definition has allowed the dynamic nature of intermediaries to be included under the definition of the Act, and the Guidelines that have been&amp;nbsp; published periodically (&lt;a href="https://www.meity.gov.in/writereaddata/files/GSR314E_10511%25281%2529_0.pdf"&gt;&lt;span class="s2"&gt;2011&lt;/span&gt;&lt;/a&gt;, &lt;a href="https://www.meity.gov.in/writereaddata/files/Draft_Intermediary_Amendment_24122018.pdf"&gt;&lt;span class="s2"&gt;2018&lt;/span&gt;&lt;/a&gt; and &lt;a href="https://www.meity.gov.in/writereaddata/files/Intermediary_Guidelines_and_Digital_Media_Ethics_Code_Rules-2021.pdf"&gt;&lt;span class="s2"&gt;2021&lt;/span&gt;&lt;/a&gt;). With more websites and social media companies, and even more content creators online today, there is a need to look at ways in which intermediaries can remove illegal content or content that goes against their community guidelines.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;Along with the definition of an intermediary, the IT Act, under Section 79, provides exemptions which grant safe harbours to internet intermediaries, from liability from third-party content, and further empowers the central government to make Rules that act as guidelines for the intermediaries to follow. The Intermediary Liability Rules hence seek to regulate content and lay down safe harbour provisions for intermediaries and internet service providers. To keep up with the changing nature of the internet and internet intermediaries, India relies on the Intermediary Liability Rules to regulate and provide a conducive environment for intermediaries. In view of this provision India has as of now published three versions of the Intermediary Liability (IL) Rules. The first Rules came out in&lt;a href="https://www.meity.gov.in/writereaddata/files/GSR314E_10511%25281%2529_0.pdf"&gt;&lt;span class="s2"&gt; 2011&lt;/span&gt;&lt;/a&gt;, followed by the introduction of draft amendments to the law in&lt;a href="https://www.meity.gov.in/writereaddata/files/Draft_Intermediary_Amendment_24122018.pdf"&gt;&lt;span class="s2"&gt; 2018&lt;/span&gt;&lt;/a&gt; and finally the latest &lt;a href="https://www.meity.gov.in/writereaddata/files/Intermediary_Guidelines_and_Digital_Media_Ethics_Code_Rules-2021.pdf"&gt;&lt;span class="s2"&gt;2021 &lt;/span&gt;&lt;/a&gt;version, which would supersede the earlier Rules of 2011.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;&lt;strong&gt;The Growing Use of Automated Content Moderation&amp;nbsp;&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;With each version of the Rules there seemed to be changes that ensured that they were abreast with the changing face of the internet and the changing nature of both content and content creator. Hence the 2018 version of the Rules showcase a push towards automated content filtering. The text of Rule 3(9) reads as follows: “&lt;em&gt;The Intermediary shall deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content&lt;/em&gt;”.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;Under Rule 3(9), intermediaries were required to deploy automated tools or appropriate mechanisms to proactively identify, remove or disable public access to unlawful content. However, neither the 2018 IL Rules, nor the parent Act (the IT Act) specified which content can be deemed unlawful. The 2018 Rules also failed to establish the specific responsibilities of the intermediaries, instead relying on vague terms like “&lt;em&gt;appropriate mechanisms&lt;/em&gt;” and with “&lt;em&gt;appropriate controls&lt;/em&gt;”. Hence it can be seen that though the Rules mandated the use of automated tools, neither them nor the IT Act provided clear guidelines on what could be removed.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;The lack of clear guidelines and list of content that can be removed had&amp;nbsp; left the decision up to the intermediaries to decide which content, if not actively removed, could cost them their immunity. It has been previously documented that the lack of clear guidelines in the 2011 version of the &lt;a href="https://cis-india.org/internet-governance/chilling-effects-on-free-expression-on-internet"&gt;&lt;span class="s2"&gt;Rules&lt;/span&gt;&lt;/a&gt;, led to intermediaries over complying with take down notices, often taking down content that did not warrant it. The existing tendency to over-comply, combined&amp;nbsp; with automated filtering could have resulted in a number of &lt;a href="https://cis-india.org/internet-governance/how-india-censors-the-web-websci#:~:text=One%2520of%2520the%2520primary%2520ways,certain%2520websites%2520for%2520its%2520users."&gt;&lt;span class="s2"&gt;unwarranted take downs&lt;/span&gt;&lt;/a&gt;.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;While the 2018 Rules mandated the deployment of automated tools, the year 2020, (possibly due to the pandemic induced work from home safety protocols and global lockdowns) saw major social media companies announcing the move towards a fully automated system of content&lt;a href="https://www.medianama.com/2020/03/223-facebook-content-moderation-coronavirus-medianamas-take/"&gt;&lt;span class="s2"&gt; moderation&lt;/span&gt;&lt;/a&gt;. Though the use of automated content removal seems like the right step considering the &lt;a href="https://www.businessinsider.in/tech/news/facebook-content-moderator-who-quit-reportedly-wrote-a-blistering-letter-citing-stress-induced-insomnia-among-other-trauma/articleshow/82075608.cms"&gt;&lt;span class="s2"&gt;trauma &lt;/span&gt;&lt;/a&gt;that human moderators had to go through,&amp;nbsp; the algorithms that are being used now to remove content are relying on the parameters, practices and data from earlier removals made by the human moderators. More recently, in India with the emergence of the second wave of the COVID19&amp;nbsp; wave, the Ministry of Electronics and Information Technology has &lt;a href="https://www.thehindu.com/news/national/govt-asks-social-media-platforms-to-remove-100-covid-19-related-posts/article34406733.ece"&gt;&lt;span class="s2"&gt;asked &lt;/span&gt;&lt;/a&gt;social media platforms to remove “&lt;em&gt;unrelated, old and out of the context images or visuals, communally sensitive posts and misinformation about COVID19 protocols&lt;/em&gt;”.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;&lt;strong&gt;The New IL Rules - A ray of hope?&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p3"&gt;&lt;span class="s3"&gt;The 2021 version of the IL Rules provides a more nuanced approach to the use of automated content filtering compared to the earlier version. Rule 4(4) now requires only “&lt;/span&gt;&lt;span class="s1"&gt;significant social media intermediaries” to use automated tools to identity and take down content pertaining to “child sexual abuse material”, or “depicting rape”, or any information which is identical to a content that has already been removed through a take-down notice. The Rules define a social media intermediary as “&lt;em&gt;intermediary which primarily or solely enables interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services”&lt;/em&gt; .The Rules also go a step further to create another type of intermediary, the&amp;nbsp; significant social media intermediary. A significant social media intermediary is defined as one “&lt;em&gt;having a number of registered users in India above such threshold as notified by the Central Government&lt;/em&gt;''. Hence what can be considered as a social media intermediary that qualifies as a significant one could change at any time.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s4"&gt;Along with adding a new threshold (qualifying as a significant social media intermediary) the Rules, in contrast to the 2018 version, also emphasises the need of such removal to be &lt;/span&gt;&lt;span class="s1"&gt;proportionate to the interests of freedom of speech and expression and privacy of users. The Rules also call for “&lt;em&gt;appropriate human oversight&lt;/em&gt;” as well as a periodic review of the tools used for content moderation. The Rules by using the term “&lt;em&gt;shall endeavor&lt;/em&gt;” aids in reducing the pressure on the intermediary to set up these mechanisms. This also means&amp;nbsp; that the requirement is now on a best effort basis, as opposed to the word “&lt;em&gt;shall&lt;/em&gt;” in the 2018 version of the Rules, which made it mandatory.&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p1"&gt;&lt;span class="s1"&gt;Although the Rules now narrow down the instances where automated content removal can take place, the concerns around over compliance and censorship still loom. One of the reasons for concern is that the Rules still fail to require the intermediaries to set up a mechanism for redress or for appeals to such removal. Additionally, the provision that states that automated systems could remove content that have been previously taken down, creates a cause for worry as the propensity of the intermediaries to over comply and take down content has already been documented. This then brings us back to the previous issue where the social media company’s automated systems were removing legitimate news sources. Though the 2021 Rules tries to clarify certain provisions related to automated filtering, like the addition of the safeguards, the Rules also suffer from vague provisions that could cause issues related to compliance. The use of terms such as “&lt;em&gt;proportionate&lt;/em&gt;”, “&lt;em&gt;having regard to free speech&lt;/em&gt;” etc. fail to lay down definitive directions for the intermediaries (in this case SSMI) to comply with. Additionally, as earlier stated, being qualified&amp;nbsp; as a SSMI can change at any time, either based on the change in the number of users, or the change in the threshold of users, mandated by the government. The absence of human intervention during removal, vague guidelines and fear of losing out on safe harbour provisions, add to the already increasing trend of censorship in social media. With the use of automated means and the fast, and almost immediate removal of content would mean that certain content creators might not even be able to post their content &lt;a href="https://www.eff.org/wp/unfiltered-how-youtubes-content-id-discourages-fair-use-and-dictates-what-we-see-online"&gt;&lt;span class="s2"&gt;online.&lt;/span&gt;&lt;span class="s5"&gt; With the use of proactive filtering through automated means the content can be removed almost immediately.&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&lt;span class="s6"&gt; &lt;/span&gt;&lt;span class="s1"&gt;With India’s current trend of new internet users, some of these creators would also be &lt;a href="https://timesofindia.indiatimes.com/business/india-business/for-the-first-time-india-has-more-rural-net-users-than-urban/articleshow/75566025.cms"&gt;&lt;span class="s2"&gt;first time users&lt;/span&gt;&lt;/a&gt; of the internet.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p3"&gt;&lt;span class="s1"&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p3"&gt;&lt;span class="s1"&gt;The need for automated removal of content is understandable, based not only on&amp;nbsp; the sheer volume of content but also&amp;nbsp; the nightmare stories of the toll it takes on human content moderators, who otherwise have to go through hours of disturbing content. Though the Indian Intermediary Liability Guidelines have improved from the earlier versions in terms of moving away from mandating proactive filtering, there still needs to be consideration of how these technologies are used, and the laws should understand the shift in the definition of who a content creator is. There needs to be ways of recourse to unfair removal of content and a means to get an explanation of why the content was removed, via notices to the user. In the case of India, the notices should be in Indian languages as well, so that the people are able to understand them.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p2"&gt;&lt;span class="s1"&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p class="p3"&gt;&lt;span class="s1"&gt;In the absence of further clear guidelines, the perils of over-censorship by the intermediaries in order to stay out of trouble could lead to further stifling of not just freedom of speech but also access to information. In addition, the fear of content being taken down or even potential prosecution could mean that people resort to self-censorship, preventing them from exercising their fundamental rights to freedom of speech and expression, as guaranteed by the Indian Constitution. We hope that the next version of the Rules take a more nuanced approach to automated content removal and ensure adequate and specific safeguards to ensure a conducive environment for both intermediaries and content creators.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules'&gt;https://cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas and Torsha Sarkar</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Intermediary Liability</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2021-08-03T07:28:53Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall">
    <title>Comments on NITI Aayog Working Document: Towards Responsible #AIforAll</title>
    <link>https://cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall</link>
    <description>
        &lt;b&gt;The NITI Aayog Working Document on Responsible AI for All, released on 21st July 2020, serves as a significant statement of intent from NITI Aayog, acknowledging that any conception of “Responsible AI” must fulfill constitutional responsibilities, incorporated through workable principles. However, as it is a draft document for discussion, it is important to highlight next steps for research and policy levers to build upon this report.&lt;/b&gt;
        
&lt;div&gt;&amp;nbsp;&lt;/div&gt;
&lt;div&gt;Read our comments in their entirety &lt;a href="https://cis-india.org/internet-governance/comments-to-aiforall-pdf" class="internal-link" title="Comments to AIForAll pdf"&gt;here&lt;/a&gt;.&lt;/div&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall'&gt;https://cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas, Arindrajit Basu and Ambika Tandon</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>internet governance</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-08-18T06:25:18Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data">
    <title>The Wolf in Sheep's Clothing: Demanding your Data</title>
    <link>https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data</link>
    <description>
        &lt;b&gt;The increasing digitalization of the economy and ubiquity of the Internet, coupled with developments in Artificial Intelligence (AI) and Machine Learning (ML), have given rise to transformational business models across several sectors.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;This piece was originally published in &lt;a class="external-link" href="https://telecom.economictimes.indiatimes.com/tele-talk/the-wolf-in-sheep-s-clothing-demanding-your-data/4497"&gt;The Economic Times Telecom&lt;/a&gt;, on 8 September, 2020.&lt;span class="css-901oao css-16my406 r-1qd0xha r-ad9z0x r-bcqeeo r-qvutc0"&gt;&lt;/span&gt;&lt;/p&gt;
&amp;nbsp;
&lt;p&gt;The increasing digitalization of the economy and ubiquity of the &lt;a href="https://telecom.economictimes.indiatimes.com/tag/internet"&gt;Internet&lt;/a&gt;, coupled with developments in &lt;a href="https://telecom.economictimes.indiatimes.com/tag/artificial+intelligence"&gt;Artificial Intelligence&lt;/a&gt; (AI) and Machine Learning (ML), have given rise to transformational business models across several sectors. These developments have changed the very structure of existing sectors, with a few dominant firms straddling many sectors. The position of these firms is entrenched due to the large amounts of data they hold, their use of sophisticated algorithms that deliver highly targeted services and content, and their global nature.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Such data-based network businesses are generally multi-sided platforms subject to network effects and winner-takes-all phenomena, often making traditional competition regulation inappropriate. In addition, there has been concern that such companies hurt competition as they own the large amounts of data collected globally on which new services are predicated. Also, since users are reluctant to share their data across multiple platforms, new companies find it very challenging to emerge. Several of the large companies are of US origin. Several regions and countries, such as the EU, the UK and India, are concerned that while these companies benefit from the data of their citizens or their &lt;a href="https://telecom.economictimes.indiatimes.com/tag/devices"&gt;devices&lt;/a&gt;, SMEs and other companies in their own countries find it increasingly difficult to remain viable or achieve scale. With the objective of supporting enterprises, including SMEs, in their own countries, the EU, the UK and India are at different stages of data regulation initiatives.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;In India, the &lt;a href="https://telecom.economictimes.indiatimes.com/tag/personal+data+protection"&gt;Personal Data Protection&lt;/a&gt; (PDP) Bill, 2019 deals with the framework for collecting, managing and transferring the personal data of Indian citizens, including mandating the sharing of anonymized data of individuals and non-personal data for better targeting of services or policy making. In addition, the Report by the Committee of Experts (CoE) on Non Personal Data (NPD) came up with a Framework for Regulating NPD. Since the NPD Report is a more recent phenomenon, this article analyzes some aspects of it.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;According to the CoE, non-personal data could be of two types. First, data or information that was never about an individual (e.g. weather data). Second, data or information that was once related to an individual (e.g. a mobile number) but has ceased to be identifiable due to the removal of certain identifiers through the process of ‘anonymisation’. However, it may be possible to recover personal data from such anonymized data, and therefore the distinction between personal and non-personal data is not clean. In any case, the PDP Bill, 2019 deals with personal data. If the CoE felt that some aspects of personal data (including anonymized data) were not adequately dealt with, it should work to strengthen the Bill. The current approach of the CoE is bound to create confusion and overlapping jurisdiction. Since anonymized data is required to be shared, there are disincentives to anonymization, causing greater risk to individual privacy.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
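&lt;p&gt;As a minimal illustration of why the distinction between personal and non-personal data is not clean, the sketch below (using invented toy records, not any real dataset) links an “anonymised” dataset to a public auxiliary dataset on shared quasi-identifiers and recovers the identities behind the records:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# "Anonymised" records: direct identifiers removed, quasi-identifiers kept.
anonymised = [
    {"pin_code": "560001", "birth_year": 1985, "diagnosis": "diabetes"},
    {"pin_code": "110001", "birth_year": 1990, "diagnosis": "asthma"},
]

# Publicly available auxiliary data (e.g. a directory or voter listing).
public = [
    {"name": "A. Kumar", "pin_code": "560001", "birth_year": 1985},
    {"name": "B. Singh", "pin_code": "110001", "birth_year": 1990},
]

def reidentify(anon_rows, aux_rows):
    """Join rows on shared quasi-identifiers (pin code, birth year)."""
    matches = []
    for anon in anon_rows:
        for aux in aux_rows:
            if (anon["pin_code"], anon["birth_year"]) == (aux["pin_code"], aux["birth_year"]):
                matches.append({"name": aux["name"], "diagnosis": anon["diagnosis"]})
    return matches

print(reidentify(anonymised, public))
# [{'name': 'A. Kumar', 'diagnosis': 'diabetes'}, {'name': 'B. Singh', 'diagnosis': 'asthma'}]
&lt;/code&gt;&lt;/pre&gt;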
&lt;p&gt;A new class of business based on a “&lt;em&gt;horizontal classification cutting across different industry sectors&lt;/em&gt;” is defined. This refers to any business that derives “&lt;em&gt;new or additional economic value from data, by collecting, storing, processing, and managing data&lt;/em&gt;”
 based on a certain threshold of data collected/processed that will be 
defined by the regulatory authority that is outlined in the report. The 
CoE also recommends that “&lt;em&gt;Data Businesses will provide, within India, open access to meta-data and regulated access to the underlying data&lt;/em&gt;” without any remuneration. Further, “&lt;em&gt;By
 looking at the meta-data, potential users may identify opportunities 
for combining data from multiple Data Businesses and/or governments to 
develop innovative solutions, products and services. Subsequently, data 
requests may be made for the detailed underlying data&lt;/em&gt;”.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;With
 increasing digitalization, today almost every business is a data 
business. The problem in such categorization will be with the definition
 of thresholds. It is likely that even a small video sharing app or an 
AR/VR app would store/collect/process/transmit more data than say a 
mid-sized bank in terms of data volumes. Further, with increasing 
embedding of &lt;a href="https://telecom.economictimes.indiatimes.com/tag/iot"&gt;IoT&lt;/a&gt;
 in various aspects of our lives and businesses (smart manufacturing, 
logistics, banking etc), the amount of data that is captured by even 
small entities can be huge.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;The private sector, driven by profitability, identifies innovative business models, risks capital and finds unique ways of capturing and melding different data sets. In order to sustain economic growth, such innovation is necessary. The private sector would also like legal protection over these aspects of its businesses, including the unique IPR that may be embedded in the processing of data or its business processes. But the onerous sharing requirements mandated by the CoE are going to kill any private initiative. Any regulatory regime must balance the need to provide a secure environment for protecting incumbents’ data against the need to make it available to SMEs and other businesses.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Metadata provides insights into a company’s databases and processes. These are a source of competitive advantage for any company. Metadata is not without context. The basis for demanding such disclosure rests with the proposed NPD Regulator, who would evaluate the purpose for which data is sought. In practice, purposes are open to interpretation, and the structure of appeal mechanisms, etc., is going to stall any such sharing. Would such sharing mandates not interfere with existing Intellectual Property Rights? Or the freedom to contract? Any innovation could easily be made available to a competitor that front-ends itself with a start-up. To mandate making such data available would not be fair. Further, how would the NPD Regulator even ensure that such data is used for the purpose (which the proposed regulator is supposed to evaluate) for which it is sought? In Europe, where such &lt;a href="https://telecom.economictimes.indiatimes.com/tag/data+sharing"&gt;data sharing&lt;/a&gt; mandates are being considered, the focus is on public data. For private entities, sharing is largely based on voluntary contributions. Compulsory sharing is mandated only in restricted situations where market failures are not addressed through competition law, and provided the legitimate interests of the data holder and existing legal provisions are taken into account.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Further, the compliance requirements for such Data Businesses are very onerous and make a mockery of the government’s “minimum government” framework. The CoE recommends that all Data Businesses, whether government, NGO, or private, are “&lt;em&gt;to disclose data elements collected, stored and processed, and data-based services offered&lt;/em&gt;”. As if this was not enough, the CoE further recommends that “&lt;em&gt;Every Data Business must declare what they do and what data they collect, process and use, in which manner, and for what purposes (like disclosure of data elements collected, where data is stored, standards adopted to store and secure data, nature of data processing and data services provided). This is similar to disclosures required by pharma industry and in food products&lt;/em&gt;”. Such disclosures are necessary in those industries because the companies in them deal with critical aspects of human life. But are such requirements necessary for all activities and businesses? As long as organizations collect and process data legally, within sectoral regulation, why should such information have to be “reported”? Further, such bureaucratic processes and reporting requirements are only going to burden existing legitimate businesses and give rise to a thriving regulatory license raj.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Further questions that arise are: How is any 
compliance agency going to make sure that all the underlying metadata is
 made available in a timely manner? As companies respond to a dynamic 
environment, their analysis and analytical tools change and so does the 
metadata. This inherent aspect of businesses raises the question: At 
what point in time should companies make their meta-data available? How 
will the compliance be monitored?&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Conclusion: The CoE needs to create an enabling and facilitating environment for data sharing. The incentives for different types of entities to participate and contribute must be recognized. Adequate provisions for risks and liabilities arising out of data sharing need to be thought through. National initiatives on data sharing should not create an onerous reporting regime, as envisaged by the CoE, even if digital.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p class="article-disclaimer"&gt;&lt;em&gt;DISCLAIMER:
 The views expressed are solely of the author and ETTelecom.com does not
 necessarily subscribe to it. ETTelecom.com shall not be responsible for
 any damage caused to any person/organisation directly or indirectly.&lt;/em&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data'&gt;https://cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Rekha Jain</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Data Protection</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-11-10T17:44:13Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy">
    <title>NITI Aayog Discussion Paper: An aspirational step towards India’s AI policy</title>
    <link>https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy</link>
    <description>
        &lt;b&gt;The National Strategy for Artificial Intelligence, a discussion paper on India’s path forward in AI, is a welcome step towards a comprehensive document that reflects the government's AI ambitions. The 115-page discussion paper attempts to be an all-encompassing document looking at a host of AI-related issues including privacy, security, ethics, fairness, transparency and accountability.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/niti-aayog-discussion-paper"&gt;&lt;strong&gt;Download the Report&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The 115-page discussion paper attempts to be an all encompassing document looking at a host of AI related issues including privacy, security, ethics, fairness, transparency and accountability. The paper identifies five focus areas where AI could have a positive impact in India.&lt;/span&gt;&lt;span&gt; It also focuses on reskilling as a response to the potential problem of job loss due the future large-scale adoption of AI in the job market.&lt;/span&gt;&lt;span&gt; This blog is a follow up to the comments made by CIS on Twitter&lt;/span&gt;&lt;span&gt; on the paper and seeks to reflect on the National Strategy as a well researched AI roadmap for India. In doing so, it identifies areas that can be strengthened and built upon.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Identified Focus Areas for AI Intervention&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The paper identifies five focus areas—Healthcare, Agriculture, Education, Smart Cities and Infrastructure, Smart Mobility and Transportation, which Niti Aayog believes will benefit most from the use of AI in bringing about social welfare for the people of India.&lt;/span&gt;&lt;span&gt; Although these sectors are essential in the development of a nation, the failure to include manufacturing and services sectors is an oversight. Focussing on  manufacturing is fundamental not only in terms of economic development and user base, but also regarding questions of safety and the impact of AI on jobs and economic security. The same holds true for the service sector particularly since AI products are being made for the use of consumers, not just businesses. Use of AI in the services sector also raises critical questions about user privacy and ethics. Another sector the paper fails to include is defense, this is worrying since India is chairing the Group of Governmental Experts &lt;/span&gt;&lt;span&gt;on Lethal Autonomous Weapons Systems (LAWS) in 2018.&lt;/span&gt;&lt;span&gt; Across sectors, the report fails to look at how AI could be utilised to ensure accessibility and inclusion for the disabled. This is surprising, as  aid for the differently abled and accessibility technology was one of the 10 domains identified in the Task Force Report on AI published earlier this year. &lt;/span&gt;&lt;span&gt;This should have been a focus point in the paper as it  aims to identify applications with maximum social impact and inclusion.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;In its vision for the use of AI in smart cities, the&lt;/span&gt;&lt;span&gt; paper suggests the adoption of a sophisticated surveillance system as well as the use of social media intelligence platforms to check and monitor people’s movement both online and offline to maintain public safety.&lt;/span&gt;&lt;span&gt; This is at variance with constitutional standards of due process and criminal law principles of reasonable ground and reasonable suspicion. Further, use of such methods will pose issues of judicial inscrutability. From a rights perspective, state surveillance can directly interfere with fundamental rights including privacy, freedom of expression, and freedom of assembly. Privacy organizations around the world have raised concerns regarding the increased public surveillance through the use of AI.&lt;/span&gt;&lt;span&gt; Though the paper recognized the impact on privacy that such uses would have, it failed to set a strong and forward looking position on the issue - such as advocating that such surveillance must be lawful and inline with international human rights norms.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Harnessing the Power of AI and Accelerating Research&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;One of the ways suggested for the proliferation of AI in India was to increase research, both core and applied, to bring about innovation that can be commercialised.&lt;/span&gt;&lt;span&gt; In order to attain this goal the paper proposes a two-tier integrated approach: the establishment of  COREs (Centres of Research Excellence in Artificial Intelligence) and ICTAI (International Centre for Transformational Artificial Intelligence).&lt;/span&gt;&lt;span&gt; However the roadmap to increase research in AI fails to acknowledge the principles of public funded research such as free and open source software (FOSS), open standards and open data. The report also blames the current Indian  Intellectual Property regime for being “unattractive” and averse to incentivising research and adoption of AI.&lt;/span&gt;&lt;span&gt; Section 3(k) of Patents Act exempts algorithms from being patented, and the Computer Related Inventions (CRI) Guidelines have faced much controversy over the patentability of mere software without a novel hardware component.&lt;/span&gt;&lt;span&gt; The paper provides no concrete answers to the question of whether it should be permissible to patent algorithms, and if yes, to  to what extent. Furthermore, there needs to be a standard either in the CRI Guidelines or the Patent Act, that distinguishes between AI algorithms and non-AI algorithms. Additionally, given that there is no historical precedence on the requirement of patent rights to incentivise creation of AI,  innovative investment protection mechanisms that have lesser negative externalities, such as compensatory liability regimes&lt;/span&gt;&lt;span&gt; would be more desirable.  The report further failed to look at the issue holistically and recognize that facilitating rampant patenting can form a barrier to smaller companies from using or developing  AI. This is important to be cognizant of given the central role of startups to the AI ecosystem in India and because it can work against the larger goal of inclusion articulated by the report.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Ethics, Privacy, Security and Safety&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;In a positive step forward, the paper addresses a broader range of ethical issues concerning AI including transparency, fairness, privacy and security and safety in more detail when compared to the earlier report of the Task Force.&lt;/span&gt;&lt;span&gt; Yet despite a dedicated section covering these issues, a number of concerns still remain unanswered.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Transparency&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The section on transparency and opening the Black Box has several lacunae.&lt;/span&gt;&lt;span&gt; First, AI that is used by the government, to an acceptable extent, must be available in the public domain for audit, if not under Free and Open Source Software (FOSS). This should hold true in particular for uses that impinge on fundamental rights. Second, if the AI is utilised in the private sector, there currently exists a right to reverse engineer within the Indian Copyright Act,&lt;/span&gt;&lt;span&gt; which is not accounted for in the paper. Furthermore, if the AI was involved both in the commission of a crime or the violation of human rights, or in the investigations of such transgressions, questions with regard to judicial scrutability of the AI remain. In addition to explainability, the source code must be made circumstantially available, since explainable AI&lt;/span&gt;&lt;span&gt; alone cannot solve all the problems of transparency. In addition to availability of source code and explainability, a greater discussion is needed about the tradeoff between a complex and potentially more accurate AI system (with more layers and nodes)  vs. an AI system which is potentially not as accurate but is able to provide a human readable explanation.&lt;/span&gt;&lt;span&gt; It is interesting to note that transparency within human-AI interaction is absent in the paper. Key questions on transparency, such as whether an AI should disclose its identity to a human have not been answered.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Fairness&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;With regards to fairness, the paper mentions how AI can amplify bias in data and create unfair outcomes.&lt;/span&gt;&lt;span&gt; However, the paper neither suggests detailed or satisfactory solutions nor does it deal with biased historical data in an Indian context. More specifically, there seems to be no mention of regulatory tools to tackle the problem of fairness, such as:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span&gt;Self-certification&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Certification by a self-regulatory body&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Discrimination impact assessments&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Investigations by the privacy regulator &lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span&gt;Such tools will need to proactively ensure&lt;/span&gt;&lt;span&gt; inclusion, diversity, and equity in composition and decisions.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Additionally, with reference to correcting bias in AI, it should be noted that the technocratic view that as an AI solution continues to be trained on larger amounts of data  , systems will self correct, does not fully recognize the importance of data quality and data curation, and is inconsistent with fundamental rights. Policy objectives of AI innovation must be technologically nuanced and cannot be at the cost of intermediary denial of rights and services.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Further, the paper does not deal with issues of multiple definitions and principles of fairness, and that building definitions into AI systems may often involve choosing one definition over the other. For instance, it can be argued that the set of AI ethical principles articulated by Google&lt;/span&gt;&lt;span&gt; are more consequentialist in nature involving a a cost-benefit analysis, whereas a human rights approach may be more deontological in nature. In this regard, there is a need for interdisciplinary research involving computer scientists, statisticians, ethicists and lawyers.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Privacy&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Though the paper underscores the importance of privacy and the need for a privacy legislation in India - the paper limits the potential privacy concerns arising from AI to collection, inappropriate use of data, personal discrimination, unfair gain from insights derived from consumer data  (the solution being to explain to consumers about the value they as consumers gain from this), and unfair competitive advantage by collecting mass amounts of data (which is not directly related to privacy).&lt;/span&gt;&lt;span&gt; In this way the paper fails to discuss the full implications on privacy that AI might have and fails to address the data rights necessary to enable the right to privacy in a society where AI is pervasive. The paper fails to engage with emerging principles from data protection such as right to explanation and right to opt-out of automated processing, which directly relate to AI. Further, there is no discussion on the issues such as data minimisation and purpose limitation which some big data and AI proponents argue against. To that extent, there is a lack of appreciation of the difficult policy questions concerning privacy and AI. The paper is also completely silent on redress and remedy.  Further the paper endorses the seven data protection principles postulated by the Justice Srikrishna Committee.&lt;/span&gt;&lt;span&gt; However CIS has pointed out that these principles are generic and not specific to data protection.&lt;/span&gt;&lt;span&gt; Moreover, the law chapter of IEEE’s ‘&lt;/span&gt;&lt;em&gt;&lt;span&gt;Global Initiative on Ethics of Autonomous and Intelligent Systems’&lt;/span&gt;&lt;/em&gt;&lt;span&gt; has been ignored in favor of the chapter on ‘&lt;/span&gt;&lt;em&gt;&lt;span&gt;Personal Data and Individual Access Control in Ethically Aligned Design&lt;/span&gt;&lt;/em&gt;&lt;span&gt;’&lt;/span&gt;&lt;span&gt; as the recommended international standard.&lt;/span&gt;&lt;span&gt; Ideally, both chapters should be recommended for a holistic approach to the issue of ethics and privacy with respect to AI. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;AI Regulation and Sectoral Standards&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The discussion paper’s approach towards sectoral regulation advocates collaboration with industry to formulate regulatory frameworks for each sector.  However, the paper is silent on the possibility of reviewing existing sectoral regulation to understand if they require amending. We believe that this is an important solution to consider since amending existing regulation and standards often takes less time than formulating and implementing new regulatory frameworks.&lt;/span&gt;&lt;span&gt; Furthermore, although the emphasis on awareness in the paper is welcome, it must complement regulation and be driven by all stakeholders, especially given India’s limited regulatory budget. The over reliance on industry self-regulation, by itself, is not advisable, as there is an absence of robust industry governance bodies in India and self-regulation raises questions about the strength and enforceability of such practices. The privacy debate in India has recognized this and reports, like the Report of the Group of Experts on Privacy, recommend a co-regulatory framework with industry developing binding standards that are inline with the national privacy law and that are approved and enforced by the Privacy Commissioner.&lt;/span&gt;&lt;span&gt; That said, the UN Guiding Principles on Business and Human Rights and its “protect, respect, and remedy” framework should guide any self regulatory action.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Security and Safety of AI Systems&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;In terms of security and safety of AI systems the paper seeks to shift the discussion of accountability being primarily about liability, to that of one about the  explainability of AI.&lt;/span&gt;&lt;span&gt; Furthermore, there is no recommendation of immunities or incentives for whistleblowers or researchers to report on privacy breaches and vulnerabilities. The report also does not recognize certain uses of AI as being more critical than others because of their potential harm to the human. This would include uses in healthcare and autonomous transportation. A key component of accountability in these sectors will be the evolution of appropriate testing and quality assurance standards. Only then, should safe harbours be discussed as an extension of the negligence test for damages caused by AI software. Additionally, the paper fails to recommend kill switches, which should be mandatory for all kinetic AI systems.&lt;/span&gt;&lt;span&gt; Finally, there is no mention of mandatory human-in-the-loop in all systems where there are significant risks to safety and human rights. Autonomous AI is only viewed as an economic boost, but its potential risks have not been explored sufficiently. A welcome recommendation would be for all autonomous AI to go through human rights impact assessments.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Research and Education&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Being a government think-tank, the NITI Aayog could have dealt in detail with the AI policies of the government and looked at how different arms of the government are aiming to leverage AI and tackle the problems arising out of the use of AI. Instead of tabulating the government’s role in each area and especially research, the report could have also listed out the various areas where each department could play a role in the AI ecosystem through regulation, education, funding research etc. In terms of the recommendations for introducing AI curriculums in schools, and colleges,&lt;/span&gt;&lt;span&gt; the government could also ensure that ethics and rights are  part of the curriculum - especially in technical institutions. A possible course of action could include corporations paying for a pan-Indian AI education campaign.This would also require the government to formulate the required academic curriculum that is updated to include rights and ethics. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Data Standards and Data Sharing&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Based on the amount of data the Government of India collects through its numerous schemes, it has the potential to be the largest aggregator of data specific to India. However the paper does not consider the use of this data with enough gravity. For example, the paper recommends Corporate Data Sharing for “social good” and making government datasets from the social sector available publicly.&lt;/span&gt;&lt;span&gt; Yet  this section does not mention privacy enhancing technologies/standards such as pseudonymization, anonymization standards, differential privacy etc. Additionally there should be provisions that allow the government to prevent the formation of monopolies by regulating companies from hoarding user data. The open data standards could also be applicable to the private companies, so that they can also share their data in compliance with the privacy enhancing technologies mentioned above. The paper also acknowledges that AI Marketplaces require monitoring and maintenance of quality. It recognises the need for “continuous scrutiny of products, sellers and buyers”&lt;/span&gt;&lt;span&gt;, and proposes that the government enable these regulations in a manner that private players could set up the marketplace. This is a welcome suggestion, but the legal and ethical framework of the AI Marketplace requires further discussion and clarification.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;An AI Garage for Emerging Economies&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The discussion paper also qualifies India as an “ideal test-bed”&lt;/span&gt;&lt;span&gt; for trying out AI related solutions. This is problematic since questions of regulation in  India with respect to AI have yet to be legally clarified and defined and India does not have a comprehensive privacy law. Without a strong ethical and regulatory framework, the use of new and possibly untested technologies in India could lead to unintended and possibly harmful outcomes.The government's ambition to position India as a leader amongst developing countries on AI related issues should not be achieved by using Indians as test subjects for technologies whose effects are unknown.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;In conclusion, NITI Aayog’s discussion paper represents a welcome step towards a comprehensive AI strategy for India. However, the trend of inconspicuously releasing reports (this and the AI Task Force) as well as the lack of a call for public comments, seems to be the wrong way to foster discussion on emerging technologies that will be as pervasive as AI. &lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;The blanket recommendations were provided without looking at its viability in each sector.&lt;/span&gt;&lt;span&gt; Furthermore, the discussion paper does not sufficiently explore or, at times, completely omits key areas. It barely touched upon societal, cultural and sectoral challenges to the adoption of AI — research that CIS is currently in the process of undertaking.&lt;/span&gt;&lt;span&gt;Future reports on Indian AI strategy should pay more attention to the country’s unique legal context and to possible defense applications and take the opportunity to establish a forward looking, human rights respecting, and holistic position in global discourse and developments. Reports should also consider infrastructure investment as an important prerequisite for AI development and deployment. Digitised data and connectivity as well as more basic infrastructure, such as rural electricity and well-maintained roads, require more funding to more successfully leverage AI for inclusive economic growth. Although there are important concerns, the discussion paper is an aspirational step toward India’s AI strategy. &lt;/span&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy'&gt;https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Sunil Abraham, Elonnai Hickok, Amber Sinha, Swaraj Barooah, Shweta Mohandas, Pranav M Bidare, Swagam Dasgupta, Vishnu Ramachandran and Senthil Kumar</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2018-06-13T13:08:47Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age">
    <title>Ethical Data Design Practices in the AI (Artificial Intelligence) Age</title>
    <link>https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age</link>
    <description>
        &lt;b&gt;Shweta Mohandas was a panelist at a discussion on Ethical Data Design Practices in the AI (Artificial Intelligence) Age, organised by Startup Grind, Bangalore on July 28, 2018 at NUMA Bangalore.&lt;/b&gt;
        &lt;h2&gt;Agenda&lt;/h2&gt;
&lt;p&gt;&lt;b&gt;Ethical Data Design Practices in the AI (Artificial Intelligence) Age&lt;/b&gt;&lt;/p&gt;
&lt;p dir="ltr" style="text-align: justify; "&gt;The panel discussion is intended to explore the challenges we face when designing the user experiences of the complex behavioral agents that increasingly run our lives.&lt;/p&gt;
&lt;p dir="ltr"&gt;Discussion centred around how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Understand current thinking by the AI community on ethics and morality in computing and the challenges it presents. &lt;/li&gt;
&lt;li&gt;Explore examples of the ethical choices that products make now and will make in the near future.&lt;/li&gt;
&lt;li&gt;Learn how designers might approach designing experiences that face moral dilemmas.&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age'&gt;https://cis-india.org/internet-governance/news/ethical-data-design-practices-in-the-ai-artificial-intelligence-age&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2018-08-01T23:14:21Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines">
    <title>Ethics and Human Rights Guidelines for Big Data for Development Research</title>
    <link>https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines</link>
    <description>
        &lt;b&gt;This is a four-part review of guideline documents for ethics and human rights in big data for development research. This research was produced as part of the Big Data for Development network supported by the International Development Research Centre, Canada.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4&gt;Part #1 - Review of Principles of Ethics in Biomedical Science: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/biomedicalscience" class="internal-link" title="CIS_BD4D_Guideline01_MS+AS_BiomedicalScience PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;h4&gt;Part #2 - Review of Principles of Ethics in Computer Science: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/computerscience" class="internal-link" title="CIS_BD4D_Guideline02_RS+AS_ComputerScience PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;h4&gt;Part #3 - Summary of Review of Codes of Ethics for Big Data and AI: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/AIEthicsReview" class="internal-link" title="CIS_BD4D_Guideline03_AS+PT_BigDataAIEthicsReview_SummaryNotes PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;h4&gt;Part #4 - Extended Review of Codes of Ethics for Big Data and AI: &lt;a href="https://cis-india.org/raw/bd4d-guideline-documents/ExtendedNotes" class="internal-link" title="CIS_BD4D_Guideline04_PT+PB_BigDataAIEthicsReview_ExtendedNotes PDF"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;hr /&gt;
&lt;p&gt;The rapid expansion in the volume, velocity, and variety of data available, together with the development of innovative forms of statistical analytics, is generally referred to as “big data”, though there is no single agreed-upon definition of the term. Big data promises to provide new insights and solutions across a wide range of sectors. Despite enormous optimism about the scope and variety of big data’s potential applications, many remain concerned about its widespread adoption, with some scholars suggesting it could generate as many harms as benefits. The predecessor disciplines of data science, such as computer science, applied mathematics, and statistics, have traditionally managed to stay out of the scope of ethical frameworks, based on the assumption that they do not involve humans as subjects of their research. While critical study of big data is still in its infancy, there is a growing belief that there are significant discontinuities between the rapid growth of big data and the ethical frameworks that exist to govern its use. In this set of documents, we look at these frameworks and concerns in detail.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines'&gt;https://cis-india.org/raw/bd4d-ethics-human-rights-guidelines&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amber Sinha, Manjri Singh, Rajashri Seal, Pranav Bhaskar Tiwari, Pranav M Bidare</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>BD4D</dc:subject>
    
    
        <dc:subject>RAW Research</dc:subject>
    
    
        <dc:subject>Big Data for Development</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-05-20T07:56:48Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad">
    <title>New intermediary guidelines: The good and the bad </title>
    <link>https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad</link>
    <description>
        &lt;b&gt;In pursuance of the government releasing the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, this blogpost offers a quick rundown of some of the changes brought about by the Rules, and how they line up with existing principles of best practices in content moderation, among others. &lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;This article originally appeared in the Down to Earth &lt;a class="external-link" href="https://www.downtoearth.org.in/blog/governance/new-intermediary-guidelines-the-good-and-the-bad-75693"&gt;magazine&lt;/a&gt;. Reposted with permission.&lt;/p&gt;
&lt;p&gt;-------&lt;/p&gt;
&lt;p&gt;The Government of India notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The operation of these rules would be in supersession of the existing intermediary liability rules under the Information Technology (IT) Act, made back in 2011.&lt;/p&gt;
&lt;p&gt;These IL rules would have a significant impact on our relationships with internet ‘intermediaries’, i.e. gatekeepers and gateways to the internet, including social media platforms, communication and messaging channels.&lt;/p&gt;
&lt;p&gt;The rules also make a bid to include entities that have not traditionally been considered ‘intermediaries’ within the law, including curated-content platforms such as Netflix and Amazon Prime as well as digital news publications.&lt;/p&gt;
&lt;p&gt;These rules are a significant step-up from the draft version of the amendments floated by the Union government two years ago; in this period, the relationship between governments around the world and major intermediaries has changed significantly.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The insistence of these entities in the past that they are not ‘arbiters of truth’, for instance, has not always held water in their own decision-making.&lt;/p&gt;
&lt;p&gt;Both Twitter and Facebook, for instance, have locked former United States president Donald Trump out of their platforms. Twitter has also resisted fully complying with government censorship requests in India, spilling into an interesting policy tussle between the two entities. It is in the context of these changes, therefore, that we must consider the new rules.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What changed for the good?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One of the immediate standouts of these rules is in the more granular way in which it aims to approach the problem of intermediary regulation. The previous draft — and in general the entirety of the law — had continued to treat ‘intermediaries’ as a monolithic entity, entirely definable by section 2(w) of the IT Act, which in turn derived much of its legal language from the EU E-commerce Directive of 2000.&lt;/p&gt;
&lt;p&gt;Intermediaries in the directive were treated more like ‘simple conduits’ or dumb, passive carriers who did not play any active role in the content. While that might have been the truth of the internet when these laws and rules were first enacted, the internet today looks much different.&lt;/p&gt;
&lt;p&gt;Not only is there a diversification of services offered by these intermediaries, there is also a significant issue of scale, wielded by a few select players, either through centralisation or through the sheer size of their user bases. A broad, general mandate would, therefore, miss out on many of these nuances, leading to imperfect regulatory outcomes.&lt;/p&gt;
&lt;p&gt;The new rules, therefore, envisage three types of entities:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;There are the ‘intermediaries’ within the traditional, section 2(w) meaning of the IT Act. This would be the broad umbrella term for all entities that would fall within the ambit of the rules.&lt;/li&gt;&lt;li&gt;There are the ‘social media intermediaries’ (SMI), as entities, which enable online interaction between two or more users.&lt;/li&gt;&lt;li&gt;The rules identify ‘significant social media intermediaries’ (SSMI), which would mean entities with user-thresholds as notified by the Central Government.&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;The levels of obligations vary based on these hierarchies of classification. For instance, an SSMI would be obligated to a much higher standard of transparency and accountability towards its users. It would have to fulfil this by publishing six-monthly transparency reports, where it has to outline how it dealt with requests for content removal, how it deployed automated tools to filter content, and so on.&lt;/p&gt;
&lt;p&gt;I have previously argued how transparency reports, when done well, are an excellent way of understanding the breadth of government and social media censorships. Legally mandating this is then perhaps a step in the right direction.&lt;/p&gt;
&lt;p&gt;Some other requirements under this transparency principle include giving notice to users whose content has been disabled, allowing them to contest such removal, etc.&lt;/p&gt;
&lt;p&gt;One of the other rules from the older draft that had raised a significant amount of concern was the proactive filtering mandate, under which intermediaries were essentially required to filter for all unlawful content. This was problematic on two counts:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;Developments in machine learning technologies are simply not advanced enough to make this a possibility, which would mean that there would always be a chance that legitimate and legal content would get censored, leading to a general chilling effect on digital expression&lt;/li&gt;&lt;li&gt;The technical and financial burden this would impose on intermediaries would have impacted competition in the market.&lt;/li&gt;&lt;/ul&gt;
&lt;p&gt;The new rules seem to have lessened this burden: first, by reducing the requirement from being mandatory to being on a best-endeavour basis; and second, by reducing the ambit of ‘unlawful content’ to only include content depicting sexual abuse, child sexual abuse imagery (CSAM) and duplicates of already disabled or removed content.&lt;/p&gt;
&lt;p&gt;This specificity would be useful for better deployment of such technologies, since previous research has shown that it is considerably easier to train a machine learning tool on a corpus of CSAM or abuse than on more contextual, subjective matters such as hate speech.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What should go?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;That being said, it is concerning that the new rules choose to bring online curated content platforms (OCCPs) within the ambit of the law, through proposals for a three-tiered self-regulatory body and schedules outlining guidelines for the rating system these entities should deploy.&lt;/p&gt;
&lt;p&gt;In the last two years, several attempts have been made by the Internet and Mobile Association of India (IAMAI), an industry body consisting of representatives of these OCCPs, to bring about a self-regulatory code that fills in the supposed regulatory gap in the Indian law.&lt;/p&gt;
&lt;p&gt;It is not known if these stakeholders were consulted before the enactment of these provisions. Some of this framework would also apply to publishers of digital news portals.&lt;/p&gt;
&lt;p&gt;Noticeably, this entire chapter was also missing from the old draft, and introducing it in the final form of the law without due public consultations is problematic.&lt;/p&gt;
&lt;p&gt;Part III and onwards of the rules, which broadly deal with the regulation of these entities, therefore, should be put on hold and opened up for a period of public and stakeholder consultations to adhere to the true spirit of democratic participation.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The author would like to thank Gurshabad Grover for his editorial suggestions.&amp;nbsp;&lt;/em&gt;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad'&gt;https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>TorShark</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>IT Act</dc:subject>
    
    
        <dc:subject>Intermediary Liability</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Censorship</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2021-03-15T13:52:46Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>




</rdf:RDF>
