The Centre for Internet and Society
https://cis-india.org
The AI Task Force Report - The first steps towards India’s AI framework
https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework
<b>The Task Force on Artificial Intelligence was established by the Ministry of Commerce and Industry to leverage AI for economic benefits, and provide policy recommendations on the deployment of AI for India.</b>
<p style="text-align: justify; ">The blog post was edited by Swagam Dasgupta. <a class="external-link" href="http://cis-india.org/internet-governance/files/ai-task-force-report.pdf">Download <strong>PDF</strong> here</a></p>
<hr />
<p><span style="text-align: justify; ">The Task Force’s Report, released on March 21st 2018, is a result of the combined expertise of members from different sectors</span><a name="_ftnref1"></a><span style="text-align: justify; "> and examines how AI will benefit India. It sheds light on the Task Force’s perception of AI, the sectors in which AI can be leveraged in India, the challenges endemic to India and certain ethical considerations. It concludes with a set of policy recommendations for the government to leverage AI for the next five years. While acknowledging AI as a social and economic problem solver,</span><a name="_ftnref2"></a><span style="text-align: justify; "> the Report attempts to answer three policy questions:</span></p>
<ul>
<li>What are the areas where government should play a role?</li>
<li>How can AI improve quality of life and solve problems at scale for Indian citizens?</li>
<li>What are the sectors that can generate employment and growth by the use of AI technology?</li>
</ul>
<p><span style="text-align: justify; ">This blog will look at how the Task Force answered these three policy questions. In doing so, it gives an overview of salient aspects and reflects on the strengths and weaknesses of the Report.</span></p>
<h3><span>Sectors of Relevance and Challenges</span></h3>
<p style="text-align: justify; ">In order to navigate the outlined questions, the Report looks at ten sectors that it refers to as ‘domains of relevance to India’. Furthermore, it examines the use of AI, along with its major challenges and possible solutions, for each sector. These sectors are: Manufacturing, FinTech, Agriculture, Healthcare, Technology for the Differently-abled, National Security, Environment, Public Utility Services, Retail and Customer Relationship, and Education.<a name="_ftnref3"></a> While these ten domains are part of the 16 domains of focus listed on the AITF’s web page,<a name="_ftnref4"></a> it would have been useful to know the basis on which these sectors were identified. A particular strength of the identified sectors is the consideration of technology for the differently abled, as well as the recognition of the development of AI systems for spoken and sign languages in the Indian context.<a name="_ftnref5"></a></p>
<p style="text-align: justify; "><span>Some of the problems endemic to India that were recognized include infrastructural barriers, managing scale and innovation, and the collection, validation and distribution of data.</span><a name="_ftnref6"></a><span> The Task Force also noted the lack of consumer awareness and the inability of technology providers to explain benefits to end users as further challenges.</span><a name="_ftnref7"></a><span> The Task Force — by putting the onus on the individual — seems to hint that the impediment to the uptake of technology is the inability of individuals to understand its benefits, rather than aspects such as poor design, opacity, or misuse of data and insights. Furthermore, although the Report recognizes the challenges associated with data in India and highlights the importance of the quality and quantity of data, it overlooks the importance of data curation in creating reliable AI systems.</span><a name="_ftnref8"></a></p>
<p style="text-align: justify; ">Although the Report examines challenges to AI in each sector, it does not address all the challenges that need attention. For example, it fails to acknowledge challenges such as the lack of appropriate certification systems for AI-driven health systems and technologies.<a name="_ftnref9"></a> In the manufacturing sector, the Report fails to highlight contextual challenges associated with the use of AI, such as those arising from the deployment of autonomous vehicles as compared to the use of industrial robots.<a name="_ftnref10"></a></p>
<p style="text-align: justify; ">On the use of AI in retail, the Report, while examining consumer data and the regulatory policies governing it, identified issues relating to definitions, discrimination, data breaches, digital products, safety awareness, and reporting standards.<a name="_ftnref11"></a> In this, the Report is limited in its understanding of what categories of data can lead to discrimination, and restricts mechanisms for transparency and accountability to data breaches. The Report could also have been more forward looking in its position on security, including security by design and security by default. Furthermore, these issues were noted only in the context of the retail sector, when ideally they should have been discussed across all sectors.</p>
<p style="text-align: justify; ">The challenges for utilizing AI for national security could have been examined beyond cost and capacity to include associated ethical and legal challenges such as the need for legal backing. The use of AI in national security demands clear accountability and oversight as it is a ground for legitimate state interference with fundamental rights such as privacy and freedom of expression. As such, there is a need for human rights impact assessments, as well as a need for such uses to be aligned with international human rights norms. Government initiatives that allow country wide surveillance and AI decisions based on such data should ideally be implemented only after a comprehensive privacy law is in place and India’s surveillance regime has been revisited.<a name="_ftnref12"></a></p>
<p style="text-align: justify; ">Recognizing the potential of AI for the benefit of the differently abled is one of the key takeaways from this section of the Report. Furthermore, it also brings in the need for AI inclusivity. AI-based natural language generation and translation systems have the potential to help the large number of youth who are disabled or deprived.<a name="_ftnref13"></a> Therefore, AI could have a large positive impact through inclusive growth and empowerment.</p>
<p style="text-align: justify; ">Although the Report examines each of the ten domains in an attempt to provide an insight into the role the government can play, there is a lack of clarity about the role that each department is playing and will play with respect to AI. Even the section which lays down the relevant ministries for each of the ten domains fails to include key ministries and departments. For example, the Report does not identify the Ministry of Education, nor does it list the Ministry of Law for national security. The Report could also have identified government departments which would be responsible for regulation and standardization, such as the Medical Council of India (healthcare), the CII (manufacturing and retail), and the RBI (FinTech). The Report also does not recognize other developments around AI emerging out of the government. For example, the Draft National Digital Communications Policy (published on May 1, 2018) seeks to empower the Department of Telecommunications to provide a roadmap for AI and robotics.<a name="_ftnref14"></a> Along similar lines, the Department of Defence Production also created a task force earlier this year to study the use of AI to accelerate military technology and economic growth.<a name="_ftnref15"></a> The government should look at building a cohesive AI government body, or clearly delineating the role of each ministry, in order to ensure harmonization going forward.</p>
<h3>Areas in need of Government Intervention</h3>
<p style="text-align: justify; ">The Report also lists the grand challenges where government intervention is required. These include data collection and management, and the need for widespread expertise contributing to research, innovation, and response. However, while highlighting the need for AI experts from diverse backgrounds, it fails to include experts from law and policy in the discussion.<a name="_ftnref16"></a> And while identifying manufacturing, agriculture, healthcare and public utilities as areas where government intervention is needed, the Report fails to examine national security as a sector requiring such intervention, treating it only as a domain of importance to India.</p>
<h3><span>Participation in International Forums</span></h3>
<p style="text-align: justify; ">Another relevant concern that the Report underscores is India’s scarce participation, whether by researchers, AI developers or the government, in global discussions around AI. The Report states that although efforts were being made by Indian universities to increase their presence at international AI conferences, they were lagging behind other nations. On the subject of participation by the government, it recommends a regular presence in international AI policy forums, emphasising the need for India’s active participation in global conversations around AI and in international rulemaking.</p>
<h3><span>Key Enablers to AI</span></h3>
<p style="text-align: justify; ">The Report, while analysing the key enablers for AI deployment in India, states that positive societal attitudes will be the driving force behind the proliferation of AI.<a name="_ftnref17"></a> However, relying on positive societal attitudes alone will not increase trust in AI; steps such as making the algorithms used by public bodies public and enacting a data protection law will be important in enabling trust, beyond merely highlighting success stories.</p>
<p style="text-align: justify; "><strong>Data and Data Marketplaces</strong></p>
<p style="text-align: justify; ">While the Report identifies data as a challenge where government intervention is needed, it also points to the Aadhaar ecosystem as an enabler. It states that Aadhaar will help in the proliferation of AI in three ways: one, as a creator of jobs related to the collection and digitization of data; two, as a collector of reliable data; and three, as a repository of Indian data. However, since the very constitutionality of Aadhaar is yet to be determined by the Supreme Court,<a name="_ftnref18"></a> the task force should have used caution in identifying Aadhaar as a definitive solution, especially while asserting that Aadhaar, along with the Supreme Court judgement, has created adequate frameworks to protect consumer data. Additionally, the Task Force should have recognized the various concerns that have been voiced about Aadhaar, particularly in the context of the case before the Supreme Court.<a name="_ftnref19"></a></p>
<p style="text-align: justify; "><span>This section also proposes the creation of a Digital Data Marketplace. A data marketplace needs to be framed carefully so as to not create a situation where privacy becomes a right available to only those who can afford it.</span><a name="_ftnref20"></a><span> It is concerning that the discussion on data protection and privacy in the Report is limited to policies and guidelines for businesses and not centered around the individual.</span></p>
<p style="text-align: justify; "><span><strong>Innovation and Patents</strong></span></p>
<p style="text-align: justify; ">The Report states that Indian startups working in the field of AI must be encouraged, and that industry collaborations and funding must be taken up as a policy measure. One of the ways in which this could be achieved is by encouraging innovation, and one way to do so is by adding a commercial incentive, such as through IP rights. Although the Report calls for a stronger IP regime that protects and incentivises innovation, it remains ambiguous as to which aspect of IP rights — patents, trade secrets or copyrights — needs significant changes.<a name="_ftnref21"></a> If the Report is specifically advocating for stronger patent rights in order to match those of China and the US, then it suggests that the task force fails to understand the finer aspects of Indian patent law and the history behind India’s stance on patenting. This includes the fact that Indian patent law excludes algorithms from being patented. Indian patent law, by providing a higher threshold for patenting computer related inventions (CRIs), ensures that only truly innovative patents are granted.<a name="_ftnref22"></a> Given the controversies over CRIs that have dotted the Indian patent landscape,<a name="_ftnref23"></a> the task force would have done well to provide more clarity on the ‘how’ and ‘why’ of patenting in this sector, if that is the intent of this suggestion.</p>
<h3><span>Ethical AI framework</span></h3>
<p style="text-align: justify; "><strong>Responsible AI</strong></p>
<p style="text-align: justify; ">In terms of establishing an ethical AI framework, the Task Force suggests measures such as making AI explainable, transparent, and auditable for biases. The Report notes that, with the increase in interaction between humans and AI, new standards need to be set for the deployment of AI, as well as industrial standards for robots. However, the Report does not go into detail on how AI could cause further bias based on identifiers such as gender and caste, or on the myriad concerns around privacy and security. This is especially a concern given that the Report envisions widespread use of AI in all major sectors. The Report looks at data as both a challenge and an enabler, but fails to explain the various ethical considerations behind the collection and use of data in the context of privacy, security and surveillance, or to account for unintended consequences. In laying out the ethical considerations associated with AI, the Report does not distinguish between the use of AI by the public sector and the private sector. As the government is responsible for ensuring the rights of citizens and holds more power than the citizenry, the public sector needs to be more accountable in its use of AI. This is especially so in cases where AI is proposed to be used for sovereign functions such as national security.</p>
<p style="text-align: justify; "><strong>Privacy and Data</strong></p>
<p style="text-align: justify; ">The Report also recognises the significance of the implementation of the Aadhaar Act,<a name="_ftnref24"></a> the privacy judgement<a name="_ftnref25"></a> and the proposed data protection laws<a name="_ftnref26"></a> for the development and use of AI in India. Yet, the Report does not seem to recognize the importance of a robust and multi-faceted privacy framework, as it assumes that the Aadhaar Act, the Supreme Court judgement on privacy, and a potential privacy law have already created a basis for the safe and secure utilization and sharing of customer data.<a name="_ftnref27"></a> Although the Report attempts an expansive examination of various aspects of AI for India, it unfortunately does not look in depth at current issues and debates around AI, privacy and ethics, and it makes policy recommendations without appearing to fully reflect on their implementation and potential impact. Similar to the discussion paper by the Niti Aayog,<a name="_ftnref28"></a> this Report does not consider emerging principles of data protection, such as the right to explanation and the right to opt out of automated processing, which directly relate to AI.<a name="_ftnref29"></a> Furthermore, there is a lack of discussion on principles such as data minimisation and purpose limitation, which some big data and AI proponents argue against.<a name="_ftnref30"></a></p>
<p style="text-align: justify; "><span><strong>Liability</strong></span></p>
<p style="text-align: justify; ">On the question of liability, the Report only states that specific liability mechanisms need to be worked out for certain categories of machines. It does not address the questions of what liability should be applicable to all AI systems and of where the duty of care lies, not only in the case of robots but also in the case of automated decision making. Thus, there is a need for further thinking on mechanisms for determining liability and on how these could apply to different types of AI (deep learning models and other machine learning models) and AI systems.</p>
<p style="text-align: justify; "><strong>AI and Employment </strong></p>
<p style="text-align: justify; ">On the topic of jobs and employment, the Report states that AI will create more jobs than it displaces, as a result of the increase in the number of companies and avenues created by AI technologies. Additionally, the Report provides examples of jobs where AI could replace humans (autonomous drivers, industrial robots, etc.) but does not go as far as envisioning what jobs could be created directly from this replacement. Though the Report recognizes emerging forms of work, such as crowdsourcing platforms like MTurk<a name="_ftnref31"></a>, it fails to examine the impact of such models of work on workers and on traditional labour market structures and processes.<a name="_ftnref32"></a> Going forward, it will be important that the government and the private sector undertake the necessary steps to ensure that fair, protected, and fulfilling jobs are created simultaneously with the adoption of AI. This will include revisiting national and organizational skilling programmes, labour laws, social benefit schemes, and relevant economic policies, and exploring best practices with respect to the adoption and integration of AI in work.</p>
<p style="text-align: justify; "><strong>Education and Re-skilling</strong></p>
<p style="text-align: justify; ">The task force emphasised the need for a change in the education curriculum, as well as the need to reskill the labour force to ensure an AI-ready future. This level of reskilling will be a massive effort, and a thorough review and audit of existing skilling programmes in India is needed before new skilling programmes are established and financed. The Report also clarifies that the statistics used were based on a study of the IT component of the industry, and that a similar study is required to analyse AI’s effect on the automation component.<a name="_ftnref33"></a> Going forward, there is a need for a comprehensive study of labour intensive sectors, and of the formal and informal sectors, to develop evidence-based policy responses.</p>
<h3><span>Policy Recommendations</span></h3>
<p style="text-align: justify; ">The Task Force, in its policy recommendations, notes that the successful adoption of AI in India will depend on three factors: people, process and technology. However, it does not explain these three factors any further.</p>
<p style="text-align: justify; "><strong>National Artificial Intelligence Mission</strong></p>
<p style="text-align: justify; ">The most significant suggestion made in the Report is the establishment of the National Artificial Intelligence Mission (N-AIM) — a centralised nodal agency for coordinating and facilitating research and collaboration, and for providing economic impetus to AI startups.<a name="_ftnref34"></a> The mission, with a budget allocation of Rs 1,200 crore over five years, aims, among other things, to look at various ways to encourage AI research and deployment.<a name="_ftnref35"></a> Some of the suggestions include targeting and prototyping AI systems and setting up a generic AI test bed. These suggestions seem to draw inspiration from initiatives in other countries, such as the US DARPA Challenge<a name="_ftnref36"></a> and Japan’s sandbox for self driving trucks.<a name="_ftnref37"></a> The establishment of N-AIM is a welcome step to encourage AI research and development on a national scale, and the availability of public funds will encourage more of both.<a name="_ftnref38"></a> Additionally, government engagement in AI projects has thus far been fragmented,<a name="_ftnref39"></a> and a centralised body will presumably bring about better coordination and harmonization. Some of the initiatives, such as the Capture the Flag competition,<a name="_ftnref40"></a> which centres on the provision of real datasets to catalyze innovation, will need to be implemented with appropriate safeguards in place.</p>
<p style="text-align: justify; "><strong>Other recommendations</strong></p>
<p style="text-align: justify; ">There are other suggestions that are problematic — particularly that of funding “an inter-disciplinary large data integration center in pilot mode to develop an autonomous AI Machine that can work on multiple data streams in real time and provide relevant information and predictions to public across all domains.”<a name="_ftnref41"></a> Before such a project is developed and implemented there are a number of factors where legal clarity is required; a few being: data collection and use, accuracy and quality of the AI system. There is also a need to ensure that bias and discrimination have been accounted for and fairness, responsibility and liability have been defined with consideration that this will be a government driven AI system. Additionally, such systems should be transparent by design and should include redress mechanisms for potential harms that may arise. This can be through the presence of a human in the loop, or the existence of a kill switch. These should be addressed through ethical principles, standards, and regulatory frameworks.</p>
<p style="text-align: justify; ">The recommendations propose establishing operation standards for data storage and privacy, communication standards for autonomous systems, and standards to allow for interoperability between AI based systems. A significant lacuna in this list is the development of safety, accuracy, and quality standards for AI algorithms and systems.</p>
<p style="text-align: justify; ">Similarly, although the proposed public private partnership model for research and startups is a good idea, this initiative should be undertaken only after questions such as the implications of liability, ownership of IP and data, and the exclusion of critical sectors are thought through.</p>
<p style="text-align: justify; ">Furthermore, the suggestion to ‘fund a national level survey on identification of cluster of clean annotated data necessary for building effective AI systems’<a name="_ftnref42"></a> needs to recognize the existing initiatives around open data, or to use them as a starting point. The Report also does not clarify whether this survey would involve identifying data.</p>
<h3><span>Conclusion</span></h3>
<p style="text-align: justify; ">The inconspicuous release of the Report, as well as the lack of a call for public comments,<a name="_ftnref43"></a> means that the Report does not incorporate or reflect the sentiments of the public, nor does it draw upon the expertise that exists in India on the topic of policies around emerging technologies, which will have a pervasive and wide effect on society. The need for multi-stakeholder engagement and input cannot be overstated. Nonetheless, the Report of the Task Force is a welcome step towards a definitive AI policy. The task force has attempted to answer the three policy questions keeping people, process and technology in mind; however, it could have provided greater detail about these factors. The Report, which is meant for a wider audience, would have done well to provide greater detail, while also providing clarity on technical terms. On a definitional plane, a list of the technologies that the task force perceived as AI for this Report could also have helped keep it grounded in possible and plausible five-year recommendations.</p>
<p style="text-align: justify; "><span>Compared to the recent Niti Aayog Discussion Paper</span><a name="_ftnref44"></a><span>, this Report misses out on a detailed explanation of AI and ethics; however, it does spend considerable time on education and on the use of AI for the differently abled. Additionally, the Report’s statements on the democratization of development and equal access, as well as on assigning ownership and framing transparent rules for the usage of infrastructure, are a positive step towards making AI inclusive. Overall, the Report is a progressive step towards laying down India’s path forward in the field of Artificial Intelligence. The emphasis on India’s involvement in international rulemaking gives India an opportunity to be a leader of best practice in international forums, by adopting forward looking and human rights respecting practices. Whether India will also become a strong contender in the AI race, with policies favouring the development of socio-economically beneficial and ethical AI-backed industries and services, remains to be seen.</span></p>
<p style="text-align: justify; "><a name="_ftn1"></a><span> The Task Force consists of 18 members in total. Of these, 11 members are from the field of AI technology, both research and industry; three are from the civil services; one is from healthcare research; one has an intellectual property law background; and two are from a finance background. The specializations of the members are not limited to one area, as the members have experience or education in various areas relevant to AI. </span><a href="https://www.aitf.org.in/">https://www.aitf.org.in/</a><span> There is a notable lack of members from civil society. It may also be noted that only 2 of the 18 members are women.</span></p>
<p style="text-align: justify; "><a name="_ftn2"></a> The Report on the Artificial Intelligence Task Force, Pg. 1, <span>http://dipp.nic.in/sites/default/files/Report_of_Task_Force_on_ArtificialIntelligence_20March2018_2.pdf</span></p>
<p style="text-align: justify; "><a name="_ftn3"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn4"></a> The Artificial Intelligence Task Force https://www.aitf.org.in/</p>
<p style="text-align: justify; "><a name="_ftn5"></a> The Report on the Artificial Intelligence Task Force, Pg. 8</p>
<p style="text-align: justify; "><a name="_ftn6"></a> The Report on the Artificial Intelligence Task Force, Pg. 9,10.</p>
<p style="text-align: justify; "><a name="_ftn7"></a> The Report on the Artificial Intelligence Task Force, Pg. 9</p>
<p style="text-align: justify; "><a name="_ftn8"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn9"></a> Artificial Intelligence in the Healthcare Industry in India https://cis-india.org/internet-governance/files/ai-and-healtchare-report</p>
<p style="text-align: justify; "><a name="_ftn10"></a> Artificial Intelligence in the Manufacturing and Services Sector, https://cis-india.org/internet-governance/files/AIManufacturingandServices_Report_02.pdf</p>
<p style="text-align: justify; "><a name="_ftn11"></a> The Report on the Artificial Intelligence Task Force, Pg. 21.</p>
<p style="text-align: justify; "><a name="_ftn12"></a> Submission to the Committee of Experts on a Data Protection Framework for India, Centre for Internet and Society https://cis-india.org/internet-governance/files/data-protection-submission</p>
<p style="text-align: justify; "><a name="_ftn13"></a> The Report on the Artificial Intelligence Task Force, Pg. 22</p>
<p style="text-align: justify; "><a name="_ftn14"></a> Draft National Digital Communications Policy-2018, http://www.dot.gov.in/relatedlinks/draft-national-digital-communications-policy-2018</p>
<p style="text-align: justify; "><a name="_ftn15"></a> Task force set up to study AI application in military, https://indianexpress.com/article/technology/tech-news-technology/task-force-set-up-to-study-ai-application-in-military-5049568/</p>
<p style="text-align: justify; "><a name="_ftn16"></a> It is not just technical experts that are needed; ethical and legal experts, as well as domain experts, need to be part of the decision making process.</p>
<p style="text-align: justify; "><a name="_ftn17"></a> The Report on the Artificial Intelligence Task Force, Pg. 31</p>
<p style="text-align: justify; "><a name="_ftn18"></a>Constitutional validity of Aadhaar: the arguments in Supreme Court so far, http://www.thehindu.com/news/national/constitutional-validity-of-aadhaar-the-arguments-in-supreme-court-so-far/article22752084.ece</p>
<p style="text-align: justify; "><a name="_ftn19"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn20"></a> CIS Submission to TRAI Consultation on Free Data http://trai.gov.in/Comments_FreeData/Companies_n_Organizations/Center_For_Internet_and_Society.pdf</p>
<p style="text-align: justify; "><a name="_ftn21"></a> The Report on the Artificial Intelligence Task Force, Pg. 30</p>
<p style="text-align: justify; "><a name="_ftn22"></a> Section 3(k) of the Patents Act provides that a mathematical or business method, or a computer programme per se, or an algorithm, cannot be patented.</p>
<p style="text-align: justify; "><a name="_ftn23"></a> Patent Office Reboots CRI Guidelines Yet Again: Removes “novel hardware” Requirement, https://spicyip.com/2017/07/patent-office-reboots-cri-guidelines-yet-again-removes-novel-hardware-requirement.html</p>
<p style="text-align: justify; "><a name="_ftn24"></a> The Report on the Artificial Intelligence Task Force, Pg. 37</p>
<p style="text-align: justify; "><a name="_ftn25"></a>The Report on the Artificial Intelligence Task Force, Pg. 7</p>
<p style="text-align: justify; "><a name="_ftn26"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn27"></a> The Report on the Artificial Intelligence Task Force, Pg. 8</p>
<p style="text-align: justify; "><a name="_ftn28"></a> National Strategy for Artificial Intelligence: <a href="http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf">http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf</a></p>
<p style="text-align: justify; "><a name="_ftn29"></a> Meaningful information and the right to explanation, Andrew D Selbst and Julia Powles, International Data Privacy Law, Volume 7, Issue 4, 1 November 2017, Pages 233–242</p>
<p style="text-align: justify; "><a name="_ftn30"></a> The Principle of Purpose Limitation and Big Data, https://www.researchgate.net/publication/319467399_The_Principle_of_Purpose_Limitation_and_Big_Data</p>
<p style="text-align: justify; "><a name="_ftn31"></a> M-Turk https://www.mturk.com/</p>
<p style="text-align: justify; "><a name="_ftn32"></a> For example a lesser threshold of minimum wages, no job secuirity etc, https://blogs.scientificamerican.com/guilty-planet/httpblogsscientificamericancomguilty-planet20110707the-pros-cons-of-amazon-mechanical-turk-for-scientific-surveys/</p>
<p style="text-align: justify; "><a name="_ftn33"></a> The Report on the Artificial Intelligence Task Force, Pg. 41</p>
<p style="text-align: justify; "><a name="_ftn34"></a> Report of Artificial Intelligence Task Force Pg, 46, 47</p>
<p style="text-align: justify; "><a name="_ftn35"></a> ibid.</p>
<p style="text-align: justify; "><a name="_ftn36"></a>The DARPAChallenge https://www.darpa.mil/program/darpa-robotics-challenge</p>
<p style="text-align: justify; "><a name="_ftn37"></a>Japan may set regulatory sandboxes to test drones and self driving vehicles http://techwireasia.com/2017/10/japan-may-set-regulatory-sandboxes-test-drones-self-driving-vehicles/</p>
<p style="text-align: justify; "><a name="_ftn38"></a> Mariana Mazzucato in her 2013 book The Entrepreneurial State, argued that it was the government that drives technological innovation. In her book she stated that high-risk discovery and development were made possible by government spending, which the private enterprises capitalised once the difficult work was done.</p>
<p style="text-align: justify; "><a name="_ftn39"></a><a href="https://tech.economictimes.indiatimes.com/news/technology/govt-of-karnataka-launches-centre-of-excellence-for-data-science-and-artificial-intelligence/61689977">https://tech.economictimes.indiatimes.com/news/technology/govt-of-karnataka-launches-centre-of-excellence-for-data-science-and-artificial-intelligence/61689977</a>,https://analyticsindiamag.com/amaravati-world-centre-for-ai-data/</p>
<p style="text-align: justify; "><a name="_ftn40"></a> The Report on the Artificial Intelligence Task Force, Pg. 47</p>
<p style="text-align: justify; "><a name="_ftn41"></a> Report of Artificial Intelligence Task Force Pg. 49</p>
<p style="text-align: justify; "><a name="_ftn42"></a> The Report on the Artificial Intelligence Task Force, Pg. 47</p>
<p style="text-align: justify; "><a name="_ftn43"></a> The AI task force website has a provision for public comments although it is only for the vision and mission and the domains mentioned in the website.</p>
<p style="text-align: justify; "><a name="_ftn44"></a>National Strategy for Artificial Intelligence: <a href="http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf">http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework'>https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework</a>
</p>
Elonnai Hickok, Shweta Mohandas and Swaraj Paul Barooah | Internet Governance | Artificial Intelligence | Privacy | 2018-06-27T14:32:56Z | Blog Entry
Technology Foresight Group Tandem Research's AI policy lab on the theme AI and Environment
https://cis-india.org/internet-governance/news/technology-foresight-group-tandem-researchs-ai-policy-lab-on-the-theme-ai-and-environment
<b>Shweta Mohandas attended a roundtable discussion on artificial intelligence and environment held at Tandem Research's office in Goa on October 5, 2018. She also made the framing intervention for the first session by addressing the question - What are the likely ethical conundrums, and plausible unintended consequences of the use of AI for sustainability?</b>
<dl style="text-align: justify; ">
<p>Conversations at the lab clustered around four main themes:</p>
<p><b>AI in the Anthropocene</b><br />What are the most critical sustainability challenges in India, and can AI be useful in addressing them? What are the likely ethical conundrums, and plausible unintended consequences, of the use of AI for sustainability?<br /><br /><b>Conservation after nature</b><br />What AI interventions are possible to foster better conservation, and can AI-driven citizen science initiatives improve people’s relationship with the natural world? Can AI help imagine a more dynamic and proximate co-existence with other species, after nature?<br /><br /><b>Water ecosystems</b><br />Can AI help us imagine new paradigms of water control and infrastructure that are more dynamic and ‘mirror’ the complexity of natural water systems? Will AI lead to decentralization and empowerment of water users, or will it result in centralized models and a loss of power and agency for water users?<br /><br /><b>Future Cities</b><br />Can AI systems be used to foster sustainability practices around mobility, energy, and waste, and help better plan development zones and create early warning systems? What systems can be built to encourage citizen participation in solving sustainability problems and increase the transparency and accountability of municipal governments?</p>
</dl>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/technology-foresight-group-tandem-researchs-ai-policy-lab-on-the-theme-ai-and-environment'>https://cis-india.org/internet-governance/news/technology-foresight-group-tandem-researchs-ai-policy-lab-on-the-theme-ai-and-environment</a>
</p>
Admin | Internet Governance | Artificial Intelligence | Privacy | 2018-10-31T01:10:34Z | News Item
Talks at National University of Juridical Sciences Today
https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today
<b>Arindrajit Basu delivered two lectures at the National University of Juridical Sciences on September 18, 2019. </b>
<p style="text-align: justify; ">The first one was part of a symposium being conducted by the soon to be set up Intellectual Property and Technology Law Centre. I spoke on "Conceptualising India's Digital Policy Vision" The other speaker today was Mr. Supratim Chakraborty (Partner, Khaitan&Co.) Tomorrow's speakers are Prof. Mahendra Kumar Bhandan and Nikhil Narendran (Partner, Trilegal)</p>
<p style="text-align: justify; "><b>Abstract</b></p>
<p style="text-align: justify; ">The past year has seen vigorous activity on the domestic data governance policy front in India. Across key issues including intermediary liability, data localisation and e-commerce, the government has rolled out a patchwork of regulatory policies that has resulted in battle lines being drawn by governments, industry and civil society actors both in India and across the globe. The Data Protection Bill is set to be tabled in the next session of Parliament amidst supposed disagreement among policy-makers on key provisions, including data localization. The draft e-commerce policy and Chapter 4 of the Economic Survey refer to the concepts of ‘community data’ and ‘data as public good’ respectively. Artifiicial Intelligence is also the new buzz word among policy-making circles and industry players alike.<br /><br />The implementation of each of these concepts have important implications for individual privacy, the monetisation of data by (foreign tech companies) and the harnessing of-as the e-commerce policy puts it-India’s data for India’s development. Meanwhile, at international forums such as the G20, India has partnered up with its BRICS allies to emphasize the notion of ‘data sovereignty’ or the right of each country to govern data within its jurisdiction without external interference.<br />In his talk, Basu unpacked each of these policies and followed up with a discussion on what these developments meant for Indian citizens and for India’s role in the multilateral global order.</p>
<p style="text-align: justify; ">The second one was on 'Constitutionalizing Artificial Intelligence' conducted by the Constitutional Law Society. Here, I drew from some preliminary findings from a paper I am working on with Elonnai and Amber.</p>
<p style="text-align: justify; "><b>Abstract</b></p>
<p style="text-align: justify; ">The use of big data and algorithmic decision-making has been touted world over as a means of augmenting human capacities, removing bureaucratic fetters and benefiting society. Yet, with concerns arising around bias, fairness and a lack of algorithmic accountability, an entirely new domain of discourse on data justice has emerged - underscoring the idea that algorithms not only have the potential to exacerbate entrenched structural inequality but could also create and modulate new forms of injustice for the vulnerable sections of society.</p>
<p style="text-align: justify; "><span>There is a need for a reflexive turn in the debate on data justice that adequately considers the broader narrative and entrenched inequality in the ecosystem. </span><span>Transformative constitutionalism is a new brand of scholarship in comparative constitutional law which celebrates the crucial role of the state and the judiciary in bringing about emancipatory change and rooting out structural inequality.</span></p>
<p style="text-align: justify; ">Originally conceptualized as a Global South concept designed as a counter-model to the individual rights-driven model of Northern Constitutions, scholars have now identified emancipatory provisions in several western constitutions such as Germany. India’s constitution is one such example. The origins of constitutional order in India were designed to “bring the alien and powerful machine like that of the state under the control of human will” and to eliminate the inequality of “status, facilities and opportunities.” <br /><br />What is the relevance of India's constitutional ethos in the regulation of modern day data driven decision-making? How can policy-makers use constitutional tenets to mitigate structural injustice and transform the bearings of 21st century Indian society?</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today'>https://cis-india.org/internet-governance/news/talks-at-national-university-of-juridical-sciences-today</a>
</p>
Admin | Industry 4.0 | Internet Governance | Artificial Intelligence | 2019-09-20T14:45:35Z | News Item
Speculative Futures Lab on Artificial Intelligence in Media, Entertainment, and Gaming
https://cis-india.org/internet-governance/news/speculative-futures-lab-on-artificial-intelligence-in-media-entertainment-and-gaming
<b>Pranav Manjesh Bidare attended the event organised by Quicksand between November 16 and 18, 2018 in Bangalore as a panelist.</b>
<p style="text-align: justify; ">Pranav was a panelist in the session discussing "Ethics of AI in the Creative spaces" on November 17, alongside Urvashi Aneja, and Abishek Reddy from Tandem Research. For more info <a class="external-link" href="http://cis-india.org/internet-governance/files/Quicksand%20AI%20Futures%20Lab.pdf">see this</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/speculative-futures-lab-on-artificial-intelligence-in-media-entertainment-and-gaming'>https://cis-india.org/internet-governance/news/speculative-futures-lab-on-artificial-intelligence-in-media-entertainment-and-gaming</a>
</p>
Admin | Internet Governance | Artificial Intelligence | 2018-12-05T03:12:58Z | News Item
Society 5.0 and Artificial Intelligence with a Human Face
https://cis-india.org/internet-governance/news/society-5-0-and-artificial-intelligence-with-a-human-face
<b>On 10 May 2019 Radhika Radhakrishnan attended a stakeholder's roundtable consultation on "Society 5.0 and Artificial Intelligence with a Human Face", organized by the Indian Council for Research on International Economic Relations (ICRIER) at India Habitat Centre, New Delhi. The event aimed to chart a roadmap for India’s participation at the G20, under the Japanese Presidency.</b>
<p style="text-align: justify; ">The agenda can be <a class="external-link" href="http://icrier.org/newsevents/seminar-details/?sid=460">found here</a>. Radhika's inputs were primarily focused on the feminist and gender implications of publicly deployed AI models, challenges and opportunities for academic AI-focused research in the Global South, recommendations for AI capacity building and skilling in the Global South, and regulation of black-box AI.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/society-5-0-and-artificial-intelligence-with-a-human-face'>https://cis-india.org/internet-governance/news/society-5-0-and-artificial-intelligence-with-a-human-face</a>
</p>
Admin | Internet Governance | Artificial Intelligence | 2019-05-14T14:51:56Z | News Item
Roundtable on Consumer Experiences with New Technologies in APAC (Singapore)
https://cis-india.org/internet-governance/news/roundtable-on-consumer-experiences-with-new-technologies-in-apac-singapore
<b>Arindrajit Basu was invited to a Roundtable on Artificial Intelligence: Consumer Experiences with New Technologies (APAC region).</b>
<p style="text-align: justify; ">The event <span>was hosted by Consumer International and delivered at Google, Singapore on March 26, 2019. CIS research and Arindrajit's inputs have been quoted in a report by the same name which will be released by Consumer International within the course of the next month.</span></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/roundtable-on-consumer-experiences-with-new-technologies-in-apac-singapore'>https://cis-india.org/internet-governance/news/roundtable-on-consumer-experiences-with-new-technologies-in-apac-singapore</a>
</p>
Admin | Internet Governance | Artificial Intelligence | 2019-04-15T10:25:57Z | News Item
Roundtable on Artificial Intelligence & Healthcare
https://cis-india.org/internet-governance/events/roundtable-on-artificial-intelligence-and-healthcare
<b>Centre for Internet & Society (CIS) is organizing a roundtable on artificial intelligence (AI) and healthcare at 'The Energy and Resources Institute' (TERI) in Bengaluru on November 30, 2017 from 2 p.m. to 5 p.m. The roundtable seeks to discuss the various issues and challenges surrounding the implementation of AI and related technologies in the Indian healthcare sector.</b>
<p style="text-align: justify; ">The Indian healthcare industry, powered by Artificial Intelligence, is moving into a new era of increased innovation and independence. With multiple new healthcare start-ups and large ICT companies such as Microsoft, IBM, and Google offering AI solutions to healthcare challenges in the country, it is evident that AI is attempting to enhance the accessibility, affordability, quality and awareness of healthcare in India. Major target areas sought to be enhanced by use of AI in healthcare include addressing the uneven ratio of skilled doctors to patients and making doctors more efficient at their jobs, delivery of personalized and high-quality healthcare to rural areas, and training doctors and nurses in complex procedures.</p>
<p style="text-align: justify; ">Through the application of machine learning, data mining, natural language processing (NLP), and advanced analytics, AI can help doctors in speedy diagnosis of diseases. AI is also mobilised as ‘smart advisors’ or virtual humans who are capable of making informed decisions by better comprehending data and information through sensing interfaces and analytics, in various forms.</p>
<p style="text-align: justify; ">Some of these forms include ‘customer service agents’ that can expedite simple tasks like appointment scheduling, or more complex decisions like selecting health plan benefits, ‘clinicians’ that can help with primary screening in understaffed rural areas possibly substituting for human labour, and ‘cognitive agents’ that can efficiently manage existing clinical knowledge alongside physicians, nurses and researchers, thereby reducing the cognitive load on humans. AI based Indian healthcare start-ups such as SigTuple, Aindra, Ten3T, Touchkin and many others are offering a range of solutions including automation of medical diagnosis, automated analysis of medical tests, detection and screening of diseases, wearable sensor based medical devices and monitoring equipment, patient management systems, predictive healthcare diagnosis and disease prevention.</p>
<p style="text-align: justify; ">However, AI in healthcare raises many potential concerns, a common one being the lack of comprehensive, representative, interoperable, and clean data - a challenge that is beginning to be addressed through the Electronic Health Records Standards developed by the Ministry of Health and Family Welfare in 2016 by the Ministry of Health and Family Welfare. Other major challenges include patient adoption and the need for personal interaction with doctors, concerns over mass-scale job losses, distrust in technology, and ethical concerns.</p>
<p style="text-align: justify; ">It is imperative to note that implementing AI in healthcare, which is bound to disrupt it, does not imply replacing doctors but augmenting their efforts to create a more efficient healthcare landscape in the country. A harmonious collaboration of man and machine is expected to bring about a meaningful and long-lasting impact and stakeholders should be prepared to adapt to this change and the challenges that come with it.</p>
<hr />
<h3 style="text-align: justify; ">Roundtable Agenda</h3>
<p dir="ltr"><span>Thursday, November 30, 2017, 2:00pm - 5:00pm </span></p>
<p dir="ltr"><span>2:00 - 2:30: Introduction and setting the scene </span></p>
<p dir="ltr"><span>2:30 - 3:30: Discussion on the AI landscape in health in India: </span></p>
<ul>
<li><span>Manner and extent of integration of AI into products/services of healthcare companies.</span></li>
<li><span>Relevant stakeholders and their roles in implementing AI into products/services of healthcare companies.</span></li>
<li><span>Future of AI and related technologies in the healthcare sector.</span></li>
</ul>
<p dir="ltr" style="text-align: justify; "><span>3:30 - 4:30: Discussion on challenges and solutions towards regulating AI in India: </span></p>
<ul>
<li dir="ltr" style="list-style-type:disc; "><span>Challenges faced in the conception and implementation of the AI product/service, and reasons for such challenges.</span></li>
<li dir="ltr" style="list-style-type:disc; "><span>Regulatory provisions for implementation of AI in healthcare products/services under the existing laws, and need for reforms.</span></li>
<li dir="ltr" style="list-style-type:disc; "><span>Challenges posed by AI to existing policy and regulatory frameworks in the Indian as well as the global context, and possible solutions.</span></li>
</ul>
<hr />
<p><a class="external-link" href="http://cis-india.org/internet-governance/files/a-i-and-manufacturing-and-services">Click to download the invite</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/events/roundtable-on-artificial-intelligence-and-healthcare'>https://cis-india.org/internet-governance/events/roundtable-on-artificial-intelligence-and-healthcare</a>
</p>
Admin | Event | Artificial Intelligence | Healthcare | 2018-01-02T13:49:14Z | Event
Roundtable on AI and Finance in India
https://cis-india.org/internet-governance/events/roundtable-on-ai-and-finance-in-india
<b>Centre for Internet & Society (CIS) will hold a roundtable on artificial intelligence and finance in India on Wednesday, February 7, 2018 in association with HasGeek and the 50p Conference. The roundtable will take place from 2 p.m. to 5 p.m. at TERI (The Energy and Resources Institute) in Domlur, Bengaluru.</b>
<p style="text-align: justify; ">We invite you all to participate in this roundtable to share and build knowledge about trajectories of AI deployment across sub-sectors of banking in India and the emergent regulatory and public policy concerns.</p>
<p style="text-align: justify; ">The objective of the roundtable is to bring together various actors active across the fields of artificial intelligence, machine learning, cognitive computing, financial technologies,and big data credit scoring and online lending, to discuss pressing public policy issues in regards to the utilisation and implementation of AI in the banking and finance sectors of India.</p>
<p style="text-align: justify; ">These sectors currently find themselves at the early stages of AI adoption. Such technologies are being implemented to facilitate both front-end and back-end processes by a variety of players with the aim of improving the accessibility, customised user engagement, and quality of current financial services. Leading commercial banks in India have all been working to develop and deploy AI technologies either in house or in partnership with small and large-scale tech companies. Such initiatives have seen the deployment of numerous chatbots and humanoid robots for the purposes of customer service. More significant, however, is the use of such technology by banks and fintech actors to facilitate decision making behind the scenes, on a variety of financial issues including but not limited to credit-worthiness, fraud detection, and investments.</p>
<p style="text-align: justify; ">While these sectors are no strangers to the use of big data analytics and similar technologies in aiding with financial decision making and daily operations, the deployment of technologies such as machine learning and natural language processing is still very new. Due to the nascent nature of this phenomenon, little is known about the details of their implications for both producers and consumers. Furthermore, concerns regarding data ownership, liability, and consumer rights have all been raised in light of AI adoption. This roundtable will present us with an opportunity to discuss such issues and begin to fill this knowledge gap.</p>
<p style="text-align: justify; ">For agenda and event brochure <strong><a class="external-link" href="http://cis-india.org/internet-governance/files/ai-and-finance">click here</a>. </strong>For RSVP <a class="external-link" href="https://docs.google.com/forms/d/e/1FAIpQLSd1QFN8a5R3FPPLklDR0XQb1izzGFWzWtAilI5-UNO4EApAFQ/viewform">click here</a>. Read the <a class="external-link" href="http://cis-india.org/internet-governance/files/draft-roundtable-report-on-ai-and-banking">event report here</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/events/roundtable-on-ai-and-finance-in-india'>https://cis-india.org/internet-governance/events/roundtable-on-ai-and-finance-in-india</a>
</p>
saman | Internet Governance | Event | Artificial Intelligence | 2018-03-11T14:58:55Z | Event
Roundtable on A.I. and Manufacturing and Services
https://cis-india.org/internet-governance/events/roundtable-on-ai-and-manufacturing-and-services
<b>The Centre for Internet and Society (CIS), Bangalore is organizing a roundtable on ‘A.I. and Manufacturing and Services’ on the 19th of January, 2018 from 2 to 5 pm at ‘The Energy and Resources Institute’ (TERI) Bangalore. The Roundtable seeks to discuss the various issues and challenges surrounding the implementation of AI and related technologies on manufacturing processes and services.</b>
<p style="text-align: justify; ">Since the Industrial Revolution machines have substituted human labour and helped industries save time and money. This was succeeded by the advent of computers and technology which helped in completing tasks with better speed and accuracy than the human brain. The emergence of machine-learning technology and artificial intelligence has now made machines capable of doing work that was earlier considered to be something that could only be done by humans. From the use of AI in understanding customer shopping trends to its use in making automobiles, AI is becoming more of a norm than an exception. The analytics of how customers shop is now helping companies forecast their manufacturing needs. The synergy of technology and machines i.e. smart manufacturing, not only changes manufacturing and shipping but also improves worker safety. Different forms of smart manufacturing are also starting to come up in India: Wipro and Infosys have launched AI platforms, and the Indian Institute of Science is developing a smart factory with support from Boeing Company and General Electric. Infosys has also released an AI platform, ‘Nia’, which is programmed to forecast revenue and understand customer behaviour.</p>
<p style="text-align: justify; ">The instances of use of machines to substitute human workforce, in some cases, has brought about a sense of worry. Recent trends in factory hiring show that jobs are being lost to automated forms of labour, further evidenced by a report from the research firm HorsesforSources, which predicts that India is set to lose 640,000 low-skilled job positions to automation by the year 2021.The IT sector in India is also under risk from the use of AI. Reports have also found that the rising unemployment in the IT sector has led to increased pressure on labour regulators.</p>
<p style="text-align: justify; ">Although there are some studies that state that the use of AI would bring about a market for people who would need to work along with AI, the FICCI and EY’s 2016 Report on the Future of jobs and its implication on Indian higher education suggests that one of the ways to combat the loss of jobs was reskilling and upskilling the labour force. India has taken the first step towards this by launching the National Skill Development Mission.</p>
<p style="text-align: justify; ">From the use of neural networks to monitor steel plants for packing and shipping groceries, the use of intelligent machines has begun disrupting traditional business models in the industry. However, these advancements raise questions around labour, ethics, liability, and machine-human cooperation. Dialogue and debate are needed to understand how AI is being used in manufacturing, the potential benefits, and challenges of the same, and a way forward that optimizes innovation and protects human rights.</p>
<h2 style="text-align: justify; ">Roundtable Agenda</h2>
<p>Friday 19th January | 2:00 p.m - 5:00 p.m.</p>
<div id="_mcePaste">2:00 - 2:30 Introduction and setting the scene</div>
<div id="_mcePaste">2:30 - 3:30 Discussion on the AI landscape in the manufacturing and services industry:</div>
<div></div>
<ul>
<li>Manner and extent of integration of AI into manufacturing and services</li>
<li>Relevant stakeholders and their roles in implementing AI in manufacturing and services</li>
<li>Future of AI and related technologies in manufacturing and services</li>
<li>Impact on work and labour</li>
</ul>
<p>3:30 - 4:30 Discussion on challenges and solutions towards regulating AI in India:</p>
<ul>
<li>Challenges faced in the conception and implementation of the AI product/ service, and reasons for such challenges.</li>
<li>Regulatory provisions for implementation of AI in manufacturing and services under the existing laws, and need for reforms.</li>
<li>Challenges posed by AI to existing policy and regulatory frameworks in the Indian as well as the global context, and possible solutions.</li>
</ul>
<p>4.30 - 5.00 Conclusion and way forward</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/events/roundtable-on-ai-and-manufacturing-and-services'>https://cis-india.org/internet-governance/events/roundtable-on-ai-and-manufacturing-and-services</a>
</p>
Admin | Internet Governance | Event | Artificial Intelligence | 2018-01-18T13:44:15Z | Event
Roundtable Discussion on “The Future of AI Policy in India” @ ICRIER
https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier
<b>Radhika Radhakrishnan, attended a Roundtable Discussion on “The Future of AI Policy in India” organized by the Indian Council for Research on International Economic Relations (ICRIER) in New Delhi on July 1, 2019, to arrive at actionable recommendations for promotion of AI in India.</b>
<p style="text-align: justify; ">Radhika's inputs primarily focused on - capacity and skilling for AI adoption in India, sectoral opportunities for the adoption of AI, regulation of explanations for AI, fairness and bias in AI models, and actionable recommendations for government priorites for AI policies in India.</p>
<h3 style="text-align: justify; ">Concept Note</h3>
<p style="text-align: justify; ">India’s Artificial Intelligence moment is truly here and now. At a time when a diverse range of applications based on AI are being developed, pushing its frontier further into uncharted realms of business and society, Indian policy makers are contemplating not just AI’s potential for growth and social transformation, but also its proclivity to create divides and inequality. Our study attempts to understand the impacts of AI and trace the pathways to realizing it.</p>
<p style="text-align: justify; ">AI’s transformational potential stems from its ability to lend itself to a diverse range of applications across a range of sectors. One can witness AI based applications in traditional spheres of manufacturing, which are transforming quality control, production lines, and supply chain management, and in services, which are creating personalized product offerings and high-quality customer engagement. AI applications are also common in sectors such as agriculture that have taken a back seat in technological innovations in the post-industrial world. AI also demonstrates potential for impacting developmental challenges by responding to societies’ immediate demand for healthcare, education and expanding access to finance and banking.</p>
<p style="text-align: justify; ">The consequences of AI diffusion stem from AI’s pervasiveness across society, its ability to trigger innovation, and its tendencies to undergo transformation and evolution. These are typical characteristics of a class of technologies that can be found across history, the emergence and diffusion of which have enabled the wealth of nations. These are called General Purpose Technologies (GPT). Technologies such as steam engine, electricity, computers, semi-conductors, and more recently the Internet, can all be conceived as belonging to the GPT class of technologies. Our study is based on the understanding that the implications of AI can be best understood by viewing AI as a GPT.</p>
<p style="text-align: justify; ">Historically, the economic impacts of GPTs have not been immediate but follow after its diffusion across the economy, i.e. over a period of time. There are two reasons that explain this phenomenon: firstly, in early phases of technology diffusion, an economy diverts part of its resources from productive activities to costly activities aimed at enabling the GPT. For instance, organizations adopting computers must also invest in training employees or hire computer scientists, re-arrange production activities or organizational structures to accommodate computer driven work-flows, all of which are costly economic activities. Secondly, it is only after the GPT is diffused and widely used in the economy that the statistics measuring GDP start counting and fully measuring the GPT.</p>
<p style="text-align: justify; ">Empirical research on GPTs such as AI, including ours, means confronting the challenge of measurement. Estimates on the economic impact of AI are bound to be imprecise because data on AI’s adoption is not available or adequately reflected in the data used to compute economic growth, at least not yet. Measuring the economic impact of AI is also difficult because of the magnitude of indirect effects on productivity that GPTs trigger. It is not therefore uncommon that studies on GPTs, while attempting to estimate their economic impacts, also engage in in-depth case studies and historical analysis of its impacts.</p>
<p style="text-align: justify; ">Our findings show unambiguous and positive impacts of AI on firm-level productivity across sectors, although the magnitude of these impacts varies by sector. We complement our findings with case studies covering firms that are developing AI-based applications across a range of sectors, to understand the underlying firm-level capabilities that drive innovation in such applications. Our study points towards high-level policy challenges facing organizations, civil society and government which, when addressed, would enable the full realization of the economic growth triggered by AI.</p>
<p style="text-align: justify; ">However, our conclusions are a step away from actionable policy recommendations. Given your experience with and within India’s AI-based ecosystem, we invite you to deliberate and recommend insights and strategies that can help us arrive at concrete and practicable policy recommendations towards achieving a growth- and welfare-enhancing AI-based ecosystem in India.</p>
<p><b>Proposed Questions for Deliberation</b></p>
<ul>
<li><span>In which sectors do we observe an immediate opportunity for the adoption of AI? What could be the nature of these applications?</span></li>
<li>In which areas of AI development and application is there an immediate opportunity for governments, industry and academia to collaborate?</li>
<li>What should be the Government’s top five priorities in the next one year to catalyse the growth of AI in India?</li>
<li>Which agencies of the Government should be involved in implementing India’s National AI mission, and how?</li>
<li>What aspects of the Government’s capacity require enhancement to adapt to the challenges of a growing Indian AI-based ecosystem?</li>
<li>What measures can the Government take to regulate for AI safety and ethical use of AI?</li>
<li>What are the policy measures that the Government can undertake to safeguard against the consequences of AI based inequality?</li>
</ul>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier'>https://cis-india.org/internet-governance/news/roundtable-discussion-on-201cthe-future-of-ai-policy-in-india201d-icrier</a>
</p>
Admin | Internet Governance | Artificial Intelligence | 2019-07-10 | News Item

Rethinking the intermediary liability regime in India
https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india
<b>The article consolidates some of our broad thematic concerns with the draft amendments to the intermediary liability rules, published by MeitY last December.
</b>
<p>The blog post by Torsha Sarkar was <a class="external-link" href="https://cyberbrics.info/rethinking-the-intermediary-liability-regime-in-india/">published by CyberBRICS</a> on August 12, 2019.</p>
<hr />
<h3 style="text-align: justify; ">Introduction</h3>
<p style="text-align: justify; ">In December 2018, the Ministry of Electronics and Information Technology (“MeitY”) released the Intermediary Liability Guidelines (Amendment) Rules (“the Guidelines”), which would significantly alter the intermediary liability regime in the country. While the Guidelines have drawn a considerable amount of attention and criticism, from the government’s perspective the change was overdue.</p>
<p style="text-align: justify; ">The Indian government has been determined to overhaul the pre-existing safe harbour regime since last year. The draft <a href="https://www.medianama.com/wp-content/uploads/Draft-National-E-commerce-Policy.pdf">version</a> of the e-commerce policy, which was leaked last year, also hinted at similar plans. As the effects of mass dissemination of disinformation, propaganda and hate speech around the world spill over into offline harms, governments have increasingly looked to enact interventionist laws that place more responsibility on intermediaries. India has not been an exception.</p>
<p style="text-align: justify; ">A major source of such harmful and illegal content in India is the popular communications app WhatsApp, despite the company’s enactment of several anti-spam measures over the past few years. Last year, rumours circulating on WhatsApp prompted a series of lynchings. In May, Reuters <a href="https://in.reuters.com/article/india-election-socialmedia-whatsapp/in-india-election-a-14-software-tool-helps-overcome-whatsapp-controls-idINKCN1SL0PZ" rel="noreferrer noopener" target="_blank">reported</a> that clones and software tools were available in the market at minimal cost, allowing politicians and other interested parties to bypass these measures and continue the trend of bulk messaging.</p>
<p style="text-align: justify; ">This series of incidents has made it clear that disinformation is a very real problem, and that the current regulatory framework is not enough to address it. The government’s response, accordingly, has been to introduce the Guidelines. This rationale also finds a place in its preliminary <a href="https://www.meity.gov.in/comments-invited-draft-intermediary-rules" rel="noreferrer noopener" target="_blank">statement of reasons</a>.</p>
<p style="text-align: justify; ">While enactment of such interventionist laws has triggered fresh rounds of debate on free speech and censorship, it would be wrong to say that such laws were completely one-sided, or uncalled for.</p>
<p style="text-align: justify; ">On one hand, automated amplification and online mass circulation of purposeful disinformation, propaganda, of terrorist attack videos, or of plain graphic content, are all problems that the government would concern itself with. On the other hand, several online companies (including <a href="https://www.blog.google/outreach-initiatives/public-policy/oversight-frameworks-content-sharing-platforms/" rel="noreferrer noopener" target="_blank">Google</a>) also seem to be in an uneasy agreement that simple self-regulation of content would not cut it. For better oversight, more engagement with both government and civil society members is needed.</p>
<p style="text-align: justify; ">In March this year, Mark Zuckerberg wrote an <a href="https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html?utm_term=.4d177c66782f" rel="noreferrer noopener" target="_blank">op-ed</a> for the Washington Post, calling for more government involvement in the process of content regulation on its platform. While it would be interesting to consider how Zuckerberg’s view aligns with those similarly placed, it would nevertheless be correct to say that online intermediaries are under more pressure than ever to keep their platforms clean of content that is ‘illegal, harmful, obscene’. And this list only grows.</p>
<p style="text-align: justify; ">That being said, the criticism from several stakeholders is sharp and clear whenever such laws are enacted – be it the ambitious <a href="https://www.ivir.nl/publicaties/download/NetzDG_Tworek_Leerssen_April_2019.pdf" rel="noreferrer noopener" target="_blank">NetzDG</a>, aimed at combating Nazi propaganda, hate speech and fake news, or the controversial new European Copyright Directive, which has been welcomed by journalists but severely critiqued by online content creators and platforms as detrimental to user-generated content.</p>
<p style="text-align: justify; ">Against the backdrop of such conflicting interests in online content moderation, it is useful to examine the Guidelines released by MeitY. In the first portion, we look at certain specific concerns within the rules, while in the second, we push the narrative further to see what an alternative regulatory framework might look like.</p>
<p style="text-align: justify; ">Before we jump to the crux of this discussion, one important disclosure must be made about the underlying ideology of this piece. It would be unrealistic to claim that the internet should be absolutely free from regulation. Swathes of content on child sexual abuse, or terrorist propaganda, or even the hordes of death and rape threats faced by women online are and should be concerns of a civil society. While that is certainly a strong driving force for regulation, this concern should not override the basic considerations for human rights (including freedom of expression). These ideas would be expanded a bit more in the upcoming sections.</p>
<h3 style="text-align: justify; ">Broad, thematic concerns with the Rules</h3>
<h3 style="text-align: justify; ">A uniform mechanism of compliance</h3>
<h3 style="text-align: justify; ">Timelines</h3>
<p style="text-align: justify; ">Rule 3(8) of the Guidelines mandates intermediaries, prompted by <em>a</em> <em>court order or a government notification</em>, to take down content relating to unlawful acts within 24 hours of such notification. In case they fail to do so, the safe harbour applicable to them under section 79 of the Information Technology Act (“the Act”) would cease to apply, and they would be liable. Prior to the amendment, this timeframe was 36 hours.</p>
<p style="text-align: justify; ">There is a visible lack of research that could justify a 24-hour compliance timeline as the optimal framework for <em>all</em> intermediaries, irrespective of the kind of services they provide or the sizes and resources available to them. As the Mozilla Foundation has <a href="https://blog.mozilla.org/netpolicy/2018/07/11/sustainable-policy-solutions-for-illegal-content/" rel="noreferrer noopener" target="_blank">commented</a>, regulation of illegal content online simply cannot be done in a one-size-fits-all approach, nor can <a href="https://blog.mozilla.org/netpolicy/2019/04/10/uk_online-harms/" rel="noreferrer noopener" target="_blank">regulation be made</a> with only the tech incumbents in mind. While platforms like YouTube can comfortably <a href="https://www.bmjv.de/SharedDocs/Pressemitteilungen/DE/2017/03142017_Monitoring_SozialeNetzwerke.html" rel="noreferrer noopener" target="_blank">remove</a> criminally prohibited content within a span of 24 hours, this can still place a large burden on smaller companies, which may not have the necessary resources to comply within this timeframe. A few unintended consequences would arise out of this situation.</p>
<p style="text-align: justify; ">One, sanctions under the Act, which would include both organisational ramifications like website blocking (under section 69A of the Act) and individual liability, would affect smaller intermediaries more than bigger ones. A bigger intermediary like Facebook may be able to withstand a large fine for its failure to control, say, hate speech on its platform. That may not be true for a smaller online marketplace, or even a smaller social media site targeted at a very specific community. This compliance mechanism, accordingly, may simply strengthen the larger companies and eliminate competition from the smaller ones.</p>
<p style="text-align: justify; ">Two, intermediaries, fearing heavy criminal sanctions, would err on the side of removal. This means the decisions involved in determining whether a piece of content is illegal would be hastier and less nuanced. Legitimate speech would also be at risk of censorship, and intermediaries would pay <a href="https://cis-india.org/internet-governance/intermediary-liability-in-india.pdf" rel="noreferrer noopener" target="_blank">less heed</a> to the technical requirements or the correct legal procedures required for content takedown.</p>
<h3 style="text-align: justify; ">Utilization of ‘automated technology’</h3>
<p style="text-align: justify; ">Another place where the Guidelines assume that all intermediaries operating in India are on the same footing is Rule 3(9), which mandates these entities to proactively monitor their platforms for ‘unlawful content’. Aside from the unconstitutionality of this provision, it also assumes that all intermediaries have the requisite resources to actually set up such a tool and operate it successfully. YouTube’s ContentID, which began in 2007, had already seen a whopping <a href="https://www.blog.google/outreach-initiatives/public-policy/protecting-what-we-love-about-internet-our-efforts-stop-online-piracy/" rel="noreferrer noopener" target="_blank">100 million dollar investment by 2018</a>.</p>
<p style="text-align: justify; ">Funnily enough, ContentID is a tool dedicated exclusively to finding copyright violations of rights-holders, and even then it has proven to be far from <a href="https://www.plagiarismtoday.com/2019/01/10/youtubes-copyright-insanity/" rel="noreferrer noopener" target="_blank">infallible</a>. The Guidelines’ sweeping net of ‘unlawful’ content includes far more categories than mere violations of IP rights, and the framework assumes that intermediaries would be able to set up and run an automated tool that filters through <em>all</em> these categories of ‘unlawful content’ in one go.</p>
<h3 style="text-align: justify; ">The problems of AI</h3>
<p style="text-align: justify; ">Aside from the implementation-related concerns, there are also technical challenges associated with Rule 3(9). Supervised learning systems (like the one envisaged under the Guidelines) use training datasets for proactive filtering. This means that if the system is taught that for ten instances of input A the output is B, then the eleventh time it sees A, it will output B. In the language of content filtering, the system would be taught, for example, that nudity is bad. The next time the system encounters nudity in a picture, it automatically flags it as ‘bad’ and violating community standards.</p>
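<p style="text-align: justify; ">As a rough illustration (not a sketch of any production filter), the memorisation described above can be written in a few lines of Python; the feature and label names here are entirely hypothetical:</p>

```python
from collections import Counter, defaultdict

class NaiveContentFilter:
    """A toy 'supervised' filter that simply memorises feature-to-label mappings.

    It predicts whatever label it has most often seen for a feature,
    with no notion of context -- the limitation described in the text.
    """

    def __init__(self):
        # For each feature, count how often each label was seen in training.
        self.observations = defaultdict(Counter)

    def train(self, feature, label):
        self.observations[feature][label] += 1

    def predict(self, feature):
        labels = self.observations.get(feature)
        if not labels:
            return "unknown"
        # Return the most frequently observed label for this feature.
        return labels.most_common(1)[0][0]

# Teach the system ten times that nudity is 'bad'.
f = NaiveContentFilter()
for _ in range(10):
    f.train("nudity", "bad")

# The eleventh time it sees nudity -- even, say, in a historically
# significant photograph -- it still outputs 'bad', blind to context.
print(f.predict("nudity"))  # bad
```

<p style="text-align: justify; ">The point of the toy is that the prediction depends only on past feature–label pairs, so context that never entered the training data cannot influence the output.</p>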
<p style="text-align: justify; "><a href="https://www.theguardian.com/technology/2016/sep/08/facebook-mark-zuckerberg-napalm-girl-photo-vietnam-war" rel="noreferrer noopener" target="_blank">Except, that is not how it should work</a>. For every post under the scrutiny of platform operators, numerous nuances and contextual cues act as mitigating factors, none of which, at this point, would be <a href="https://scholarship.law.nd.edu/cgi/viewcontent.cgi?referer=https://www.google.co.in/&httpsredir=1&article=1704&context=ndlr" rel="noreferrer noopener" target="_blank">understandable</a> by a machine.</p>
<p style="text-align: justify; ">Additionally, the training data used to feed the system <a href="https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf" rel="noreferrer noopener" target="_blank">can be biased</a>. A self-driving car that is fed training data from only one region of the country would learn the customs and driving norms of that particular region, not the patterns needed to drive throughout the country.</p>
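<p style="text-align: justify; ">The effect of a regionally skewed dataset can be sketched with a toy majority-vote “model”; the driving-norm features and labels below are invented purely for illustration:</p>

```python
from collections import Counter

# Hypothetical observations collected almost entirely from one region,
# where drivers at a four-way stop overwhelmingly yield to the right.
training_data = (
    [("four_way_stop", "yield_to_right")] * 95
    + [("four_way_stop", "first_come_first_served")] * 5
)

def majority_label(data, feature):
    """A majority-vote 'model': reproduce whatever label dominates the data."""
    counts = Counter(label for feat, label in data if feat == feature)
    return counts.most_common(1)[0][0]

# Even if "first_come_first_served" is the norm in many other regions,
# the model, trained on one region's data, will never predict it.
print(majority_label(training_data, "four_way_stop"))  # yield_to_right
```

<p style="text-align: justify; ">In other words, the model cannot tell the difference between “this is how driving works” and “this is how driving worked in the slice of the world I was shown”.</p>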
<p style="text-align: justify; ">Lastly, it is not suggested that bias would be completely eliminated if content moderation were undertaken by a human. The difference between a human moderator and an automated one, however, is that there is a measure of accountability with the former. The decision of a human moderator can be disputed, and the moderator has a chance to explain their reasons for the removal. Artificial intelligence (“AI”), by contrast, is characterised by the algorithmic ‘<a href="http://raley.english.ucsb.edu/wp-content/Engl800/Pasquale-blackbox.pdf" rel="noreferrer noopener" target="_blank">black box</a>’ that processes inputs and generates usable outputs. Implementing workable accountability standards for such a system, including appeal and grievance redressal mechanisms in cases of dispute, are all problems the regulator must concern itself with.</p>
<p style="text-align: justify; ">In the absence of any clarity or revision, it seems unlikely that the provision will ever see full implementation. Intermediaries would neither know what kind of ‘automated technology’ they are supposed to use for filtering ‘unlawful content’, nor have any incentive to actually deploy such a system effectively on their platforms.</p>
<h3 style="text-align: justify; ">What can be done?</h3>
<p style="text-align: justify; ">First, more research is needed to understand the effect of compliance timeframes on the accuracy of content takedowns. Several jurisdictions now operate on different compliance timeframes, and regulation would be far more holistic if the government considered the dialogue around each of them and what it means for India.</p>
<p style="text-align: justify; ">Second, it might be useful to consider an independent regulator as an alternative, and as a compromise between pure governmental regulation (which is more or less what the current system is) and self-regulation (which the Guidelines, albeit problematically, also espouse through Rule 3(9)).</p>
<p style="text-align: justify; ">The <a href="https://www.gov.uk/government/consultations/online-harms-white-paper" rel="noreferrer noopener" target="_blank">UK Online Harms White Paper</a>, an important document in the intermediary liability overhaul, proposes an arm’s-length regulator responsible for drafting codes of conduct for online companies and for their enforcement. While the exact merits of the system are still up for debate, the concept of having a separate body to oversee, formulate and possibly <a href="https://medium.com/adventures-in-consumer-technology/regulating-social-media-a-policy-proposal-a2a25627c210" rel="noreferrer noopener" target="_blank">arbitrate</a> disputes regarding content removal is finding traction in several parallel developments.</p>
<p style="text-align: justify; ">One of the Transatlantic Working Group sessions discussed this idea in terms of an ‘<a href="https://medium.com/whither-news/proposals-for-reasonable-technology-regulation-and-an-internet-court-58ac99bec420" rel="noreferrer noopener" target="_blank">internet court</a>’ for illegal content regulation. This would have the noted advantages of a) formulating norms of online content in a transparent, public fashion, something previously done behind the closed doors of either the government or the tech incumbents, and b) having specially trained professionals able to dispose of matters expeditiously.</p>
<p style="text-align: justify; ">India is not unfamiliar with the idea of specialised tribunals or quasi-judicial bodies for dealing with specific challenges. In 2015, for example, the Government of India passed the Commercial Courts Act, under which specific courts were tasked with dealing with commercial disputes of high value. This is neither an isolated instance of the government choosing to create new bodies for dealing with a specific problem, nor would it be inimitable in the future.</p>
<p style="text-align: justify; ">There is no <a href="https://www.thehindubusinessline.com/opinion/resurrecting-the-marketplace-of-ideas/article26313605.ece" rel="noreferrer noopener" target="_blank">silver bullet</a> when it comes to moderating content on the web. However, in light of this parallel convergence of ideas, the appeal of an independent regulatory system as a sane compromise between complete government control and <em>laissez-faire</em> autonomy is worth considering.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india'>https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india</a>
</p>
torsha | Internet Governance | Intermediary Liability | Artificial Intelligence | 2019-08-16 | Blog Entry

Responsible AI Workshop
https://cis-india.org/internet-governance/news/responsible-ai-workshop
<b>Sunil Abraham participated in this meeting organized by Facebook on September 17, 2019 in New Delhi. </b>
<p><a class="external-link" href="http://cis-india.org/internet-governance/files/responsible-ai">Click to view the agenda</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/responsible-ai-workshop'>https://cis-india.org/internet-governance/news/responsible-ai-workshop</a>
</p>
Admin | Internet Governance | Artificial Intelligence | 2019-09-20 | News Item

Practicing Feminist Principles
https://cis-india.org/raw/practicing-feminist-principles
<b>AI can serve to challenge social inequality and dismantle structures of power.</b>
<p style="text-align: justify; "><span>Artificial intelligence systems have been heralded as a tool to purge our systems of social biases, opinions, and behaviour, and produce ‘hard objectivity’. On the contrary, however, it has become evident that AI systems can sharpen inequalities and bias by hard-coding them. If left unattended, automated decision-making can be dangerous and dystopian.</span></p>
<p style="text-align: justify; "><strong>However, when appropriated by feminists, AI can serve to challenge social inequality and dismantle structures of power. There are many routes to such appropriation – resisting authoritarian uses through movement-building and creating our own alternative systems that harness the strength of AI towards achieving social change.</strong></p>
<p style="text-align: justify; "><strong>Feminist principles can be a handy framework to understand and transform the impact of AI systems. Key principles include reflexivity, participation, intersectionality, and working towards structural change.</strong> When operationalised, these principles can be used to enhance the capacities of local actors and institutions working towards developmental goals. They can also be used to theoretically ground collective action against the use of AI systems by institutions of power.</p>
<p style="text-align: justify; "><strong>Reflexivity</strong> in the design and implementation of AI would imply a check on the privilege and power, or lack thereof, of the various stakeholders involved in an ecosystem. By being reflexive, designers can take steps to account for power hierarchies in the process of design. A popular example of the impact of power differentials is in national statistics. Collected largely by male surveyors speaking to male heads of households, national statistics can often undervalue or misrepresent women’s labour and health. See Data2x. “<a class="external-link" href="https://www.data4sdgs.org/sites/default/files/2017-09/Gender%20Data%20-%20Data4SDGs%20Toolbox%20Module.pdf">Gender Data: Sources, Gaps, and Measurement Opportunities</a>,” March 2017 and Statistics Division. “Gender, Statistics and Gender Indicators Developing a Regional Core Set of Gender Statistics and Indicators in Asia and the Pacific.” <a class="external-link" href="https://www.unescap.org/sites/default/files/Framework-and-Indicator-set.pdf">United Nations Economic and Social Commission for Asia and the Pacific, 2013</a>. <span>AI systems would need to be reflexive of such gaps and plan steps to mitigate them.</span></p>
<p style="text-align: justify; "><strong>Participation</strong> as a principle focuses on the process. A participatory process would account for the perspectives and lived experiences of various stakeholders, including those most impacted by its deployment. <strong>In the health ecosystem, for instance, this would include policymakers, public and private healthcare providers, frontline workers, and patients. A health information system with a bottom-up design would account for metrics of success determined by not just high-level organisations such as the World Health Organisation and national governments, but also by providers and frontline workers</strong>. Among other benefits, participation in designing AI systems also leads to buy-in and ownership of the technology right at the outset, promoting widespread adoption.</p>
<p style="text-align: justify; "><strong>Intersectionality</strong> calls for addressing the social difference in the datasets, design, and deployment of AI. <strong>Research across fields has shown the perpetuation of inequality based on gender, income, race, and other characteristics through AI that is based on biased datasets.</strong></p>
<p style="text-align: justify; ">The most critical principle is to ensure that AI systems work to challenge inequality, including inequality perpetuated by patriarchal, racist, and capitalist systems. Aligning with feminist objectives means that systems whose objectives conflict with feminist goals – such as those that enhance state capacities to surveil and police – would immediately be excluded. Systems designed to exclude and oppress will not further feminist goals, even if they integrate other progressive elements such as intersectional datasets or dynamic consent architecture (which would allow users to opt in and out easily).</p>
<p style="text-align: justify; ">We must work towards decreasing social inequality and achieving egalitarian outcomes in and through its practice. Thus, while explicitly feminist projects such as those that produce better datasets or advocate for participatory mechanisms are of course practicing this principle, I would argue that it is also practiced by any project that furthers feminist goals. Take for example AI projects that aim to reduce hate speech and misinformation online. Given that women and other marginalised groups are often at the receiving end of violence, such work can be classified as feminist even if it doesn’t actively target gender-based violence.</p>
<p style="text-align: justify; ">All technology is embedded in social relations. Practicing feminist principles in the design of AI only serves to account for these social relations and design better, more robust systems. <strong>Feminist practitioners can mobilise these to ensure a future of AI with inclusive, community-owned, participatory systems, combined with collective challenges to systems of domination.</strong></p>
<hr />
<h3>References</h3>
<p>Haraway, Donna. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” Feminist Studies 14, no. 3 (1988): 575–99. https://doi.org/10.2307/3178066.</p>
<p>Link to the original article <a class="external-link" href="https://feministai.pubpub.org/pub/practicing-feminist-principles/release/1?readingCollection=c218d365">here</a></p>
<p>
For more details visit <a href='https://cis-india.org/raw/practicing-feminist-principles'>https://cis-india.org/raw/practicing-feminist-principles</a>
</p>
ambika | Gender, Welfare, and Privacy | CISRAW | Researchers at Work | Artificial Intelligence | 2021-12-07 | Blog Entry

Policy Lab on Artificial Intelligence & Democracy
https://cis-india.org/telecom/news/policy-lab-on-artificial-intelligence-democracy
<b>Shweta Mohandas participated in a policy lab on Artificial Intelligence & Democracy in India organised by Tandem Research, in partnership with Microsoft Research and Friedrich-Ebert-Stiftung on 2 & 3 April, 2019, in Bangalore.
</b>
<p>
For more details visit <a href='https://cis-india.org/telecom/news/policy-lab-on-artificial-intelligence-democracy'>https://cis-india.org/telecom/news/policy-lab-on-artificial-intelligence-democracy</a>
</p>
Admin | Internet Governance | Artificial Intelligence | 2019-04-12 | News Item

Policies for the Platform Economy
https://cis-india.org/internet-governance/news/policies-for-the-platform-economy
<b>Anubha Sinha and Amber Sinha will be panelists in this event being organized by IT for Change at India Habitat Centre in New Delhi on August 30, 2019. </b>
<p>The agenda for the event <a class="external-link" href="http://cis-india.org/internet-governance/files/agenda-for-policies-for-the-platform-economy">is here</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/policies-for-the-platform-economy'>https://cis-india.org/internet-governance/news/policies-for-the-platform-economy</a>
</p>
Admin | Internet Governance | Artificial Intelligence | 2019-08-27 | News Item