Practicing Feminist Principles
AI can serve to challenge social inequality and dismantle structures of power.
Artificial intelligence systems have been heralded as tools that can purge decision-making of social biases, opinions, and behaviour, producing ‘hard objectivity’. On the contrary, it has become evident that AI systems can sharpen inequality and bias by hard-coding them. If left unattended, automated decision-making can be dangerous and dystopian.
When appropriated by feminists, however, AI can serve to challenge social inequality and dismantle structures of power. There are many routes to such appropriation – from resisting authoritarian uses through movement-building to creating our own alternative systems that harness the strength of AI for social change.
Feminist principles offer a handy framework for understanding and transforming the impact of AI systems. Key principles include reflexivity, participation, intersectionality, and working towards structural change. When operationalised, these principles can enhance the capacities of local actors and institutions working towards developmental goals. They can also theoretically ground collective action against the use of AI systems by institutions of power.
Reflexivity in the design and implementation of AI would imply a check on the privilege and power, or lack thereof, of the various stakeholders in an ecosystem. By being reflexive, designers can take steps to account for power hierarchies in the process of design. A popular example of the impact of power differentials is national statistics: collected largely by male surveyors speaking to male heads of households, they can undervalue or misrepresent women’s labour and health (see Data2X, “Gender Data: Sources, Gaps, and Measurement Opportunities,” March 2017; and Statistics Division, “Gender, Statistics and Gender Indicators: Developing a Regional Core Set of Gender Statistics and Indicators in Asia and the Pacific,” United Nations Economic and Social Commission for Asia and the Pacific, 2013). AI systems would need to be reflexive of such gaps and plan steps to mitigate them.
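One way to operationalise such reflexivity is a representation audit before a dataset is used for training: compare who actually appears in the data against an external benchmark such as a census. The Python sketch below is a minimal illustration only; the column name, benchmark shares, and tolerance threshold are assumptions invented for this example, not a standard method.

```python
import pandas as pd

# Hypothetical benchmark shares, e.g. from a census (illustrative values).
BENCHMARK = {"women": 0.50, "men": 0.50}

def representation_audit(df, column="respondent_gender", tolerance=0.05):
    """Flag groups whose share of the data diverges from the benchmark."""
    observed = df[column].value_counts(normalize=True)
    report = {}
    for group, expected in BENCHMARK.items():
        share = float(observed.get(group, 0.0))
        report[group] = {
            "observed": round(share, 2),
            "expected": expected,
            # Flag under- or over-representation beyond the tolerance.
            "flagged": abs(share - expected) > tolerance,
        }
    return report

# A survey where most respondents were men, echoing the
# male-surveyor/male-head-of-household dynamic described above.
survey = pd.DataFrame({"respondent_gender": ["men"] * 80 + ["women"] * 20})
print(representation_audit(survey))
# {'women': {'observed': 0.2, 'expected': 0.5, 'flagged': True}, ...}
```

An audit like this does not fix a skewed dataset, but it surfaces the gap early enough for designers to plan mitigation, such as targeted data collection.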
Participation as a principle focuses on the process. A participatory process accounts for the perspectives and lived experiences of various stakeholders, including those most impacted by a system’s deployment. In the health ecosystem, for instance, this would include policymakers, public and private healthcare providers, frontline workers, and patients. A health information system with a bottom-up design would account for metrics of success determined not just by high-level organisations such as the World Health Organization and national governments, but also by providers and frontline workers, as the sketch below illustrates. Among other benefits, participation in designing AI systems also builds buy-in and ownership of the technology right at the outset, promoting widespread adoption.
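To make “bottom-up metrics” concrete, one could imagine a registry in which each stakeholder group contributes its own definition of success, and the system reports on all of them rather than only those set by high-level institutions. This is a hypothetical sketch; the stakeholder names, metric names, and report fields are invented for illustration.

```python
from typing import Callable, Dict

MetricFn = Callable[[dict], float]
METRICS: Dict[str, MetricFn] = {}

def register_metric(stakeholder: str, name: str):
    """Decorator letting any stakeholder group contribute a success metric."""
    def wrap(fn: MetricFn) -> MetricFn:
        METRICS[f"{stakeholder}/{name}"] = fn
        return fn
    return wrap

@register_metric("national_government", "immunisation_coverage")
def coverage(report: dict) -> float:
    return report["children_immunised"] / report["children_total"]

@register_metric("frontline_workers", "data_entry_burden")
def burden(report: dict) -> float:
    # Frontline workers may define success as spending *less* time on forms.
    return report["hours_on_forms"] / report["hours_worked"]

monthly_report = {"children_immunised": 90, "children_total": 100,
                  "hours_on_forms": 6, "hours_worked": 40}
print({name: fn(monthly_report) for name, fn in METRICS.items()})
```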
Intersectionality calls for addressing social difference in the datasets, design, and deployment of AI. Research across fields has shown that AI built on biased datasets perpetuates inequality based on gender, income, race, and other characteristics.
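In practice, this means evaluating a system on intersecting subgroups rather than on aggregate metrics alone. A minimal sketch, assuming a binary classifier and hypothetical gender and income columns:

```python
import pandas as pd

def disaggregated_rates(df, outcome="approved", by=("gender", "income_band")):
    """Positive-outcome rate for each intersecting subgroup.

    Large gaps between subgroups (e.g. low-income women vs. high-income men)
    signal encoded inequality worth investigating before deployment.
    """
    return df.groupby(list(by))[outcome].mean()

# Entirely made-up decisions from a hypothetical loan-approval model.
decisions = pd.DataFrame({
    "gender":      ["woman", "woman", "man", "man",  "woman", "man"],
    "income_band": ["low",   "high",  "low", "high", "low",   "low"],
    "approved":    [0,       1,       1,     1,      0,       1],
})
print(disaggregated_rates(decisions))
```

An aggregate approval rate would hide exactly the pattern this view exposes: the model approving everyone except low-income women.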
The most critical principle is to ensure that AI systems work to challenge inequality, including inequality perpetuated by patriarchal, racist, and capitalist systems. Aligning with feminist objectives means that systems whose goals run counter to them – such as those that enhance state capacities to surveil and police – would immediately be excluded. Systems designed to exclude and oppress will not further feminist goals, even if they integrate other progressive elements such as intersectional datasets or dynamic consent architecture (which would allow users to opt in and out easily).
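For concreteness, dynamic consent can be read architecturally as a per-user, per-purpose consent record that is opt-in by default and revocable at any time, with all processing gated on its current state. The sketch below is one possible shape for such a record; the class and field names are illustrative, not a standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Tracks a user's consent per processing purpose, revocable at any time."""
    user_id: str
    grants: dict = field(default_factory=dict)   # purpose -> granted (bool)
    history: list = field(default_factory=list)  # audit trail of changes

    def set_consent(self, purpose: str, granted: bool) -> None:
        self.grants[purpose] = granted
        self.history.append((datetime.now(timezone.utc), purpose, granted))

    def allows(self, purpose: str) -> bool:
        # Default to no consent: processing is opt-in, never opt-out-by-default.
        return self.grants.get(purpose, False)

record = ConsentRecord(user_id="u-123")
record.set_consent("model_training", True)   # user opts in
record.set_consent("model_training", False)  # and can easily opt out again
assert not record.allows("model_training")
```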
We must work towards decreasing social inequality and achieving egalitarian outcomes in and through the practice of AI. Thus, while explicitly feminist projects such as those that produce better datasets or advocate for participatory mechanisms are of course practicing this principle, I would argue that it is also practiced by any project that furthers feminist goals. Take, for example, AI projects that aim to reduce hate speech and misinformation online. Given that women and other marginalised groups are often at the receiving end of such violence, this work can be classified as feminist even if it doesn’t actively target gender-based violence.
All technology is embedded in social relations. Practicing feminist principles in the design of AI simply accounts for these social relations and produces better, more robust systems. Feminist practitioners can mobilise these principles to ensure a future of AI with inclusive, community-owned, participatory systems, combined with collective challenges to systems of domination.