
The Missing Data: Reimagining AI Systems to Challenge Structural Silences

[AI-generated image: portraits in a grid, with some missing]

This project aims to challenge the structural erasures embedded in AI by surfacing what is too often ignored: marginalised voices, invisible labour, and environmental harm. Rooted in intersectional feminist justice, it reimagines how AI systems are built: not as tools designed for communities, but with them. The goal is to support more inclusive, accountable, and sustainable futures in which those most affected by technology help shape its development.

Project description

This project frames “missing data” in AI as a structural problem rather than a mere technical oversight. Omissions in data reflect systemic erasures embedded in how AI systems are developed, deployed, and evaluated, shaping not only technical outcomes but also reinforcing broader patterns of inequality and harm.

The project aims to deliver exploratory tools, prototype frameworks, and community partnerships that lay the groundwork for a longer-term research agenda. It will develop a proof-of-concept large language model (LLM)-based toolkit for detecting patterns of omission, a beta version of a participatory storytelling platform co-designed with AI workers, and a draft framework for environmental impact assessment.

Research focus

The work centres on three interrelated and under-recognised dimensions of missing data and harm in AI systems.

Systemic silence

The project addresses the absence of data in domains shaped by intersecting systems of oppression, such as gender-based violence, reproductive health, unpaid care work, and racialised labour. In these domains, systemic silence undermines recognition and justice.

Invisible labour

It focuses on the invisibility of the labour that sustains AI systems: work often carried out under exploitative conditions by racialised and migrant women in the Global South, yet largely unacknowledged in mainstream narratives about AI.

Occlusion of AI’s environmental impacts

It tackles the occlusion of AI's environmental impacts, including extractive resource use, energy consumption, and ecological degradation: costs that disproportionately affect marginalised communities and are rarely accounted for in conventional sustainability metrics.

Researchers

Amir H. Payberah is an Associate Professor of Computer Science at KTH Royal Institute of Technology. He currently serves as the programme director of the ICT doctoral programme at EECS and leads the WASP cluster on Legal, Ethical, and Societal Aspects of AI. Amir's research focuses on equity and justice in AI, particularly in large language models.


Lina Rahm is an Assistant Professor in the History of Media and Environment with a specialisation in Artificial Intelligence and Autonomous Systems.

Belongs to: InspireLab
Last changed: Jun 07, 2025